Impact of the time scale of model sensitivity response on coupled model parameter estimation
NASA Astrophysics Data System (ADS)
Liu, Chang; Zhang, Shaoqing; Li, Shan; Liu, Zhengyu
2017-11-01
That a model has sensitivity responses to parameter uncertainties is a key concept in implementing model parameter estimation using filtering theory and methodology. Depending on the nature of the associated physics and the characteristic variability of the fluid in a coupled system, the response time scales of a model to parameters can differ, from hourly to decadal. Unlike state estimation, where the update frequency is usually linked with the observational frequency, the update frequency for parameter estimation must be associated with the time scale of the model sensitivity response to the parameter being estimated. Here, with a simple coupled model, the impact of model sensitivity response time scales on coupled model parameter estimation is studied. The model includes characteristic synoptic to decadal scales by coupling a long-term varying deep ocean with a slowly varying upper ocean forced by a chaotic atmosphere. Results show that, using the update frequency determined by the model sensitivity response time scale, both the reliability and quality of parameter estimation can be improved significantly, and thus the estimated parameters make the model more consistent with the observations. These simple model results provide a guideline for when real observations are used to optimize the parameters in a coupled general circulation model for improving climate analysis and prediction initialization.
NASA Astrophysics Data System (ADS)
Wright, Ashley J.; Walker, Jeffrey P.; Pauwels, Valentijn R. N.
2017-08-01
Floods are devastating natural hazards. To provide accurate, precise, and timely flood forecasts, there is a need to understand the uncertainties associated with an entire rainfall time series, even when rainfall was not observed. The estimation of an entire rainfall time series and model parameter distributions from streamflow observations in complex dynamic catchments adds skill to current areal rainfall estimation methods, allows the uncertainty of the entire rainfall input time series to be considered when estimating model parameters, and provides the ability to improve rainfall estimates from poorly gauged catchments. Current methods to estimate entire rainfall time series from streamflow records are unable to adequately invert complex nonlinear hydrologic systems. This study aims to explore the use of wavelets in the estimation of rainfall time series from streamflow records. Using the Discrete Wavelet Transform (DWT) to reduce rainfall dimensionality for the catchment of Warwick, Queensland, Australia, it is shown that model parameter distributions and an entire rainfall time series can be estimated. Including rainfall in the estimation process improves streamflow simulations by a factor of up to 1.78. This is achieved while estimating an entire rainfall time series, inclusive of days when none was observed. It is shown that the choice of wavelet can have a considerable impact on the robustness of the inversion. Combining the use of a likelihood function that considers rainfall and streamflow errors with the use of the DWT as a model data reduction technique allows the joint inference of hydrologic model parameters along with rainfall.
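A minimal sketch of the dimensionality-reduction idea described above, using the PyWavelets library on synthetic rainfall rather than the Warwick catchment data; the wavelet choice (db4), decomposition level, and number of retained coefficients are illustrative assumptions, not values from the study.

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
rain = rng.gamma(shape=0.3, scale=8.0, size=512)       # synthetic daily rainfall (mm)
rain[rng.random(512) < 0.7] = 0.0                      # most days are dry

coeffs = pywt.wavedec(rain, wavelet="db4", level=5)    # multilevel DWT
flat, slices = pywt.coeffs_to_array(coeffs)

k = 64                                                 # retain only the 64 largest coefficients
thresh = np.sort(np.abs(flat))[-k]
flat_reduced = np.where(np.abs(flat) >= thresh, flat, 0.0)

coeffs_reduced = pywt.array_to_coeffs(flat_reduced, slices, output_format="wavedec")
rain_approx = pywt.waverec(coeffs_reduced, wavelet="db4")[: rain.size]

print("retained coefficients:", k, "of", flat.size)
print("RMSE of reconstruction:", np.sqrt(np.mean((rain - rain_approx) ** 2)))
```

In an inversion setting, only the retained coefficients would be treated as unknowns, which is how the DWT reduces the dimensionality of the rainfall estimation problem.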
Time prediction of failure a type of lamps by using general composite hazard rate model
NASA Astrophysics Data System (ADS)
Riaman; Lesmana, E.; Subartini, B.; Supian, S.
2018-03-01
This paper discusses estimation of a basic survival model to obtain the predicted average failure time of a type of lamp. The estimate is for a parametric model, the general composite hazard rate model. The underlying random time variable is modelled with an exponential distribution as the baseline, which has a constant hazard function. We discuss an example of survival model estimation for a composite hazard function, using an exponential model as its basis. The model is estimated by estimating its parameters through the construction of the survival function and the empirical cumulative distribution function. The fitted model is then used to predict the average failure time for the lamp type. The data are grouped into several intervals with the average failure value on each interval, and the average failure time of the model is calculated for each interval; the p-value obtained from the goodness-of-fit test is 0.3296.
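The following is a hedged illustration of the exponential (constant-hazard) baseline mentioned above, fitted to hypothetical lamp lifetimes; it is not the authors' composite hazard model or their data.

```python
import numpy as np

lifetimes_hours = np.array([920., 1130., 640., 1510., 880., 1270., 760., 1040.])  # hypothetical data

# MLE for an exponential model: hazard rate = 1 / (mean lifetime)
lam_hat = 1.0 / lifetimes_hours.mean()
mean_failure_time = 1.0 / lam_hat

def survival(t, lam):
    """S(t) = exp(-lam * t) for the exponential baseline."""
    return np.exp(-lam * t)

print(f"estimated hazard rate: {lam_hat:.5f} per hour")
print(f"predicted mean failure time: {mean_failure_time:.1f} hours")
print(f"P(lamp survives 1000 h): {survival(1000.0, lam_hat):.3f}")
```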
Dynamic Modeling from Flight Data with Unknown Time Skews
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
2016-01-01
A method for estimating dynamic model parameters from flight data with unknown time skews is described and demonstrated. The method combines data reconstruction, nonlinear optimization, and equation-error parameter estimation in the frequency domain to accurately estimate both dynamic model parameters and the relative time skews in the data. Data from a nonlinear F-16 aircraft simulation with realistic noise, instrumentation errors, and arbitrary time skews were used to demonstrate the approach. The approach was further evaluated using flight data from a subscale jet transport aircraft, where the measured data were known to have relative time skews. Comparison of modeling results obtained from time-skewed and time-synchronized data showed that the method accurately estimates both dynamic model parameters and relative time skew parameters from flight data with unknown time skews.
Exploratory Study for Continuous-time Parameter Estimation of Ankle Dynamics
NASA Technical Reports Server (NTRS)
Kukreja, Sunil L.; Boyle, Richard D.
2014-01-01
Recently, a parallel pathway model to describe ankle dynamics was proposed. This model provides a relationship between ankle angle and net ankle torque as the sum of a linear and a nonlinear contribution. A technique to identify parameters of this model in discrete time has been developed. However, these parameters are a nonlinear combination of the continuous-time physiology, making insight into the underlying physiology impossible. The stable and accurate estimation of continuous-time parameters is critical for accurate disease modeling, clinical diagnosis, robotic control strategies, development of optimal exercise protocols for long-term space exploration, sports medicine, etc. This paper explores the development of a system identification technique to estimate the continuous-time parameters of ankle dynamics. The effectiveness of this approach is assessed via simulation of a continuous-time model of ankle dynamics with typical parameters found in clinical studies. The results show that although this technique improves estimates, it does not provide robust estimates of continuous-time parameters of ankle dynamics. We therefore conclude that alternative modeling strategies and more advanced estimation techniques should be considered in future work.
NASA Astrophysics Data System (ADS)
Okutani, Iwao; Mitsui, Tatsuro; Nakada, Yusuke
This paper puts forward neuron-type models, i.e., a neural network model, a wavelet neuron model, and a three-layered wavelet neuron model (WV3), for estimating travel time between signalized intersections in order to facilitate adaptive setting of traffic signal parameters such as green time and offset. Model validation tests using simulated data reveal that, compared to the other models, the WV3 model learns very quickly and produces more accurate estimates of travel time. It is also shown that up-link information obtainable from optical beacons, i.e., the travel time observed during the previous cycle in this case, is a crucial input variable: when up-link information is used as an input, there is no substantial difference between the changes of estimated and simulated travel time as green time or offset changes, whereas a large discrepancy appears between them when it is not used.
The fossilized birth–death process for coherent calibration of divergence-time estimates
Heath, Tracy A.; Huelsenbeck, John P.; Stadler, Tanja
2014-01-01
Time-calibrated species phylogenies are critical for addressing a wide range of questions in evolutionary biology, such as those that elucidate historical biogeography or uncover patterns of coevolution and diversification. Because molecular sequence data are not informative on absolute time, external data—most commonly, fossil age estimates—are required to calibrate estimates of species divergence dates. For Bayesian divergence time methods, the common practice for calibration using fossil information involves placing arbitrarily chosen parametric distributions on internal nodes, often disregarding most of the information in the fossil record. We introduce the “fossilized birth–death” (FBD) process—a model for calibrating divergence time estimates in a Bayesian framework, explicitly acknowledging that extant species and fossils are part of the same macroevolutionary process. Under this model, absolute node age estimates are calibrated by a single diversification model and arbitrary calibration densities are not necessary. Moreover, the FBD model allows for inclusion of all available fossils. We performed analyses of simulated data and show that node age estimation under the FBD model results in robust and accurate estimates of species divergence times with realistic measures of statistical uncertainty, overcoming major limitations of standard divergence time estimation methods. We used this model to estimate the speciation times for a dataset composed of all living bears, indicating that the genus Ursus diversified in the Late Miocene to Middle Pliocene. PMID:25009181
NASA Astrophysics Data System (ADS)
Muchlisoh, Siti; Kurnia, Anang; Notodiputro, Khairil Anwar; Mangku, I. Wayan
2016-02-01
Labor force surveys based on a rotating panel design have been carried out over time in many countries, including Indonesia. The labor force survey in Indonesia is conducted regularly by Statistics Indonesia (Badan Pusat Statistik, BPS) and is known as the National Labor Force Survey (Sakernas). The main purpose of Sakernas is to obtain information about unemployment rates and their changes over time. Sakernas is a quarterly survey designed to estimate parameters only at the provincial level. The quarterly unemployment rate published by BPS (official statistics) is calculated using only cross-sectional methods, despite the fact that the data are collected under a rotating panel design. This study estimates the quarterly unemployment rate at the district level using a small area estimation (SAE) model that combines time series and cross-sectional data. The study focuses on the application and comparison of the Rao-Yu model and a dynamic model for estimating the unemployment rate from a rotating panel survey. The goodness of fit of the two models was almost identical. Both models produced similar estimates that were better than the direct estimates, but the dynamic model was more capable than the Rao-Yu model of capturing heterogeneity across areas, although this was reduced over time.
Estimating Real-Time Zenith Tropospheric Delay over Africa Using IGS-RTS Products
NASA Astrophysics Data System (ADS)
Abdelazeem, M.
2017-12-01
Zenith Tropospheric Delay (ZTD) is a crucial parameter for atmospheric modeling, severe weather monitoring, and forecasting applications. Currently, the International Global Navigation Satellite System (GNSS) Service real-time service (IGS-RTS) products are used extensively in real-time atmospheric modeling applications. The objective of this study is to develop a real-time zenith tropospheric delay estimation model over Africa using the IGS-RTS products. The real-time ZTDs are estimated based on the real-time precise point positioning (PPP) solution. GNSS observations from a number of reference stations are processed over a period of 7 days. Then, the estimated real-time ZTDs are compared with their IGS tropospheric product counterparts. The findings indicate that the estimated real-time ZTDs have millimeter-level accuracy in comparison with the IGS counterparts.
Strum, David P; May, Jerrold H; Sampson, Allan R; Vargas, Luis G; Spangler, William E
2003-01-01
Variability inherent in the duration of surgical procedures complicates surgical scheduling. Modeling the duration and variability of surgeries might improve time estimates. Accurate time estimates are important operationally to improve utilization, reduce costs, and identify surgeries that might be considered outliers. Surgeries with multiple procedures are difficult to model because they are difficult to segment into homogeneous groups and because they are performed less frequently than single-procedure surgeries. The authors studied, retrospectively, 10,740 surgeries each with exactly two CPT (Current Procedural Terminology) codes and 46,322 surgical cases with only one CPT code from a large teaching hospital to determine whether the distribution of dual-procedure surgery times more closely fits a lognormal or a normal model. The authors tested model goodness of fit to their data using Shapiro-Wilk tests, studied factors affecting the variability of time estimates, and examined the impact of coding permutations (ordered combinations) on modeling. The Shapiro-Wilk tests indicated that the lognormal model is statistically superior to the normal model for modeling dual-procedure surgeries. Permutations of component codes did not appear to differ significantly with respect to total procedure time and surgical time. To improve individual models for infrequent dual-procedure surgeries, permutations may be reduced and estimates may be based on the longest component procedure and type of anesthesia. The authors recommend use of the lognormal model for estimating surgical times for surgeries with two component procedures. Their results help legitimize the use of log transforms to normalize surgical procedure times prior to hypothesis testing using linear statistical models. Multiple-procedure surgeries may be modeled using the longest (statistically most important) component procedure and type of anesthesia.
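A small sketch of the kind of lognormal-versus-normal check described above: Shapiro-Wilk tests on raw and log-transformed durations, using SciPy and synthetic surgery times (the hospital data are not reproduced).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
durations_min = rng.lognormal(mean=np.log(120), sigma=0.45, size=200)  # hypothetical surgery times

w_raw, p_raw = stats.shapiro(durations_min)            # normality test on raw times
w_log, p_log = stats.shapiro(np.log(durations_min))    # normality test on log-transformed times

print(f"raw times: W={w_raw:.3f}, p={p_raw:.4f}")
print(f"log times: W={w_log:.3f}, p={p_log:.4f}")
# A markedly higher p-value (less evidence against normality) for the log-transformed
# times supports modeling the durations as lognormal.
```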
Score Estimating Equations from Embedded Likelihood Functions under Accelerated Failure Time Model
NING, JING; QIN, JING; SHEN, YU
2014-01-01
The semiparametric accelerated failure time (AFT) model is one of the most popular models for analyzing time-to-event outcomes. One appealing feature of the AFT model is that the observed failure time data can be transformed to independent and identically distributed random variables without covariate effects. We describe a class of estimating equations based on the score functions for the transformed data, which are derived from the full likelihood function under commonly used semiparametric models such as the proportional hazards or proportional odds model. The methods of estimating regression parameters under the AFT model can be applied to traditional right-censored survival data as well as more complex time-to-event data subject to length-biased sampling. We establish the asymptotic properties and evaluate the small sample performance of the proposed estimators. We illustrate the proposed methods through applications in two examples. PMID:25663727
Strategies for Near Real Time Estimates of Precipitable Water Vapor from GPS Ground Receivers
NASA Technical Reports Server (NTRS)
Y., Bar-Sever; Runge, T.; Kroger, P.
1995-01-01
GPS-based estimates of precipitable water vapor (PWV) may be useful in numerical weather models to improve short-term weather predictions. To be effective in numerical weather prediction models, GPS PWV estimates must be produced with sufficient accuracy in near real time. Several estimation strategies for the near real time processing of GPS data are investigated.
Real-Time Parameter Estimation in the Frequency Domain
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
2000-01-01
A method for real-time estimation of parameters in a linear dynamic state-space model was developed and studied. The application is aircraft dynamic model parameter estimation from measured data in flight. Equation error in the frequency domain was used with a recursive Fourier transform for the real-time data analysis. Linear and nonlinear simulation examples and flight test data from the F-18 High Alpha Research Vehicle were used to demonstrate that the technique produces accurate model parameter estimates with appropriate error bounds. Parameter estimates converged in less than one cycle of the dominant dynamic mode, using no a priori information, with control surface inputs measured in flight during ordinary piloted maneuvers. The real-time parameter estimation method has low computational requirements and could be implemented onboard an aircraft in real time.
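A rough sketch of frequency-domain, equation-error estimation with a running (recursive) Fourier transform, applied to a simulated first-order system rather than F-18 flight data; the system, input, and analysis band are assumptions for illustration, and the state derivative is obtained here by numerical differentiation for simplicity.

```python
import numpy as np

dt = 0.02
t = np.arange(0.0, 20.0, dt)
u = np.sign(np.sin(0.7 * t))                       # piecewise-constant test input
a_true, b_true = -1.5, 2.0                         # true model: xdot = a*x + b*u
x = np.zeros_like(t)
for i in range(1, t.size):                         # Euler simulation of the "measured" state
    x[i] = x[i - 1] + dt * (a_true * x[i - 1] + b_true * u[i - 1])
xdot = np.gradient(x, dt)                          # state rate by numerical differentiation

freqs_hz = np.arange(0.1, 1.6, 0.1)                # analysis frequencies
omega = 2.0 * np.pi * freqs_hz
X = np.zeros(omega.size, dtype=complex)
U = np.zeros(omega.size, dtype=complex)
Xd = np.zeros(omega.size, dtype=complex)
for i, ti in enumerate(t):                         # recursive (sample-by-sample) Fourier transform
    e = np.exp(-1j * omega * ti) * dt
    X += x[i] * e
    U += u[i] * e
    Xd += xdot[i] * e

# Equation error in the frequency domain: Xd = a*X + b*U, solved by complex least squares
A = np.column_stack([X, U])
theta, *_ = np.linalg.lstsq(A, Xd, rcond=None)
print("estimated (a, b):", np.round(theta.real, 3))  # should be close to (-1.5, 2.0)
```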
Estimation of Time-Varying Pilot Model Parameters
NASA Technical Reports Server (NTRS)
Zaal, Peter M. T.; Sweet, Barbara T.
2011-01-01
Human control behavior is rarely completely stationary over time due to fatigue or loss of attention. In addition, there are many control tasks for which human operators need to adapt their control strategy to vehicle dynamics that vary in time. In previous studies on the identification of time-varying pilot control behavior wavelets were used to estimate the time-varying frequency response functions. However, the estimation of time-varying pilot model parameters was not considered. Estimating these parameters can be a valuable tool for the quantification of different aspects of human time-varying manual control. This paper presents two methods for the estimation of time-varying pilot model parameters, a two-step method using wavelets and a windowed maximum likelihood estimation method. The methods are evaluated using simulations of a closed-loop control task with time-varying pilot equalization and vehicle dynamics. Simulations are performed with and without remnant. Both methods give accurate results when no pilot remnant is present. The wavelet transform is very sensitive to measurement noise, resulting in inaccurate parameter estimates when considerable pilot remnant is present. Maximum likelihood estimation is less sensitive to pilot remnant, but cannot detect fast changes in pilot control behavior.
Estimating the age-specific duration of herpes zoster vaccine protection: a matter of model choice?
Bilcke, Joke; Ogunjimi, Benson; Hulstaert, Frank; Van Damme, Pierre; Hens, Niel; Beutels, Philippe
2012-04-05
The estimation of herpes zoster (HZ) vaccine efficacy by time since vaccination and age at vaccination is crucial to assess the effectiveness and cost-effectiveness of HZ vaccination. Published estimates for the duration of protection from the vaccine diverge substantially, although based on data from the same trial for a follow-up period of 5 years. Different models were used to obtain these estimates, but it is unclear which of these models is most appropriate (if any). Only one study estimated vaccine efficacy by age at vaccination and time since vaccination combined. Recently, data became available from the same trial for a follow-up period of 7 years. We aim to elaborate on estimating HZ vaccine efficacy (1) by estimating it as a function of time since vaccination and age at vaccination, (2) by comparing the fits of a range of models, and (3) by fitting these models on data for a follow-up period of 5 and 7 years. Although the models' fits to the data are very comparable, they differ substantially in how they estimate vaccine efficacy to change as a function of time since vaccination and age at vaccination. An accurate estimation of HZ vaccine efficacy by time since vaccination and age at vaccination is hampered by the lack of insight into the biological processes underlying HZ vaccine protection, and by the fact that such data are currently not available in sufficient detail. Uncertainty about the choice of model to estimate this important parameter should be acknowledged in cost-effectiveness analyses.
Path Flow Estimation Using Time Varying Coefficient State Space Model
NASA Astrophysics Data System (ADS)
Jou, Yow-Jen; Lan, Chien-Lun
2009-08-01
The dynamic path flow information is crucial in the field of transportation operation and management, i.e., dynamic traffic assignment, scheduling plans, and signal timing. Time-dependent path information, which is important in many respects, is nearly impossible to obtain directly. Consequently, researchers have been seeking estimation methods for deriving valuable path flow information from less expensive traffic data, primarily the link traffic counts of surveillance systems. This investigation considers a path flow estimation problem involving a time varying coefficient state space model, the Gibbs sampler, and the Kalman filter. Numerical examples based on part of the real Taipei Mass Rapid Transit network with real O-D matrices are presented to assess the accuracy of the proposed model. Results of this study show that the time-varying coefficient state space model is very effective in the estimation of path flow compared to the time-invariant model.
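For orientation, a minimal scalar Kalman filter for a time-varying coefficient state-space model on synthetic data; the Gibbs-sampling step and the network path-flow structure of the actual method are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 200
beta_true = 1.0 + np.cumsum(rng.normal(0, 0.05, T))   # slowly varying coefficient
h = rng.uniform(0.5, 2.0, T)                          # known regressor (e.g. a link count)
y = beta_true * h + rng.normal(0, 0.2, T)             # observations

q, r = 0.05 ** 2, 0.2 ** 2                            # process and measurement noise variances
beta_hat, P = 0.0, 1.0                                # initial state estimate and variance
estimates = np.empty(T)
for k in range(T):
    P += q                                            # predict step (random-walk coefficient)
    K = P * h[k] / (h[k] * P * h[k] + r)              # Kalman gain
    beta_hat += K * (y[k] - h[k] * beta_hat)          # measurement update
    P *= (1.0 - K * h[k])
    estimates[k] = beta_hat

print("final estimate vs. truth:", round(estimates[-1], 3), round(beta_true[-1], 3))
```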
Survival curve estimation with dependent left truncated data using Cox's model.
Mackenzie, Todd
2012-10-19
The Kaplan-Meier and closely related Lynden-Bell estimators are used to provide nonparametric estimation of the distribution of a left-truncated random variable. These estimators assume that the left-truncation variable is independent of the time-to-event. This paper proposes a semiparametric method for estimating the marginal distribution of the time-to-event that does not require independence. It models the conditional distribution of the time-to-event given the truncation variable using Cox's model for left truncated data, and uses inverse probability weighting. We report the results of simulations and illustrate the method using a survival study.
Reduced rank models for travel time estimation of low order mode pulses.
Chandrayadula, Tarun K; Wage, Kathleen E; Worcester, Peter F; Dzieciuch, Matthew A; Mercer, James A; Andrew, Rex K; Howe, Bruce M
2013-10-01
Mode travel time estimation in the presence of internal waves (IWs) is a challenging problem. IWs perturb the sound speed, which results in travel time wander and mode scattering. A standard approach to travel time estimation is to pulse compress the broadband signal, pick the peak of the compressed time series, and average the peak time over multiple receptions to reduce variance. The peak-picking approach implicitly assumes there is a single strong arrival and does not perform well when there are multiple arrivals due to scattering. This article presents a statistical model for the scattered mode arrivals and uses the model to design improved travel time estimators. The model is based on an Empirical Orthogonal Function (EOF) analysis of the mode time series. Range-dependent simulations and data from the Long-range Ocean Acoustic Propagation Experiment (LOAPEX) indicate that the modes are represented by a small number of EOFs. The reduced-rank EOF model is used to construct a travel time estimator based on the Matched Subspace Detector (MSD). Analysis of simulation and experimental data show that the MSDs are more robust to IW scattering than peak picking. The simulation analysis also highlights how IWs affect the mode excitation by the source.
Baird, Rachel; Maxwell, Scott E
2016-06-01
Time-varying predictors in multilevel models are a useful tool for longitudinal research, whether they are the research variable of interest or they are controlling for variance to allow greater power for other variables. However, standard recommendations to fix the effect of time-varying predictors may make an assumption that is unlikely to hold in reality and may influence results. A simulation study illustrates that treating the time-varying predictor as fixed may allow analyses to converge, but the analyses have poor coverage of the true fixed effect when the time-varying predictor has a random effect in reality. A second simulation study shows that treating the time-varying predictor as random may have poor convergence, except when allowing negative variance estimates. Although negative variance estimates are uninterpretable, results of the simulation show that estimates of the fixed effect of the time-varying predictor are as accurate for these cases as for cases with positive variance estimates, and that treating the time-varying predictor as random and allowing negative variance estimates performs well whether the time-varying predictor is fixed or random in reality. Because of the difficulty of interpreting negative variance estimates, 2 procedures are suggested for selection between fixed-effect and random-effect models: comparing between fixed-effect and constrained random-effect models with a likelihood ratio test or fitting a fixed-effect model when an unconstrained random-effect model produces negative variance estimates. The performance of these 2 procedures is compared.
2011-01-01
Background Several regression models have been proposed for estimation of isometric joint torque using surface electromyography (SEMG) signals. Common issues related to torque estimation models are degradation of model accuracy with passage of time, electrode displacement, and alteration of limb posture. This work compares the performance of the most commonly used regression models under these circumstances, in order to assist researchers with identifying the most appropriate model for a specific biomedical application. Methods Eleven healthy volunteers participated in this study. A custom-built rig, equipped with a torque sensor, was used to measure isometric torque as each volunteer flexed and extended his wrist. SEMG signals from eight forearm muscles, in addition to wrist joint torque data, were gathered during the experiment. Additional data were gathered one hour and twenty-four hours following the completion of the first data gathering session, for the purpose of evaluating the effects of passage of time and electrode displacement on accuracy of models. Acquired SEMG signals were filtered, rectified, normalized and then fed to models for training. Results It was shown that mean adjusted coefficient of determination (Ra2) values decreased by 20% to 35% for the different models after one hour, while altering arm posture decreased mean Ra2 values by 64% to 74%. Conclusions Model estimation accuracy drops significantly with passage of time, electrode displacement, and alteration of limb posture. Therefore model retraining is crucial for preserving estimation accuracy. Data resampling can significantly reduce model training time without losing estimation accuracy. Among the models compared, the ordinary least squares linear regression model (OLS) was shown to have high isometric torque estimation accuracy combined with very short training times. PMID:21943179
Rastetter, Edward B; Williams, Mathew; Griffin, Kevin L; Kwiatkowski, Bonnie L; Tomasky, Gabrielle; Potosnak, Mark J; Stoy, Paul C; Shaver, Gaius R; Stieglitz, Marc; Hobbie, John E; Kling, George W
2010-07-01
Continuous time-series estimates of net ecosystem carbon exchange (NEE) are routinely made using eddy covariance techniques. Identifying and compensating for errors in the NEE time series can be automated using a signal processing filter like the ensemble Kalman filter (EnKF). The EnKF compares each measurement in the time series to a model prediction and updates the NEE estimate by weighting the measurement and model prediction relative to a specified measurement error estimate and an estimate of the model-prediction error that is continuously updated based on model predictions of earlier measurements in the time series. Because of the covariance among model variables, the EnKF can also update estimates of variables for which there is no direct measurement. The resulting estimates evolve through time, enabling the EnKF to be used to estimate dynamic variables like changes in leaf phenology. The evolving estimates can also serve as a means to test the embedded model and reconcile persistent deviations between observations and model predictions. We embedded a simple arctic NEE model into the EnKF and filtered data from an eddy covariance tower located in tussock tundra on the northern foothills of the Brooks Range in northern Alaska, USA. The model predicts NEE based only on leaf area, irradiance, and temperature and has been well corroborated for all the major vegetation types in the Low Arctic using chamber-based data. This is the first application of the model to eddy covariance data. We modified the EnKF by adding an adaptive noise estimator that provides a feedback between persistent model-data deviations and the noise added to the ensemble of Monte Carlo simulations in the EnKF. We also ran the EnKF with both a specified leaf-area trajectory and with the EnKF sequentially recalibrating leaf-area estimates to compensate for persistent model-data deviations. When used together, adaptive noise estimation and sequential recalibration substantially improved filter performance, but neither improved performance when used individually. The EnKF estimates of leaf area followed the expected springtime canopy phenology. However, there were also diel fluctuations in the leaf-area estimates; these are a clear indication of a model deficiency possibly related to vapor pressure effects on canopy conductance.
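A toy ensemble Kalman filter analysis step illustrating how an observed variable (NEE) can update an unobserved one (leaf area) through the ensemble covariance; the numbers, observation error, and two-variable state are assumptions for illustration, not the arctic NEE model.

```python
import numpy as np

rng = np.random.default_rng(3)
n_ens = 200
leaf_area = rng.normal(1.2, 0.2, n_ens)                  # unobserved state (m2 m-2)
nee = -0.9 * leaf_area + rng.normal(0.0, 0.1, n_ens)     # predicted NEE, correlated with leaf area
ens = np.column_stack([nee, leaf_area])                   # ensemble of [NEE, leaf_area]

obs, obs_err = -1.5, 0.2                                  # eddy covariance NEE and its error
H = np.array([1.0, 0.0])                                  # observation operator: picks out NEE

P = np.cov(ens, rowvar=False)                             # ensemble covariance (2 x 2)
K = P @ H / (H @ P @ H + obs_err ** 2)                    # Kalman gain (length-2 vector)
perturbed_obs = obs + rng.normal(0.0, obs_err, n_ens)     # perturbed-observation EnKF update
ens_post = ens + np.outer(perturbed_obs - ens @ H, K)

print("prior means  [NEE, leaf area]:", np.round(ens.mean(axis=0), 3))
print("posterior means              :", np.round(ens_post.mean(axis=0), 3))
# The unobserved leaf area is pulled along with NEE through their ensemble covariance.
```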
Time to burn: Modeling wildland arson as an autoregressive crime function
Jeffrey P. Prestemon; David T. Butry
2005-01-01
Six Poisson autoregressive models of order p [PAR(p)] of daily wildland arson ignition counts are estimated for five locations in Florida (1994-2001). In addition, a fixed effects time-series Poisson model of annual arson counts is estimated for all Florida counties (1995-2001). PAR(p) model estimates reveal highly significant arson ignition autocorrelation, lasting up...
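As a rough stand-in for the PAR(p) formulation, the sketch below fits a Poisson regression with lagged counts as covariates using statsmodels on synthetic data; it illustrates count autocorrelation but is not the exact PAR(p) likelihood or the Florida arson data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
T = 400
counts = [1, 0]
for t in range(2, T):                                    # simulate autocorrelated daily counts
    lam = np.exp(0.2 + 0.5 * np.log1p(counts[t - 1]) + 0.2 * np.log1p(counts[t - 2]))
    counts.append(rng.poisson(lam))
counts = np.asarray(counts)

y = counts[2:]
X = sm.add_constant(np.column_stack([np.log1p(counts[1:-1]),   # lag-1 term
                                     np.log1p(counts[:-2])]))  # lag-2 term
fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
print(fit.params)                                        # intercept and lag effects
```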
The Beta-Geometric Model Applied to Fecundability in a Sample of Married Women
NASA Astrophysics Data System (ADS)
Adekanmbi, D. B.; Bamiduro, T. A.
2006-10-01
The time required to achieve pregnancy among married couples, termed fecundability, has been proposed to follow a beta-geometric distribution. The accuracy of the method used in estimating the parameters of the model has implications for the goodness of fit of the model. In this study, the parameters of the model are estimated using the method of moments and the Newton-Raphson estimation procedure. The goodness of fit of the model was assessed using estimates from the two estimation methods, as well as the asymptotic relative efficiency of the estimates. A noticeable improvement in the fit of the model to the data on time to conception was observed when the parameters were estimated by the Newton-Raphson procedure, thereby yielding reasonable estimates of expected fecundability for the married female population in the country.
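A hedged sketch of maximum-likelihood fitting of a beta-geometric model for cycles to conception, using SciPy; the cycle counts are hypothetical, and the general-purpose optimiser stands in for the Newton-Raphson procedure described above.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import betaln

# Hypothetical data: number of cycles to conception for 125 couples
cycles = np.array([1]*45 + [2]*26 + [3]*18 + [4]*12 + [5]*9 + [6]*6 + [7]*5 + [8]*4)

def neg_log_lik(params):
    a, b = np.exp(params)                       # keep alpha, beta positive
    # Beta-geometric pmf: P(T = t) = B(a + 1, b + t - 1) / B(a, b)
    return -np.sum(betaln(a + 1, b + cycles - 1) - betaln(a, b))

res = minimize(neg_log_lik, x0=np.log([2.0, 3.0]), method="Nelder-Mead")
alpha_hat, beta_hat = np.exp(res.x)
print(f"alpha = {alpha_hat:.2f}, beta = {beta_hat:.2f}")
print(f"mean per-cycle fecundability = {alpha_hat / (alpha_hat + beta_hat):.3f}")
```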
Time series models on analysing mortality rates and acute childhood lymphoid leukaemia.
Kis, Maria
2005-01-01
In this paper we demonstrate the application of time series models in medical research. Hungarian mortality rates were analysed with autoregressive integrated moving average (ARIMA) models, and seasonal time series models were used to examine data on acute childhood lymphoid leukaemia. Mortality data may be analysed by time series methods such as ARIMA modelling. This method is demonstrated with two examples: analysis of the mortality rates of ischemic heart diseases and analysis of the mortality rates of cancers of the digestive system. Mathematical expressions are given for the results of the analysis. The relationships between time series of mortality rates were studied with ARIMA models. Confidence intervals for the autoregressive parameters were calculated by three methods: the standard normal approximation, the estimation based on White's theory, and the continuous-time estimation. Comparing the confidence intervals of the first-order autoregressive parameters, we conclude that the intervals from the continuous-time estimation model were much smaller than those from the other estimations. We also present a new approach to analysing the occurrence of acute childhood lymphoid leukaemia. We decompose the time series into components. The periodicity of acute childhood lymphoid leukaemia in Hungary was examined using a seasonal decomposition time series method. The cyclic trend of the dates of diagnosis revealed that a higher percentage of the peaks fell within the winter months than in the other seasons. This proves the seasonal occurrence of childhood leukaemia in Hungary.
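For illustration, ARIMA fitting and a seasonal decomposition with statsmodels on a synthetic monthly series; the series, model orders, and period are assumptions, not the Hungarian mortality or leukaemia data.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.seasonal import seasonal_decompose

rng = np.random.default_rng(5)
t = np.arange(120)
rate = 50 - 0.05 * t + 3 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 1, t.size)
series = pd.Series(rate, index=pd.date_range("2000-01-01", periods=t.size, freq="MS"))

arima_fit = ARIMA(series, order=(1, 1, 1)).fit()        # ARIMA(p=1, d=1, q=1)
print(arima_fit.params)

decomp = seasonal_decompose(series, model="additive", period=12)
print(decomp.seasonal.head(12))                          # estimated monthly (seasonal) effects
```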
Load estimator (LOADEST): a FORTRAN program for estimating constituent loads in streams and rivers
Runkel, Robert L.; Crawford, Charles G.; Cohn, Timothy A.
2004-01-01
LOAD ESTimator (LOADEST) is a FORTRAN program for estimating constituent loads in streams and rivers. Given a time series of streamflow, additional data variables, and constituent concentration, LOADEST assists the user in developing a regression model for the estimation of constituent load (calibration). Explanatory variables within the regression model include various functions of streamflow, decimal time, and additional user-specified data variables. The formulated regression model then is used to estimate loads over a user-specified time interval (estimation). Mean load estimates, standard errors, and 95 percent confidence intervals are developed on a monthly and(or) seasonal basis. The calibration and estimation procedures within LOADEST are based on three statistical estimation methods. The first two methods, Adjusted Maximum Likelihood Estimation (AMLE) and Maximum Likelihood Estimation (MLE), are appropriate when the calibration model errors (residuals) are normally distributed. Of the two, AMLE is the method of choice when the calibration data set (time series of streamflow, additional data variables, and concentration) contains censored data. The third method, Least Absolute Deviation (LAD), is an alternative to maximum likelihood estimation when the residuals are not normally distributed. LOADEST output includes diagnostic tests and warnings to assist the user in determining the appropriate estimation method and in interpreting the estimated loads. This report describes the development and application of LOADEST. Sections of the report describe estimation theory, input/output specifications, sample applications, and installation instructions.
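A rough sketch of the kind of regression LOADEST calibrates, using ordinary least squares on synthetic data; LOADEST's own AMLE/MLE/LAD estimators, censored-data handling, and retransformation bias corrections are not reproduced, and the seven-term form below is only one of its candidate models.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 300
dtime = np.sort(rng.uniform(0, 3, n))                   # decimal time (years)
lnQ = rng.normal(3.0, 0.8, n)                           # log streamflow
lnQ_c, t_c = lnQ - lnQ.mean(), dtime - dtime.mean()     # centered explanatory variables
lnL = (1.0 + 0.9 * lnQ_c + 0.05 * lnQ_c**2
       + 0.3 * np.sin(2 * np.pi * dtime) + 0.2 * np.cos(2 * np.pi * dtime)
       + 0.1 * t_c + rng.normal(0, 0.2, n))             # synthetic log constituent load

# ln(load) regressed on ln Q, ln Q^2, seasonal terms, and decimal time terms
X = np.column_stack([np.ones(n), lnQ_c, lnQ_c**2,
                     np.sin(2 * np.pi * dtime), np.cos(2 * np.pi * dtime), t_c, t_c**2])
beta, *_ = np.linalg.lstsq(X, lnL, rcond=None)
print("fitted coefficients:", np.round(beta, 3))
load_estimate = np.exp(X @ beta)                        # back-transform (bias correction omitted)
```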
NASA Astrophysics Data System (ADS)
Strano, Salvatore; Terzo, Mario
2018-05-01
The dynamics of railway vehicles is strongly influenced by the interaction between the wheel and the rail. This kind of contact is affected by several conditioning factors, such as vehicle speed, wear and adhesion level, and it is moreover nonlinear. As a consequence, modelling and observing this kind of phenomenon are complex tasks, but at the same time they constitute a fundamental step for the estimation of the adhesion level and for vehicle condition monitoring. This paper presents a novel technique for the real-time estimation of the wheel-rail contact forces, based on an estimator design model that accounts for the nonlinearities of the interaction by means of a fitting model able to reproduce the contact mechanics over a wide range of slip and to be easily integrated into a complete model-based estimator for railway vehicles.
NASA Technical Reports Server (NTRS)
Diamante, J. M.; Englar, T. S., Jr.; Jazwinski, A. H.
1977-01-01
Estimation theory, which originated in guidance and control research, is applied to the analysis of air quality measurements and atmospheric dispersion models to provide reliable area-wide air quality estimates. A method for low dimensional modeling (in terms of the estimation state vector) of the instantaneous and time-average pollutant distributions is discussed. In particular, the fluctuating plume model of Gifford (1959) is extended to provide an expression for the instantaneous concentration due to an elevated point source. Individual models are also developed for all parameters in the instantaneous and the time-average plume equations, including the stochastic properties of the instantaneous fluctuating plume.
Mortensen, Stig B; Klim, Søren; Dammann, Bernd; Kristensen, Niels R; Madsen, Henrik; Overgaard, Rune V
2007-10-01
The non-linear mixed-effects model based on stochastic differential equations (SDEs) provides an attractive residual error model that is able to handle serially correlated residuals, which typically arise from structural mis-specification of the true underlying model. The use of SDEs also opens up new tools for model development and readily allows unknown inputs and parameters to be tracked over time. An algorithm for maximum likelihood estimation of the model has been proposed earlier, and the present paper presents the first general implementation of this algorithm. The implementation is done in Matlab and also demonstrates the use of parallel computing for improved estimation times. The use of the implementation is illustrated by two applications that focus on the ability of the model to estimate unknown inputs, a capability facilitated by the extension to SDEs. The first application is a deconvolution-type estimation of the insulin secretion rate based on a linear two-compartment model for C-peptide measurements. In the second application the model is extended to also give an estimate of the time-varying liver extraction based on both C-peptide and insulin measurements.
Building occupancy simulation and data assimilation using a graph-based agent-oriented model
NASA Astrophysics Data System (ADS)
Rai, Sanish; Hu, Xiaolin
2018-07-01
Building occupancy simulation and estimation simulates the dynamics of occupants and estimates their real-time spatial distribution in a building. It requires a simulation model and an algorithm for data assimilation that assimilates real-time sensor data into the simulation model. Existing building occupancy simulation models include agent-based models and graph-based models. The agent-based models suffer high computation cost for simulating large numbers of occupants, and graph-based models overlook the heterogeneity and detailed behaviors of individuals. Recognizing the limitations of existing models, this paper presents a new graph-based agent-oriented model which can efficiently simulate large numbers of occupants in various kinds of building structures. To support real-time occupancy dynamics estimation, a data assimilation framework based on Sequential Monte Carlo Methods is also developed and applied to the graph-based agent-oriented model to assimilate real-time sensor data. Experimental results show the effectiveness of the developed model and the data assimilation framework. The major contributions of this work are to provide an efficient model for building occupancy simulation that can accommodate large numbers of occupants and an effective data assimilation framework that can provide real-time estimations of building occupancy from sensor data.
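A generic sequential Monte Carlo (particle filter) assimilation step for a single zone with a noisy occupancy sensor; the transition model, sensor error, and particle count are toy assumptions, not the graph-based agent-oriented model itself.

```python
import numpy as np

rng = np.random.default_rng(9)
n_particles = 500
particles = rng.poisson(20, n_particles).astype(float)   # simulated occupants in one zone

def simulate_step(occ):
    """Toy transition model: a few occupants arrive or leave at random."""
    return np.clip(occ + rng.integers(-3, 4, occ.shape), 0, None)

sensor_count, sensor_sd = 27, 3.0                         # noisy occupancy-sensor reading

particles = simulate_step(particles)                      # predict with the simulation model
weights = np.exp(-0.5 * ((sensor_count - particles) / sensor_sd) ** 2)
weights /= weights.sum()                                  # weight by observation likelihood
resampled = rng.choice(particles, size=n_particles, p=weights)  # resample

print("prior mean occupancy    :", particles.mean())
print("posterior mean occupancy:", resampled.mean())
```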
Lord, Dominique; Park, Peter Young-Jin
2008-07-01
Traditionally, transportation safety analysts have used the empirical Bayes (EB) method to improve the estimate of the long-term mean of individual sites; to correct for the regression-to-the-mean (RTM) bias in before-after studies; and to identify hotspots or high risk locations. The EB method combines two different sources of information: (1) the expected number of crashes estimated via crash prediction models, and (2) the observed number of crashes at individual sites. Crash prediction models have traditionally been estimated using a negative binomial (NB) (or Poisson-gamma) modeling framework due to the over-dispersion commonly found in crash data. A weight factor is used to assign the relative influence of each source of information on the EB estimate. This factor is estimated using the mean and variance functions of the NB model. With recent trends showing the dispersion parameter to be dependent upon the covariates of NB models, especially for traffic flow-only models, as well as varying as a function of different time periods, there is a need to determine how these models may affect EB estimates. The objectives of this study are to examine how commonly used functional forms as well as fixed and time-varying dispersion parameters affect the EB estimates. To accomplish the study objectives, several traffic flow-only crash prediction models were estimated using a sample of rural three-legged intersections located in California. Two types of aggregated and time-specific models were produced: (1) the traditional NB model with a fixed dispersion parameter and (2) the generalized NB model (GNB) with a time-varying dispersion parameter, which is also dependent upon the covariates of the model. Several statistical methods were used to compare the fitting performance of the various functional forms. The results of the study show that the selection of the functional form of NB models has an important effect on EB estimates in terms of estimated values, weight factors, and dispersion parameters. Time-specific models with a varying dispersion parameter provide better statistical performance in terms of goodness-of-fit (GOF) than aggregated multi-year models. Furthermore, the identification of hazardous sites using the EB method can be significantly affected when a GNB model with a time-varying dispersion parameter is used. Thus, erroneously selecting a functional form may lead to selecting the wrong sites for treatment. The study concludes that transportation safety analysts should not automatically use an existing functional form for modeling motor vehicle crashes without conducting rigorous analyses to estimate the most appropriate functional form linking crashes with traffic flow.
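A small worked example of the EB combination described above, with hypothetical numbers; it uses the common weight w = 1/(1 + mu/k) for a negative binomial model with mean mu and dispersion parameter k (Var = mu + mu^2/k), and simply varies k to show why the choice of dispersion model matters.

```python
def eb_estimate(mu_model, observed, k_dispersion):
    """Empirical Bayes estimate for a site: weighted blend of model prediction and observation."""
    w = 1.0 / (1.0 + mu_model / k_dispersion)   # weight assigned to the model prediction
    return w * mu_model + (1.0 - w) * observed

mu, obs = 2.4, 6                                # predicted vs. observed crashes per year (hypothetical)
for k in (0.8, 2.0, 10.0):                      # different dispersion parameter values
    print(f"k = {k:4.1f}  ->  EB estimate = {eb_estimate(mu, obs, k):.2f}")
# Smaller k (more over-dispersion) shifts weight toward the observed count, which is why a
# fixed versus covariate- or time-dependent dispersion parameter can change hotspot rankings.
```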
A Sandwich-Type Standard Error Estimator of SEM Models with Multivariate Time Series
ERIC Educational Resources Information Center
Zhang, Guangjian; Chow, Sy-Miin; Ong, Anthony D.
2011-01-01
Structural equation models are increasingly used as a modeling tool for multivariate time series data in the social and behavioral sciences. Standard error estimators of SEM models, originally developed for independent data, require modifications to accommodate the fact that time series data are inherently dependent. In this article, we extend a…
Traffic evacuation time under nonhomogeneous conditions.
Fazio, Joseph; Shetkar, Rohan; Mathew, Tom V
2017-06-01
During many manmade and natural crises such as terrorist threats, floods, hazardous chemical and gas leaks, emergency personnel need to estimate the time in which people can evacuate from the affected urban area. Knowing an estimated evacuation time for a given crisis, emergency personnel can plan and prepare accordingly with the understanding that the actual evacuation time will take longer. Given the urban area to be evacuated, street widths exiting the area's perimeter, the area's population density, average vehicle occupancy, transport mode share and crawl speed, an estimation of traffic evacuation time can be derived. Peak-hour traffic data collected at three, midblock, Mumbai sites of varying geometric features and traffic composition were used in calibrating a model that estimates peak-hour traffic flow rates. Model validation revealed a correlation coefficient of +0.98 between observed and predicted peak-hour flow rates. A methodology is developed that estimates traffic evacuation time using the model.
Robust time and frequency domain estimation methods in adaptive control
NASA Technical Reports Server (NTRS)
Lamaire, Richard Orville
1987-01-01
A robust identification method was developed for use in an adaptive control system. The type of estimator is called the robust estimator, since it is robust to the effects of both unmodeled dynamics and an unmeasurable disturbance. The development of the robust estimator was motivated by a need to provide guarantees in the identification part of an adaptive controller. To enable the design of a robust control system, a nominal model as well as a frequency-domain bounding function on the modeling uncertainty associated with this nominal model must be provided. Two estimation methods are presented for finding parameter estimates, and, hence, a nominal model. One of these methods is based on the well developed field of time-domain parameter estimation. In a second method of finding parameter estimates, a type of weighted least-squares fitting to a frequency-domain estimated model is used. The frequency-domain estimator is shown to perform better, in general, than the time-domain parameter estimator. In addition, a methodology for finding a frequency-domain bounding function on the disturbance is used to compute a frequency-domain bounding function on the additive modeling error due to the effects of the disturbance and the use of finite-length data. The performance of the robust estimator in both open-loop and closed-loop situations is examined through the use of simulations.
Schwab, Joshua; Gruber, Susan; Blaser, Nello; Schomaker, Michael; van der Laan, Mark
2015-01-01
This paper describes a targeted maximum likelihood estimator (TMLE) for the parameters of longitudinal static and dynamic marginal structural models. We consider a longitudinal data structure consisting of baseline covariates, time-dependent intervention nodes, intermediate time-dependent covariates, and a possibly time-dependent outcome. The intervention nodes at each time point can include a binary treatment as well as a right-censoring indicator. Given a class of dynamic or static interventions, a marginal structural model is used to model the mean of the intervention-specific counterfactual outcome as a function of the intervention, time point, and possibly a subset of baseline covariates. Because the true shape of this function is rarely known, the marginal structural model is used as a working model. The causal quantity of interest is defined as the projection of the true function onto this working model. Iterated conditional expectation double robust estimators for marginal structural model parameters were previously proposed by Robins (2000, 2002) and Bang and Robins (2005). Here we build on this work and present a pooled TMLE for the parameters of marginal structural working models. We compare this pooled estimator to a stratified TMLE (Schnitzer et al. 2014) that is based on estimating the intervention-specific mean separately for each intervention of interest. The performance of the pooled TMLE is compared to the performance of the stratified TMLE and the performance of inverse probability weighted (IPW) estimators using simulations. Concepts are illustrated using an example in which the aim is to estimate the causal effect of delayed switch following immunological failure of first line antiretroviral therapy among HIV-infected patients. Data from the International Epidemiological Databases to Evaluate AIDS, Southern Africa are analyzed to investigate this question using both TML and IPW estimators. Our results demonstrate practical advantages of the pooled TMLE over an IPW estimator for working marginal structural models for survival, as well as cases in which the pooled TMLE is superior to its stratified counterpart. PMID:25909047
Sun, Yanqing; Sun, Liuquan; Zhou, Jie
2013-07-01
This paper studies the generalized semiparametric regression model for longitudinal data where the covariate effects are constant for some covariates and time-varying for others. Different link functions can be used to allow more flexible modelling of longitudinal data. The nonparametric components of the model are estimated using a local linear estimating equation and the parametric components are estimated through a profile estimating function. The method automatically adjusts for heterogeneity of sampling times, allowing the sampling strategy to depend on the past sampling history as well as possibly time-dependent covariates without specifically modelling such dependence. A k-fold cross-validation bandwidth selection is proposed as a working tool for locating an appropriate bandwidth. A criterion for selecting the link function is proposed to provide a better fit to the data. Large sample properties of the proposed estimators are investigated. Large sample pointwise and simultaneous confidence intervals for the regression coefficients are constructed. Formal hypothesis testing procedures are proposed to check for the covariate effects and whether the effects are time-varying. A simulation study is conducted to examine the finite sample performances of the proposed estimation and hypothesis testing procedures. The methods are illustrated with a data example.
Factoring out nondecision time in choice reaction time data: Theory and implications.
Verdonck, Stijn; Tuerlinckx, Francis
2016-03-01
Choice reaction time (RT) experiments are an invaluable tool in psychology and neuroscience. A common assumption is that the total choice response time is the sum of a decision and a nondecision part (time spent on perceptual and motor processes). While the decision part is typically modeled very carefully (commonly with diffusion models), a simple and ad hoc distribution (mostly uniform) is assumed for the nondecision component. Nevertheless, it has been shown that the misspecification of the nondecision time can severely distort the decision model parameter estimates. In this article, we propose an alternative approach to the estimation of choice RT models that elegantly bypasses the specification of the nondecision time distribution by means of an unconventional convolution of data and decision model distributions (hence called the D*M approach). Once the decision model parameters have been estimated, it is possible to compute a nonparametric estimate of the nondecision time distribution. The technique is tested on simulated data, and is shown to systematically remove traditional estimation bias related to misspecified nondecision time, even for a relatively small number of observations. The shape of the actual underlying nondecision time distribution can also be recovered. Next, the D*M approach is applied to a selection of existing diffusion model application articles. For all of these studies, substantial quantitative differences with the original analyses are found. For one study, these differences radically alter its final conclusions, underlining the importance of our approach. Additionally, we find that strongly right skewed nondecision time distributions are not at all uncommon.
Parameters estimation using the first passage times method in a jump-diffusion model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khaldi, K.; Meddahi, S.
2016-06-02
This paper makes two main contributions: (1) it presents a new method, the first passage time (FPT) method generalized to all passage times (GPT method), for estimating the parameters of a stochastic jump-diffusion process; and (2) using a time series of gold share prices, it compares the empirical estimation and forecasting results obtained with the GPT method to those obtained with the method of moments and with the FPT method applied to the Merton jump-diffusion (MJD) model.
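A hedged sketch that simulates a Merton jump-diffusion path, the model whose parameters the paper estimates; the parameter values and the daily grid are illustrative, and the FPT/GPT estimators themselves are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(7)
mu, sigma = 0.05, 0.2            # drift and diffusion volatility (annualised)
lam, mj, sj = 3.0, -0.02, 0.05   # jump intensity, mean and std of log-jump sizes
dt, n, s0 = 1 / 252, 252, 1800.0 # daily steps over one year, initial price

log_s = np.log(s0) + np.zeros(n + 1)
for i in range(n):
    n_jumps = rng.poisson(lam * dt)
    jump = rng.normal(mj, sj, n_jumps).sum() if n_jumps else 0.0
    log_s[i + 1] = (log_s[i]
                    + (mu - 0.5 * sigma**2) * dt        # diffusion drift (log scale)
                    + sigma * np.sqrt(dt) * rng.normal()  # Brownian increment
                    + jump)                              # compound Poisson jumps
price = np.exp(log_s)
print("simulated final price:", round(price[-1], 2))
```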
Effects of linear trends on estimation of noise in GNSS position time-series
NASA Astrophysics Data System (ADS)
Dmitrieva, K.; Segall, P.; Bradley, A. M.
2017-01-01
A thorough understanding of time-dependent noise in Global Navigation Satellite System (GNSS) position time-series is necessary for computing uncertainties in any signals found in the data. However, estimation of time-correlated noise is a challenging task and is complicated by the difficulty in separating noise from signal, the features of greatest interest in the time-series. In this paper, we investigate how linear trends affect the estimation of noise in daily GNSS position time-series. We use synthetic time-series to study the relationship between linear trends and estimates of time-correlated noise for the six most commonly cited noise models. We find that the effects of added linear trends, or conversely de-trending, vary depending on the noise model. The commonly adopted model of random walk (RW), flicker noise (FN) and white noise (WN) is the most severely affected by de-trending, with estimates of low-amplitude RW most severely biased. FN plus WN is least affected by adding or removing trends. Non-integer power-law noise estimates are also less affected by de-trending, but are very sensitive to the addition of trend when the spectral index is less than one. We derive an analytical relationship between linear trends and the estimated RW variance for the special case of pure RW noise. Overall, we find that to ascertain the correct noise model for GNSS position time-series and to estimate the correct noise parameters, it is important to have independent constraints on the actual trends in the data.
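A small synthetic check in the spirit of the study: generate random walk plus white noise, remove a fitted linear trend, and compare a simple structure-function estimate of the random-walk variance rate before and after de-trending; the amplitudes, lags, and estimator are toy assumptions rather than the authors' maximum-likelihood analysis.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 3000
series = np.cumsum(rng.normal(0, 0.5, n)) + rng.normal(0, 2.0, n)  # RW (0.5) plus WN (2.0)

t = np.arange(n)
detrended = series - np.polyval(np.polyfit(t, series, 1), t)       # remove fitted linear trend

def rw_variance_rate(x, lags=(1, 10, 50, 200, 500)):
    # E[(x(t+tau) - x(t))^2] = sigma_rw^2 * tau + 2 * sigma_wn^2, so the slope versus tau
    # gives a crude estimate of the random-walk variance rate sigma_rw^2.
    sf = [np.mean((x[lag:] - x[:-lag]) ** 2) for lag in lags]
    slope, _ = np.polyfit(lags, sf, 1)
    return slope

print("true sigma_rw^2     :", 0.25)
print("raw series estimate :", round(rw_variance_rate(series), 3))
print("de-trended estimate :", round(rw_variance_rate(detrended), 3))
# De-trending absorbs part of the low-frequency random-walk signal, which is why
# random-walk amplitude estimates tend to be biased low in de-trended series.
```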
Pasma, Jantsje H.; Assländer, Lorenz; van Kordelaar, Joost; de Kam, Digna; Mergner, Thomas; Schouten, Alfred C.
2018-01-01
The Independent Channel (IC) model is a commonly used linear balance control model in the frequency domain to analyze human balance control using system identification and parameter estimation. The IC model is a rudimentary and noise-free description of balance behavior in the frequency domain, where a stable model representation is not guaranteed. In this study, we conducted firstly time-domain simulations with added noise, and secondly robot experiments by implementing the IC model in a real-world robot (PostuRob II) to test the validity and stability of the model in the time domain and for real world situations. Balance behavior of seven healthy participants was measured during upright stance by applying pseudorandom continuous support surface rotations. System identification and parameter estimation were used to describe the balance behavior with the IC model in the frequency domain. The IC model with the estimated parameters from human experiments was implemented in Simulink for computer simulations including noise in the time domain and robot experiments using the humanoid robot PostuRob II. Again, system identification and parameter estimation were used to describe the simulated balance behavior. Time series, Frequency Response Functions, and estimated parameters from human experiments, computer simulations, and robot experiments were compared with each other. The computer simulations showed similar balance behavior and estimated control parameters compared to the human experiments, in the time and frequency domain. Also, the IC model was able to control the humanoid robot by keeping it upright, but showed small differences compared to the human experiments in the time and frequency domain, especially at high frequencies. We conclude that the IC model, a descriptive model in the frequency domain, can imitate human balance behavior also in the time domain, both in computer simulations with added noise and real world situations with a humanoid robot. This provides further evidence that the IC model is a valid description of human balance control. PMID:29615886
Modeling environmental noise exceedances using non-homogeneous Poisson processes.
Guarnaccia, Claudio; Quartieri, Joseph; Barrios, Juan M; Rodrigues, Eliane R
2014-10-01
In this work a non-homogeneous Poisson model is considered to study noise exposure. The Poisson process, counting the number of times that a sound level surpasses a threshold, is used to estimate the probability that a population is exposed to high levels of noise a certain number of times in a given time interval. The rate function of the Poisson process is assumed to be of a Weibull type. The presented model is applied to community noise data from Messina, Sicily (Italy). Four sets of data are used to estimate the parameters involved in the model. After the estimation and tuning are made, a way of estimating the probability that an environmental noise threshold is exceeded a certain number of times in a given time interval is presented. This estimation can be very useful in the study of noise exposure of a population and also to predict, given the current behavior of the data, the probability of occurrence of high levels of noise in the near future. One of the most important features of the model is that it implicitly takes into account different noise sources, which need to be treated separately when using usual models.
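As a worked illustration of the counting idea (not the Messina fit itself), assume the Weibull-type cumulative rate Λ(t) = (t/σ)^β; the probability of exactly k exceedances in (t1, t2] is then Poisson with mean Λ(t2) − Λ(t1). The parameter values below are invented.

```python
from math import exp, factorial

def cumulative_rate(t, beta, sigma):
    """Mean number of exceedances in (0, t] for a Weibull-type NHPP rate."""
    return (t / sigma) ** beta

def prob_k_exceedances(t1, t2, k, beta, sigma):
    """P(N(t1, t2] = k) for a non-homogeneous Poisson process."""
    lam = cumulative_rate(t2, beta, sigma) - cumulative_rate(t1, beta, sigma)
    return exp(-lam) * lam ** k / factorial(k)

# Illustrative values only: probability of exceeding the noise threshold exactly
# 3 times between hours 10 and 20 of observation, and a sanity check that the
# probabilities over k sum to ~1.
beta, sigma = 1.3, 5.0
print(prob_k_exceedances(10.0, 20.0, 3, beta, sigma))
print(sum(prob_k_exceedances(10.0, 20.0, k, beta, sigma) for k in range(50)))
```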
Bounded Linear Stability Analysis - A Time Delay Margin Estimation Approach for Adaptive Control
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.; Ishihara, Abraham K.; Krishnakumar, Kalmanje Srinlvas; Bakhtiari-Nejad, Maryam
2009-01-01
This paper presents a method for estimating time delay margin for model-reference adaptive control of systems with almost linear structured uncertainty. The bounded linear stability analysis method seeks to represent the conventional model-reference adaptive law by a locally bounded linear approximation within a small time window using the comparison lemma. The locally bounded linear approximation of the combined adaptive system is cast in a form of an input-time-delay differential equation over a small time window. The time delay margin of this system represents a local stability measure and is computed analytically by a matrix measure method, which provides a simple analytical technique for estimating an upper bound of time delay margin. Based on simulation results for a scalar model-reference adaptive control system, both the bounded linear stability method and the matrix measure method are seen to provide a reasonably accurate and yet not too conservative time delay margin estimation.
Hauschild, L; Lovatto, P A; Pomar, J; Pomar, C
2012-07-01
The objective of this study was to develop and evaluate a mathematical model used to estimate the daily amino acid requirements of individual growing-finishing pigs. The model includes empirical and mechanistic model components. The empirical component estimates daily feed intake (DFI), BW, and daily gain (DG) based on individual pig information collected in real time. Based on DFI, BW, and DG estimates, the mechanistic component uses classic factorial equations to estimate the optimal concentration of amino acids that must be offered to each pig to meet its requirements. The model was evaluated with data from a study that investigated the effect of feeding pigs with a 3-phase or daily multiphase system. The DFI and BW values measured in this study were compared with those estimated by the empirical component of the model. The coherence of the values estimated by the mechanistic component was evaluated by analyzing whether it followed a normal pattern of requirements. Lastly, the proposed model was evaluated by comparing its estimates with those generated by the existing growth model (InraPorc). The precision of the proposed model and InraPorc in estimating DFI and BW was evaluated through the mean absolute error. The empirical component results indicated that the DFI and BW trajectories of individual pigs fed ad libitum could be predicted 1 d (DFI) or 7 d (BW) ahead with an average mean absolute error of 12.45 and 1.85%, respectively. The average mean absolute error obtained with InraPorc for the average individual of the population was 14.72% for DFI and 5.38% for BW. Major differences were observed when estimates from InraPorc were compared with individual observations. The proposed model, however, was effective in tracking the change in DFI and BW for each individual pig. The mechanistic model component estimated the optimal standardized ileal digestible Lys to NE ratio with reasonable between-animal (average CV = 7%) and over-time (average CV = 14%) variation. Thus, the amino acid requirements estimated by the model are animal- and time-dependent and follow, in real time, the individual DFI and BW growth patterns. The proposed model can follow the average feed intake and weight trajectory of each individual pig in real time with good accuracy. Based on these trajectories and using classical factorial equations, the model makes it possible to estimate dynamically the AA requirements of each animal, taking into account the intake and growth changes of the animal.
Chakraborty, Arindom
2016-12-01
A common objective in longitudinal studies is to characterize the relationship between a longitudinal response process and time-to-event data. The ordinal nature of the response and possible missing information on covariates add complications to the joint model. In such circumstances, some influential observations often present in the data may upset the analysis. In this paper, a joint model based on an ordinal partial mixed model and an accelerated failure time model is used to account for the repeated ordered response and time-to-event data, respectively. Here, we propose an influence function-based robust estimation method. A Monte Carlo expectation-maximization algorithm is used for parameter estimation. A detailed simulation study has been done to evaluate the performance of the proposed method. As an application, a dataset on muscular dystrophy among children is used. Robust estimates are then compared with classical maximum likelihood estimates. © The Author(s) 2014.
Ngwa, Julius S; Cabral, Howard J; Cheng, Debbie M; Pencina, Michael J; Gagnon, David R; LaValley, Michael P; Cupples, L Adrienne
2016-11-03
Typical survival studies follow individuals to an event and measure explanatory variables for that event, sometimes repeatedly over the course of follow up. The Cox regression model has been used widely in the analyses of time to diagnosis or death from disease. The associations between the survival outcome and time dependent measures may be biased unless they are modeled appropriately. In this paper we explore the Time Dependent Cox Regression Model (TDCM), which quantifies the effect of repeated measures of covariates in the analysis of time to event data. This model is commonly used in biomedical research but sometimes does not explicitly adjust for the times at which time dependent explanatory variables are measured. This approach can yield different estimates of association compared to a model that adjusts for these times. In order to address the question of how different these estimates are from a statistical perspective, we compare the TDCM to Pooled Logistic Regression (PLR) and Cross Sectional Pooling (CSP), considering models that adjust and do not adjust for time in PLR and CSP. In a series of simulations we found that time adjusted CSP provided identical results to the TDCM while the PLR showed larger parameter estimates compared to the time adjusted CSP and the TDCM in scenarios with high event rates. We also observed upwardly biased estimates in the unadjusted CSP and unadjusted PLR methods. The time adjusted PLR had a positive bias in the time dependent Age effect with reduced bias when the event rate is low. The PLR methods showed a negative bias in the Sex effect, a subject level covariate, when compared to the other methods. The Cox models yielded reliable estimates for the Sex effect in all scenarios considered. We conclude that survival analyses that explicitly account in the statistical model for the times at which time dependent covariates are measured provide more reliable estimates compared to unadjusted analyses. We present results from the Framingham Heart Study in which lipid measurements and myocardial infarction data events were collected over a period of 26 years.
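A minimal sketch of one of the compared approaches, pooled logistic regression on person-period data with explicit adjustment for measurement time, is given below using statsmodels. The data, covariates and effect sizes are invented; this is not the Framingham analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical person-period ("pooled") data: one row per subject per interval,
# with the time-dependent covariate (age) measured at the start of each interval.
rng = np.random.default_rng(2)
rows = []
for pid in range(500):
    sex = rng.integers(0, 2)
    for interval in range(10):
        age = 50 + interval + rng.normal(0, 0.5)
        p = 1 / (1 + np.exp(-(-5.0 + 0.04 * age + 0.3 * sex)))
        event = int(rng.random() < p)
        rows.append(dict(id=pid, interval=interval, age=age, sex=sex, event=event))
        if event:
            break
df = pd.DataFrame(rows)

# Time-adjusted pooled logistic regression: C(interval) adjusts for the time at
# which the time-dependent covariate was measured, mirroring the adjustment that
# brought PLR/CSP estimates into agreement with the time-dependent Cox model.
plr = smf.logit("event ~ age + sex + C(interval)", data=df).fit(disp=0)
print(plr.params)
```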
A time-dependent probabilistic seismic-hazard model for California
Cramer, C.H.; Petersen, M.D.; Cao, T.; Toppozada, Tousson R.; Reichle, M.
2000-01-01
For the purpose of sensitivity testing and illuminating nonconsensus components of time-dependent models, the California Department of Conservation, Division of Mines and Geology (CDMG) has assembled a time-dependent version of its statewide probabilistic seismic hazard (PSH) model for California. The model incorporates available consensus information from within the earth-science community, except for a few faults or fault segments where consensus information is not available. For these latter faults, published information has been incorporated into the model. As in the 1996 CDMG/U.S. Geological Survey (USGS) model, the time-dependent models incorporate three multisegment ruptures: a 1906, an 1857, and a southern San Andreas earthquake. Sensitivity tests are presented to show the effect on hazard and expected damage estimates of (1) intrinsic (aleatory) sigma, (2) multisegment (cascade) vs. independent segment (no cascade) ruptures, and (3) time-dependence vs. time-independence. Results indicate that (1) differences in hazard and expected damage estimates between time-dependent and independent models increase with decreasing intrinsic sigma, (2) differences in hazard and expected damage estimates between full cascading and not cascading are insensitive to intrinsic sigma, (3) differences in hazard increase with increasing return period (decreasing probability of occurrence), and (4) differences in moment-rate budgets increase with decreasing intrinsic sigma and with the degree of cascading, but are within the expected uncertainty in PSH time-dependent modeling and do not always significantly affect hazard and expected damage estimates.
Ye, Yu; Kerr, William C
2011-01-01
The aim was to explore various model specifications in estimating relationships between liver cirrhosis mortality rates and per capita alcohol consumption in aggregate-level cross-section time-series data. Using a series of liver cirrhosis mortality rates from 1950 to 2002 for 47 U.S. states, the effects of alcohol consumption were estimated from pooled autoregressive integrated moving average (ARIMA) models and 4 types of panel data models: generalized estimating equation, generalized least squares, fixed effect, and multilevel models. Various specifications of error term structure under each type of model were also examined. Different approaches controlling for time trends and for using concurrent or accumulated consumption as predictors were also evaluated. When cirrhosis mortality was predicted by total alcohol, highly consistent estimates were found between ARIMA and panel data analyses, with an average overall effect of 0.07 to 0.09. Less consistent estimates were derived using spirits, beer, and wine consumption as predictors. When multiple geographic time series are combined as panel data, none of the existing models could accommodate all sources of heterogeneity, such that any type of panel model must employ some form of generalization. Different types of panel data models should thus be estimated to examine the robustness of findings. We also suggest cautious interpretation when beverage-specific volumes are used as predictors. Copyright © 2010 by the Research Society on Alcoholism.
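The panel-data side of such an analysis can be sketched with a generalized estimating equation in statsmodels. The state-by-year data below are simulated, and the specification (Gaussian family, exchangeable working correlation, a crude linear year term for the time trend) is an assumption of this sketch, not the authors' exact model.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical state-by-year panel: cirrhosis mortality regressed on per capita
# alcohol consumption, with states as the repeated-measures groups.
rng = np.random.default_rng(3)
records = []
for state in range(47):
    base = rng.normal(10, 2)
    for year in range(1950, 2003):
        alcohol = rng.normal(2.5, 0.4)
        mortality = base + 0.8 * alcohol + rng.normal(0, 1)   # illustrative effect
        records.append(dict(state=state, year=year, alcohol=alcohol, mortality=mortality))
panel = pd.DataFrame(records)

# An autoregressive working correlation would be another option here.
model = sm.GEE.from_formula(
    "mortality ~ alcohol + year",
    groups="state",
    data=panel,
    cov_struct=sm.cov_struct.Exchangeable(),
    family=sm.families.Gaussian(),
)
result = model.fit()
print(result.params)
```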
Queuing Time Prediction Using WiFi Positioning Data in an Indoor Scenario.
Shu, Hua; Song, Ci; Pei, Tao; Xu, Lianming; Ou, Yang; Zhang, Libin; Li, Tao
2016-11-22
Queuing is common in urban public places. Automatically monitoring and predicting queuing time can not only help individuals to reduce their wait time and alleviate anxiety but also help managers to allocate resources more efficiently and enhance their ability to address emergencies. This paper proposes a novel method to estimate and predict queuing time in indoor environments based on WiFi positioning data. First, we use a series of parameters to identify the trajectories that can be used as representatives of queuing time. Next, we divide the day into equal time slices and estimate individuals' average queuing time during specific time slices. Finally, we build a nonstandard autoregressive (NAR) model trained using the previous day's WiFi estimation results and actual queuing time to predict the queuing time in the upcoming time slice. A case study comparing two other time series analysis models shows that the NAR model has better precision. Random topological errors caused by the drift phenomenon of WiFi positioning technology (locations determined by a WiFi positioning system may drift accidentally) and systematic topological errors caused by the positioning system are the main factors that affect the estimation precision. Therefore, we optimize the deployment strategy during the positioning system deployment phase and propose a drift ratio parameter pertaining to the trajectory screening phase to alleviate the impact of topological errors and improve estimates. The WiFi positioning data from an eight-day case study conducted at the T3-C entrance of Beijing Capital International Airport show that the mean absolute estimation error is 147 s, which is approximately 26.92% of the actual queuing time. For predictions using the NAR model, the proportion is approximately 27.49%. The theoretical predictions and the empirical case study indicate that the NAR model is an effective method to estimate and predict queuing time in indoor public areas.
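A toy stand-in for the prediction step (not the authors' NAR implementation) is to regress each time slice's queuing time on the estimates from the few preceding slices, training on the previous day, as sketched below with scikit-learn; the queuing profile and noise levels are invented.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
n_slices = 48                                  # half-hour slices per day

def daily_profile():
    """Hypothetical queuing-time profile (seconds) with a morning and an evening peak."""
    s = np.arange(n_slices)
    return 300 + 250 * np.exp(-(s - 16) ** 2 / 20) + 200 * np.exp(-(s - 36) ** 2 / 30)

prev_day = daily_profile() + rng.normal(0, 40, n_slices)   # yesterday's WiFi estimates
today = daily_profile() + rng.normal(0, 40, n_slices)      # today's actual queuing times

# Train on yesterday: features are the estimates in a small window of slices,
# target is the queuing time in the following slice.
window = 3
X = np.array([prev_day[i:i + window] for i in range(n_slices - window)])
y = prev_day[window:]
model = LinearRegression().fit(X, y)

# Predict today's upcoming slices from today's most recent estimates.
pred = model.predict(np.array([today[i:i + window] for i in range(n_slices - window)]))
mae = np.mean(np.abs(pred - today[window:]))
print("mean absolute prediction error (s):", round(mae, 1))
```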
Estimation of stochastic volatility by using Ornstein-Uhlenbeck type models
NASA Astrophysics Data System (ADS)
Mariani, Maria C.; Bhuiyan, Md Al Masum; Tweneboah, Osei K.
2018-02-01
In this study, we develop a technique for estimating the stochastic volatility (SV) of a financial time series by using Ornstein-Uhlenbeck type models. Using the daily closing prices from developed and emergent stock markets, we conclude that the incorporation of stochastic volatility into the time-varying parameter estimation significantly improves the forecasting performance via Maximum Likelihood Estimation. Furthermore, our estimation algorithm is feasible with large data sets and has good convergence properties.
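For the Ornstein-Uhlenbeck building block, maximum-likelihood estimation can use the exact Gaussian transition density, as in the minimal sketch below; the data are simulated rather than market prices, and the observation layer of a full stochastic-volatility model is omitted.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)

# Simulate an OU process dX = theta*(mu - X) dt + sigma dW with its exact discretization.
theta_true, mu_true, sigma_true, dt, n = 2.0, -1.0, 0.5, 1 / 252, 2000
x = np.empty(n)
x[0] = mu_true
for i in range(1, n):
    m = mu_true + (x[i - 1] - mu_true) * np.exp(-theta_true * dt)
    v = sigma_true**2 * (1 - np.exp(-2 * theta_true * dt)) / (2 * theta_true)
    x[i] = rng.normal(m, np.sqrt(v))

def neg_log_lik(params, x, dt):
    """Exact Gaussian transition density of the OU process."""
    theta, mu, sigma = params
    if theta <= 0 or sigma <= 0:
        return np.inf
    m = mu + (x[:-1] - mu) * np.exp(-theta * dt)
    v = sigma**2 * (1 - np.exp(-2 * theta * dt)) / (2 * theta)
    return 0.5 * np.sum(np.log(2 * np.pi * v) + (x[1:] - m) ** 2 / v)

fit = minimize(neg_log_lik, x0=[1.0, 0.0, 1.0], args=(x, dt), method="Nelder-Mead")
print("theta, mu, sigma estimates:", fit.x)
```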
Chan, Kwun Chuen Gary; Wang, Mei-Cheng
2017-01-01
Recurrent event processes with marker measurements are mostly studied with forward-time models starting from an initial event. Interestingly, the processes could exhibit important terminal behavior during a time period before occurrence of the failure event. A natural and direct way to study recurrent events prior to a failure event is to align the processes using the failure event as the time origin and to examine the terminal behavior by a backward time model. This paper studies regression models for backward recurrent marker processes by counting time backward from the failure event. A three-level semiparametric regression model is proposed for jointly modeling the time to a failure event, the backward recurrent event process, and the marker observed at the time of each backward recurrent event. The first level is a proportional hazards model for the failure time, the second level is a proportional rate model for the recurrent events occurring before the failure event, and the third level is a proportional mean model for the marker given the occurrence of a recurrent event backward in time. By jointly modeling the three components, estimating equations can be constructed for marked counting processes to estimate the target parameters in the three-level regression models. Large sample properties of the proposed estimators are studied and established. The proposed models and methods are illustrated by a community-based AIDS clinical trial to examine the terminal behavior of frequencies and severities of opportunistic infections among HIV infected individuals in the last six months of life.
Robust guaranteed-cost adaptive quantum phase estimation
NASA Astrophysics Data System (ADS)
Roy, Shibdas; Berry, Dominic W.; Petersen, Ian R.; Huntington, Elanor H.
2017-05-01
Quantum parameter estimation plays a key role in many fields like quantum computation, communication, and metrology. Optimal estimation allows one to achieve the most precise parameter estimates, but requires accurate knowledge of the model. Any inevitable uncertainty in the model parameters may heavily degrade the quality of the estimate. It is therefore desired to make the estimation process robust to such uncertainties. Robust estimation was previously studied for a varying phase, where the goal was to estimate the phase at some time in the past, using the measurement results from both before and after that time within a fixed time interval up to current time. Here, we consider a robust guaranteed-cost filter yielding robust estimates of a varying phase in real time, where the current phase is estimated using only past measurements. Our filter minimizes the largest (worst-case) variance in the allowable range of the uncertain model parameter(s) and this determines its guaranteed cost. It outperforms in the worst case the optimal Kalman filter designed for the model with no uncertainty, which corresponds to the center of the possible range of the uncertain parameter(s). Moreover, unlike the Kalman filter, our filter in the worst case always performs better than the best achievable variance for heterodyne measurements, which we consider as the tolerable threshold for our system. Furthermore, we consider effective quantum efficiency and effective noise power, and show that our filter provides the best results by these measures in the worst case.
Nonparametric autocovariance estimation from censored time series by Gaussian imputation.
Park, Jung Wook; Genton, Marc G; Ghosh, Sujit K
2009-02-01
One of the most frequently used methods to model the autocovariance function of a second-order stationary time series is to use the parametric framework of autoregressive and moving average models developed by Box and Jenkins. However, such parametric models, though very flexible, may not always be adequate to model autocovariance functions with sharp changes. Furthermore, if the data do not follow the parametric model and are censored at a certain value, the estimation results may not be reliable. We develop a Gaussian imputation method to estimate an autocovariance structure via nonparametric estimation of the autocovariance function in order to address both censoring and incorrect model specification. We demonstrate the effectiveness of the technique in terms of bias and efficiency with simulations under various rates of censoring and underlying models. We describe its application to a time series of silicon concentrations in the Arctic.
Eberle, Claudia; Ament, Christoph
2012-01-01
Background: With continuous glucose sensors (CGSs), it is possible to obtain a dynamical signal of the patient’s subcutaneous glucose concentration in real time. How could that information be exploited? We suggest a model-based diagnosis system with a twofold objective: real-time state estimation and long-term model parameter identification. Methods: To obtain a dynamical model, Bergman’s nonlinear minimal model (considering plasma glucose G, insulin I, and interstitial insulin X) is extended by two states describing first and second insulin response. Furthermore, compartments for oral glucose and subcutaneous insulin inputs as well as for subcutaneous glucose measurement are added. The observability of states and external inputs as well as the identifiability of model parameters are assessed using the empirical observability Gramian. Signals are estimated for different nondiabetic and diabetic scenarios by unscented Kalman filter. Results: (1) Observability of different state subsets is evaluated, e.g., from CGSs, {G, I} or {G, X} can be observed and the set {G, I, X} cannot. (2) Model parameters are included, e.g., it is possible to estimate the second-phase insulin response gain kG2 additionally. This can be used for model adaptation and as a diagnostic parameter that is almost zero for diabetes patients. (3) External inputs are considered, e.g., oral glucose is theoretically observable for nondiabetic patients, but estimation scenarios show that the time delay of 1 h limits application. Conclusions: A real-time estimation of states (such as plasma insulin I) and parameters (such as kG2) is possible, which allows an improved real-time state prediction and a personalized model. PMID:23063042
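A heavily reduced sketch of the state-estimation step is shown below using the filterpy package's unscented Kalman filter on a two-state caricature of the minimal model (glucose G and insulin action X, with plasma insulin treated as a known constant input). The parameter values, noise covariances and model reduction are assumptions of this sketch, not the paper's implementation.

```python
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

# Assumed (illustrative) minimal-model constants; not the paper's identified values.
p1, p2, p3, Gb, Ib = 0.03, 0.02, 1.0e-5, 90.0, 10.0
dt = 1.0                       # minutes between sensor samples
I_plasma = 40.0                # plasma insulin treated as a known, constant input here

def fx(x, dt):
    """Euler step of dG = -p1*(G-Gb) - X*G, dX = -p2*X + p3*(I-Ib)."""
    G, X = x
    dG = -p1 * (G - Gb) - X * G
    dX = -p2 * X + p3 * (I_plasma - Ib)
    return np.array([G + dt * dG, X + dt * dX])

def hx(x):
    return np.array([x[0]])    # the sensor observes glucose only

points = MerweScaledSigmaPoints(n=2, alpha=1e-3, beta=2.0, kappa=0.0)
ukf = UnscentedKalmanFilter(dim_x=2, dim_z=1, dt=dt, fx=fx, hx=hx, points=points)
ukf.x = np.array([150.0, 0.0])
ukf.P = np.diag([25.0, 1e-6])
ukf.Q = np.diag([0.1, 1e-10])
ukf.R = np.array([[4.0]])      # sensor measurement noise variance

rng = np.random.default_rng(6)
truth = np.array([180.0, 0.0])
for _ in range(120):
    truth = fx(truth, dt)
    z = truth[0] + rng.normal(0, 2.0)
    ukf.predict()
    ukf.update(np.array([z]))
print("estimated G, X:", ukf.x, " true G, X:", truth)
```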
Identifying demand for health resources using waiting times information.
Blundell, R; Windmeijer, F
2000-09-01
In this paper the differences in average waiting times are utilized to identify the determinants of demand for health services. The equilibrium waiting time framework is used, but the full equilibrium assumption is relaxed by selecting areas with low waiting times and by estimating a (semi-)parametric selection model. Determinants of supply are used as instruments for the endogeneity of waiting times. A model for the demand for acute services at the ward level in the UK is estimated. The model estimates, and their implications for health service allocations in the UK, are contrasted against more standard allocation models. The present results show that it is critically important to account for rationing by waiting times when identifying needs from care utilization data. Copyright 2000 John Wiley & Sons, Ltd.
Aguero-Valverde, Jonathan
2013-01-01
In recent years, complex statistical modeling approaches have been proposed to handle the unobserved heterogeneity and the excess of zeros frequently found in crash data, including random effects and zero inflated models. This research compares random effects, zero inflated, and zero inflated random effects models using a full Bayes hierarchical approach. The models are compared not just in terms of goodness-of-fit measures but also in terms of precision of posterior crash frequency estimates, since the precision of these estimates is vital for ranking of sites for engineering improvement. Fixed-over-time random effects models are also compared to independent-over-time random effects models. For the crash dataset being analyzed, it was found that once the random effects are included in the zero inflated models, the probability of being in the zero state is drastically reduced, and the zero inflated models degenerate to their non-zero-inflated counterparts. Also, by fixing the random effects over time, the fit of the models and the precision of the crash frequency estimates are significantly increased. It was found that the rankings of the fixed-over-time random effects models are very consistent among them. In addition, the results show that by fixing the random effects over time, the standard errors of the crash frequency estimates are significantly reduced for the majority of the segments on the top of the ranking. Copyright © 2012 Elsevier Ltd. All rights reserved.
Duchêne, Sebastián; Geoghegan, Jemma L; Holmes, Edward C; Ho, Simon Y W
2016-11-15
In rapidly evolving pathogens, including viruses and some bacteria, genetic change can accumulate over short time-frames. Accordingly, their sampling times can be used to calibrate molecular clocks, allowing estimation of evolutionary rates. Methods for estimating rates from time-structured data vary in how they treat phylogenetic uncertainty and rate variation among lineages. We compiled 81 virus data sets and estimated nucleotide substitution rates using root-to-tip regression, least-squares dating and Bayesian inference. Although estimates from these three methods were often congruent, this largely relied on the choice of clock model. In particular, relaxed-clock models tended to produce higher rate estimates than methods that assume constant rates. Discrepancies in rate estimates were also associated with high among-lineage rate variation, and phylogenetic and temporal clustering. These results provide insights into the factors that affect the reliability of rate estimates from time-structured sequence data, emphasizing the importance of clock-model testing. Contact: sduchene@unimelb.edu.au or garzonsebastian@hotmail.com. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
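The simplest of the three approaches, root-to-tip regression, amounts to a straight-line fit of root-to-tip distance against sampling date; the slope estimates the substitution rate. The numbers below are invented for illustration, not taken from the 81 compiled data sets.

```python
import numpy as np
from scipy.stats import linregress

# Hypothetical sampling dates (decimal years) and root-to-tip distances
# (substitutions/site) read off a rooted phylogeny.
dates = np.array([2000.1, 2001.5, 2003.2, 2005.7, 2008.3, 2010.9, 2013.4, 2015.8])
root_to_tip = np.array([0.010, 0.012, 0.014, 0.017, 0.021, 0.024, 0.027, 0.031])

fit = linregress(dates, root_to_tip)
print("substitution rate (subs/site/year):", fit.slope)
print("inferred time of most recent common ancestor:", -fit.intercept / fit.slope)
print("R^2:", fit.rvalue ** 2)
```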
Parameter estimation of kinetic models from metabolic profiles: two-phase dynamic decoupling method.
Jia, Gengjie; Stephanopoulos, Gregory N; Gunawan, Rudiyanto
2011-07-15
Time-series measurements of metabolite concentration have become increasingly more common, providing data for building kinetic models of metabolic networks using ordinary differential equations (ODEs). In practice, however, such time-course data are usually incomplete and noisy, and the estimation of kinetic parameters from these data is challenging. Practical limitations due to data and computational aspects, such as solving stiff ODEs and finding a globally optimal solution to the estimation problem, motivate the development of a new estimation procedure that can circumvent some of these constraints. In this work, an incremental and iterative parameter estimation method is proposed that combines and iterates between two estimation phases. One phase involves a decoupling method, in which a subset of model parameters that are associated with measured metabolites are estimated using the minimization of slope errors. Another phase follows, in which the ODE model is solved one equation at a time and the remaining model parameters are obtained by minimizing concentration errors. The performance of this two-phase method was tested on a generic branched metabolic pathway and the glycolytic pathway of Lactococcus lactis. The results showed that the method is efficient in getting accurate parameter estimates, even when some information is missing.
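Only the first, decoupling phase is easy to show compactly: replace the ODEs by measured slopes and estimate each parameter by linear least squares on its own equation. The two-reaction pathway below is a made-up stand-in for the pathways used in the paper, and the second (ODE-integration) phase is omitted.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy two-step pathway A -> B -> out with mass-action kinetics:
#   dA/dt = -k1*A,  dB/dt = k1*A - k2*B.   The "true" parameters are illustrative.
k1_true, k2_true = 0.8, 0.3
t = np.linspace(0, 10, 41)
sol = solve_ivp(lambda t, y: [-k1_true * y[0], k1_true * y[0] - k2_true * y[1]],
                (0, 10), [5.0, 0.0], t_eval=t)
rng = np.random.default_rng(7)
A = sol.y[0] + rng.normal(0, 0.05, t.size)
B = sol.y[1] + rng.normal(0, 0.05, t.size)

# Decoupling phase: replace the ODEs by measured slopes (finite differences)
# and estimate each parameter by linear least squares on its own equation.
dA = np.gradient(A, t)
dB = np.gradient(B, t)

k1_hat = np.linalg.lstsq(-A[:, None], dA, rcond=None)[0][0]               # dA = -k1*A
k2_hat = np.linalg.lstsq(-B[:, None], dB - k1_hat * A, rcond=None)[0][0]  # dB - k1*A = -k2*B
print("k1, k2 estimates:", k1_hat, k2_hat)
```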
Robust inference in discrete hazard models for randomized clinical trials.
Nguyen, Vinh Q; Gillen, Daniel L
2012-10-01
Time-to-event data in which failures are only assessed at discrete time points are common in many clinical trials. Examples include oncology studies where events are observed through periodic screenings such as radiographic scans. When the survival endpoint is acknowledged to be discrete, common methods for the analysis of observed failure times include the discrete hazard models (e.g., the discrete-time proportional hazards and the continuation ratio model) and the proportional odds model. In this manuscript, we consider estimation of a marginal treatment effect in discrete hazard models where the constant treatment effect assumption is violated. We demonstrate that the estimator resulting from these discrete hazard models is consistent for a parameter that depends on the underlying censoring distribution. An estimator that removes the dependence on the censoring mechanism is proposed and its asymptotic distribution is derived. Basing inference on the proposed estimator allows for statistical inference that is scientifically meaningful and reproducible. Simulation is used to assess the performance of the presented methodology in finite samples.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Osterman, Gordon; Keating, Kristina; Binley, Andrew; ...
2016-03-18
Here, we estimate parameters from the Katz and Thompson permeability model using laboratory complex electrical conductivity (CC) and nuclear magnetic resonance (NMR) data to build permeability models parameterized with geophysical measurements. We use the Katz and Thompson model based on the characteristic hydraulic length scale, determined from mercury injection capillary pressure estimates of pore throat size, and the intrinsic formation factor, determined from multisalinity conductivity measurements, for this purpose. Two new permeability models are tested, one based on CC data and another that incorporates CC and NMR data. From measurements made on forty-five sandstone cores collected from fifteen different formations, we evaluate how well the CC relaxation time and the NMR transverse relaxation times compare to the characteristic hydraulic length scale and how well the formation factor estimated from CC parameters compares to the intrinsic formation factor. We find: (1) the NMR transverse relaxation time models the characteristic hydraulic length scale more accurately than the CC relaxation time (R2 of 0.69 and 0.33 and normalized root mean square errors (NRMSE) of 0.16 and 0.21, respectively); (2) the CC estimated formation factor is well correlated with the intrinsic formation factor (NRMSE = 0.23). We demonstrate that permeability estimates from the joint NMR-CC model (NRMSE = 0.13) compare favorably to estimates from the Katz and Thompson model (NRMSE = 0.074). Lastly, this model advances the capability of the Katz and Thompson model by employing parameters measurable in the field, giving it the potential to estimate permeability from geophysical measurements more accurately than is currently possible.
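The underlying Katz and Thompson relation is commonly written k = l_c^2/(c F) with c usually quoted as about 226; the geophysical variants in the paper replace l_c and F with NMR and CC proxies. The sketch below simply evaluates the basic formula with invented values, not data from the forty-five cores.

```python
def katz_thompson_permeability(l_c_um, formation_factor, c=226.0):
    """Katz-Thompson permeability estimate.

    l_c_um           : characteristic hydraulic length scale (micrometres)
    formation_factor : intrinsic formation factor (dimensionless)
    c                : empirical constant, commonly quoted as ~226
    Returns permeability in square metres and in millidarcy.
    """
    l_c = l_c_um * 1e-6                       # convert to metres
    k_m2 = l_c ** 2 / (c * formation_factor)
    return k_m2, k_m2 / 9.869233e-16          # 1 mD = 9.869233e-16 m^2

# Illustrative numbers only (not from the cores in the study):
k_m2, k_md = katz_thompson_permeability(l_c_um=12.0, formation_factor=15.0)
print(f"k = {k_m2:.3e} m^2 = {k_md:.1f} mD")
```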
Keogh, Ruth H; Daniel, Rhian M; VanderWeele, Tyler J; Vansteelandt, Stijn
2018-05-01
Estimation of causal effects of time-varying exposures using longitudinal data is a common problem in epidemiology. When there are time-varying confounders, which may include past outcomes, affected by prior exposure, standard regression methods can lead to bias. Methods such as inverse probability weighted estimation of marginal structural models have been developed to address this problem. However, in this paper we show how standard regression methods can be used, even in the presence of time-dependent confounding, to estimate the total effect of an exposure on a subsequent outcome by controlling appropriately for prior exposures, outcomes, and time-varying covariates. We refer to the resulting estimation approach as sequential conditional mean models (SCMMs), which can be fitted using generalized estimating equations. We outline this approach and describe how including propensity score adjustment is advantageous. We compare the causal effects being estimated using SCMMs and marginal structural models, and we compare the two approaches using simulations. SCMMs enable more precise inferences, with greater robustness against model misspecification via propensity score adjustment, and easily accommodate continuous exposures and interactions. A new test for direct effects of past exposures on a subsequent outcome is described.
NASA Technical Reports Server (NTRS)
Curry, Timothy J.; Batterson, James G. (Technical Monitor)
2000-01-01
Low order equivalent system (LOES) models for the Tu-144 supersonic transport aircraft were identified from flight test data. The mathematical models were given as transfer functions with a time delay, as specified by the military standard MIL-STD-1797A, "Flying Qualities of Piloted Aircraft," and the handling qualities were predicted from the estimated transfer function coefficients. The coefficients and the time delay in the transfer functions were estimated using a nonlinear equation error formulation in the frequency domain. Flight test data from pitch, roll, and yaw frequency sweeps at various flight conditions were used for parameter estimation. Flight test results are presented in terms of the estimated parameter values, their standard errors, and output fits in the time domain. Data from doublet maneuvers at the same flight conditions were used to assess the predictive capabilities of the identified models. The identified transfer function models fit the measured data well and demonstrated good prediction capabilities. The Tu-144 was predicted to be between level 2 and 3 for all longitudinal maneuvers and level 1 for all lateral maneuvers. High estimates of the equivalent time delay in the transfer function model caused the poor longitudinal rating.
NASA Astrophysics Data System (ADS)
Koshimura, S.; Hino, R.; Ohta, Y.; Kobayashi, H.; Musa, A.; Murashima, Y.
2014-12-01
With use of modern computing power and advanced sensor networks, a project is underway to establish a new system of real-time tsunami inundation forecasting, damage estimation and mapping to enhance society's resilience in the aftermath of major tsunami disaster. The system consists of fusion of real-time crustal deformation monitoring/fault model estimation by Ohta et al. (2012), high-performance real-time tsunami propagation/inundation modeling with NEC's vector supercomputer SX-ACE, damage/loss estimation models (Koshimura et al., 2013), and geo-informatics. After a major (near field) earthquake is triggered, the first response of the system is to identify the tsunami source model by applying RAPiD Algorithm (Ohta et al., 2012) to observed RTK-GPS time series at GEONET sites in Japan. As performed in the data obtained during the 2011 Tohoku event, we assume less than 10 minutes as the acquisition time of the source model. Given the tsunami source, the system moves on to running tsunami propagation and inundation model which was optimized on the vector supercomputer SX-ACE to acquire the estimation of time series of tsunami at offshore/coastal tide gauges to determine tsunami travel and arrival time, extent of inundation zone, maximum flow depth distribution. The implemented tsunami numerical model is based on the non-linear shallow-water equations discretized by finite difference method. The merged bathymetry and topography grids are prepared with 10 m resolution to better estimate the tsunami inland penetration. Given the maximum flow depth distribution, the system performs GIS analysis to determine the numbers of exposed population and structures using census data, then estimates the numbers of potential death and damaged structures by applying tsunami fragility curve (Koshimura et al., 2013). Since the tsunami source model is determined, the model is supposed to complete the estimation within 10 minutes. The results are disseminated as mapping products to responders and stakeholders, e.g. national and regional municipalities, to be utilized for their emergency/response activities. In 2014, the system is verified through the case studies of 2011 Tohoku event and potential earthquake scenarios along Nankai Trough with regard to its capability and robustness.
Improved Regional Seismic Event Locations Using 3-D Velocity Models
1999-12-15
regional velocity model to estimate event hypocenters. Travel times for the regional phases are calculated using a sophisticated eikonal finite...can greatly improve estimates of event locations. Our algorithm calculates travel times using a finite difference approximation of the eikonal ...such as IASP91 or J-B. 3-D velocity models require more sophisticated travel time modeling routines; thus, we use a 3-D eikonal equation solver
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosenthal, William Steven; Tartakovsky, Alex; Huang, Zhenyu
2017-10-31
State and parameter estimation of power transmission networks is important for monitoring power grid operating conditions and analyzing transient stability. Wind power generation depends on fluctuating input power levels, which are correlated in time and contribute to uncertainty in turbine dynamical models. The ensemble Kalman filter (EnKF), a standard state estimation technique, uses a deterministic forecast and does not explicitly model time-correlated noise in parameters such as mechanical input power. However, this uncertainty affects the probability of fault-induced transient instability and increased prediction bias. Here a novel approach is to model input power noise with time-correlated stochastic fluctuations, and integrate them with the network dynamics during the forecast. While the EnKF has been used to calibrate constant parameters in turbine dynamical models, the calibration of a statistical model for a time-correlated parameter has not been investigated. In this study, twin experiments on a standard transmission network test case are used to validate our time-correlated noise model framework for state estimation of unsteady operating conditions and transient stability analysis, and a methodology is proposed for the inference of the mechanical input power time-correlation length parameter using time-series data from PMUs monitoring power dynamics at generator buses.
Anderson, Weston; Guikema, Seth; Zaitchik, Ben; Pan, William
2014-01-01
Obtaining accurate small area estimates of population is essential for policy and health planning but is often difficult in countries with limited data. In lieu of available population data, small area estimate models draw information from previous time periods or from similar areas. This study focuses on model-based methods for estimating population when no direct samples are available in the area of interest. To explore the efficacy of tree-based models for estimating population density, we compare six different model structures including Random Forest and Bayesian Additive Regression Trees. Results demonstrate that without information from prior time periods, non-parametric tree-based models produced more accurate predictions than did conventional regression methods. Improving estimates of population density in non-sampled areas is important for regions with incomplete census data and has implications for economic, health and development policies.
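A minimal version of the tree-based approach with scikit-learn's RandomForestRegressor is sketched below; the covariates, their scales and the log-density relationship are invented, and BART would require a separate package.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Hypothetical small-area covariates (built-up fraction, night-time lights, roads, elevation).
rng = np.random.default_rng(8)
n = 1000
X = pd.DataFrame({
    "built_fraction": rng.uniform(0, 1, n),
    "night_lights": rng.gamma(2.0, 5.0, n),
    "road_density": rng.uniform(0, 10, n),
    "elevation": rng.uniform(0, 3000, n),
})
log_density = 1.5 + 3.0 * X["built_fraction"] + 0.05 * X["night_lights"] + rng.normal(0, 0.3, n)

# Train on "sampled" areas, then predict density in areas with no direct samples.
X_tr, X_te, y_tr, y_te = train_test_split(X, log_density, test_size=0.3, random_state=0)
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out areas:", rf.score(X_te, y_te))
print("predicted density (people/km^2):", np.exp(rf.predict(X_te[:3])))
```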
Additive mixed effect model for recurrent gap time data.
Ding, Jieli; Sun, Liuquan
2017-04-01
Gap times between recurrent events are often of primary interest in medical and observational studies. The additive hazards model, focusing on risk differences rather than risk ratios, has been widely used in practice. However, the marginal additive hazards model does not take the dependence among gap times into account. In this paper, we propose an additive mixed effect model to analyze gap time data, and the proposed model includes a subject-specific random effect to account for the dependence among the gap times. Estimating equation approaches are developed for parameter estimation, and the asymptotic properties of the resulting estimators are established. In addition, some graphical and numerical procedures are presented for model checking. The finite sample behavior of the proposed methods is evaluated through simulation studies, and an application to a data set from a clinic study on chronic granulomatous disease is provided.
ARMA models for earthquake ground motions. Seismic safety margins research program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, M. K.; Kwiatkowski, J. W.; Nau, R. F.
1981-02-01
Four major California earthquake records were analyzed by use of a class of discrete linear time-domain processes commonly referred to as ARMA (Autoregressive/Moving-Average) models. It was possible to analyze these different earthquakes, identify the order of the appropriate ARMA model(s), estimate parameters, and test the residuals generated by these models. It was also possible to show the connections, similarities, and differences between the traditional continuous models (with parameter estimates based on spectral analyses) and the discrete models with parameters estimated by various maximum-likelihood techniques applied to digitized acceleration data in the time domain. The methodology proposed is suitable for simulating earthquake ground motions in the time domain, and appears to be easily adapted to serve as inputs for nonlinear discrete time models of structural motions. 60 references, 19 figures, 9 tables.
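Fitting and checking an ARMA model of this kind is a few lines with statsmodels; the series below is simulated rather than a digitized California accelerogram, and the chosen order (2, 0, 1) is illustrative.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Stand-in for a digitized strong-motion record (the real data would be the four
# California earthquake accelerograms analyzed in the report).
rng = np.random.default_rng(9)
n = 2000
e = rng.normal(0, 1, n)
acc = np.empty(n)
acc[0], acc[1] = e[0], e[1]
for i in range(2, n):                     # simulate an ARMA(2,1)-like series
    acc[i] = 1.2 * acc[i - 1] - 0.5 * acc[i - 2] + e[i] + 0.4 * e[i - 1]

# Fit an ARMA(p, q) model in the time domain (d = 0: no differencing) by maximum
# likelihood, then inspect the residuals as the report describes.
fit = ARIMA(acc, order=(2, 0, 1)).fit()
print(fit.params)
print("AIC:", fit.aic)
print("residual mean and std:", fit.resid.mean(), fit.resid.std())
```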
A Modularized Efficient Framework for Non-Markov Time Series Estimation
NASA Astrophysics Data System (ADS)
Schamberg, Gabriel; Ba, Demba; Coleman, Todd P.
2018-06-01
We present a compartmentalized approach to finding the maximum a-posteriori (MAP) estimate of a latent time series that obeys a dynamic stochastic model and is observed through noisy measurements. We specifically consider modern signal processing problems with non-Markov signal dynamics (e.g. group sparsity) and/or non-Gaussian measurement models (e.g. point process observation models used in neuroscience). Through the use of auxiliary variables in the MAP estimation problem, we show that a consensus formulation of the alternating direction method of multipliers (ADMM) enables iteratively computing separate estimates based on the likelihood and prior and subsequently "averaging" them in an appropriate sense using a Kalman smoother. As such, this can be applied to a broad class of problem settings and only requires modular adjustments when interchanging various aspects of the statistical model. Under broad log-concavity assumptions, we show that the separate estimation problems are convex optimization problems and that the iterative algorithm converges to the MAP estimate. As such, this framework can capture non-Markov latent time series models and non-Gaussian measurement models. We provide example applications involving (i) group-sparsity priors, within the context of electrophysiologic spectrotemporal estimation, and (ii) non-Gaussian measurement models, within the context of dynamic analyses of learning with neural spiking and behavioral observations.
Novel mathematical model to estimate ball impact force in soccer.
Iga, Takahito; Nunome, Hiroyuki; Sano, Shinya; Sato, Nahoko; Ikegami, Yasuo
2017-11-22
Assessing ball impact force during soccer kicking is important from both performance and chronic injury prevention perspectives. We aimed to verify the appropriateness of previous models used to estimate ball impact force and to propose an improved model to better capture the time history of ball impact force. A soccer ball was fired directly onto a force platform (10 kHz) at five realistic kicking ball velocities and ball behaviour was captured by a high-speed camera (5,000 Hz). The time history of ball impact force was estimated using three existing models and two new models. A new mathematical model that took into account a rapid change in ball surface area and heterogeneous ball deformation showed a distinctive advantage in estimating the peak forces and their occurrence times and in reproducing the time history of ball impact forces more precisely, thereby reinforcing the possible mechanics of 'footballer's ankle'. Ball impact time also shortened systematically as ball velocity increased, in contrast to practical understanding for producing faster ball velocity; the aspect of ball contact time must therefore be considered carefully from a practical point of view.
The estimation of time-varying risks in asset pricing modelling using B-Spline method
NASA Astrophysics Data System (ADS)
Nurjannah; Solimun; Rinaldo, Adji
2017-12-01
Asset pricing modelling has been extensively studied in the past few decades to explore the risk-return relationship. The asset pricing literature has typically assumed a static risk-return relationship. However, several studies found a few anomalies in asset pricing modelling which captured the presence of risk instability. The dynamic model is proposed to offer a better model. The main problem highlighted in the dynamic model literature is that the set of conditioning information is unobservable and therefore some assumptions have to be made. Hence, the estimation requires additional assumptions about the dynamics of risk. To overcome this problem, nonparametric estimators can also be used as an alternative for estimating risk. The flexibility of the nonparametric setting avoids the problem of misspecification derived from selecting a functional form. This paper investigates the estimation of a time-varying asset pricing model using the B-Spline, as one nonparametric approach. The advantages of the spline method are its computational speed and simplicity, as well as the clarity of controlling curvature directly. Three popular asset pricing models are investigated, namely the CAPM (Capital Asset Pricing Model), the Fama-French 3-factor model and the Carhart 4-factor model. The results suggest that the estimated risks are time-varying and not stable over time, which confirms the risk instability anomaly. The result is more pronounced in Carhart’s 4-factor model.
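A time-varying CAPM beta can be estimated with a B-spline basis because the model is linear in the spline coefficients once the basis is multiplied by the market factor. The sketch below uses invented returns and an arbitrary basis dimension; it is not the paper's estimator or data.

```python
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(10)
n = 500
t = np.linspace(0.0, 1.0, n, endpoint=False)      # rescaled time index
market_excess = rng.normal(0.0, 0.02, n)          # market excess returns (made up)
beta_true = 0.8 + 0.6 * np.sin(2 * np.pi * t)     # a time-varying "risk" path
asset_excess = beta_true * market_excess + rng.normal(0, 0.005, n)

# Cubic B-spline basis for beta(t): evaluate each basis function on the time grid.
k, n_basis = 3, 8
knots = np.concatenate(([0.0] * k, np.linspace(0.0, 1.0, n_basis - k + 1), [1.0] * k))
basis = np.column_stack([BSpline(knots, np.eye(n_basis)[j], k)(t) for j in range(n_basis)])

# CAPM with time-varying beta: r_asset(t) = (sum_j c_j B_j(t)) * r_market(t) + e(t),
# which is an ordinary least-squares problem in the spline coefficients c_j.
design = basis * market_excess[:, None]
coef, *_ = np.linalg.lstsq(design, asset_excess, rcond=None)
beta_hat = basis @ coef
print("max abs error in recovered beta path:", np.max(np.abs(beta_hat - beta_true)))
```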
Phadnis, Milind A; Wetmore, James B; Shireman, Theresa I; Ellerbeck, Edward F; Mahnken, Jonathan D
2016-01-01
Time-dependent covariates can be modeled within the Cox regression framework and can allow both proportional and nonproportional hazards for the risk factor of research interest. However, in many areas of health services research, interest centers on being able to estimate residual longevity after the occurrence of a particular event such as stroke. The survival trajectory of patients experiencing a stroke can be potentially influenced by stroke type (hemorrhagic or ischemic), time of the stroke (relative to time zero), time since the stroke occurred, or a combination of these factors. In such situations, researchers are more interested in estimating lifetime lost due to stroke rather than merely estimating the relative hazard due to stroke. To achieve this, we propose an ensemble approach using the generalized gamma distribution by means of a semi-Markov type model with an additive hazards extension. Our modeling framework allows stroke as a time-dependent covariate to affect all three parameters (location, scale, and shape) of the generalized gamma distribution. Using the concept of relative times, we answer the research question by estimating residual life lost due to ischemic and hemorrhagic stroke in the chronic dialysis population. PMID:26403934
NASA Astrophysics Data System (ADS)
Hajicek, Joshua J.; Selesnick, Ivan W.; Henin, Simon; Talmadge, Carrick L.; Long, Glenis R.
2018-05-01
Stimulus frequency otoacoustic emissions (SFOAEs) were evoked and estimated using swept-frequency tones with and without the use of swept suppressor tones. SFOAEs were estimated using a least-squares fitting procedure. The estimated SFOAEs for the two paradigms (with- and without-suppression) were similar in amplitude and phase. The fitting procedure minimizes the square error between a parametric model of total ear-canal pressure (with unknown amplitudes and phases) and ear-canal pressure acquired during each paradigm. Modifying the parametric model to allow SFOAE amplitude and phase to vary over time revealed additional amplitude and phase fine structure in the without-suppressor, but not the with-suppressor paradigm. The use of a time-varying parametric model to estimate SFOAEs without-suppression may provide additional information about cochlear mechanics not available when using a with-suppressor paradigm.
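As a rough sketch of a least-squares fit of unknown amplitude and phase at a known stimulus frequency (the core of the fitting procedure described, though not the authors' actual ear-canal pressure model), consider the following; the frequency, sample rate, and signal are made up.

```python
import numpy as np

# Toy least-squares fit of an unknown amplitude and phase at a known frequency
fs = 44100.0
f0 = 1000.0
t = np.arange(0, 0.05, 1 / fs)
rng = np.random.default_rng(1)
p = 0.7 * np.cos(2 * np.pi * f0 * t + 0.4) + 0.05 * rng.normal(size=t.size)

# Model p(t) ~ a*cos(2*pi*f0*t) + b*sin(2*pi*f0*t); amplitude and phase follow from (a, b)
X = np.column_stack([np.cos(2 * np.pi * f0 * t), np.sin(2 * np.pi * f0 * t)])
(a, b), *_ = np.linalg.lstsq(X, p, rcond=None)
amp, phase = np.hypot(a, b), np.arctan2(-b, a)
print(f"amplitude = {amp:.3f}, phase = {phase:.3f} rad")
```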
Ouidir, Marion; Giorgis-Allemand, Lise; Lyon-Caen, Sarah; Morelli, Xavier; Cracowski, Claire; Pontet, Sabrina; Pin, Isabelle; Lepeule, Johanna; Siroux, Valérie; Slama, Rémy
2016-01-01
Studies of air pollution effects during pregnancy generally only consider exposure in the outdoor air at the home address. We aimed to compare exposure models differing in their ability to account for the spatial resolution of pollutants, space-time activity and indoor air pollution levels. We recruited 40 pregnant women in the Grenoble urban area, France, who carried a Global Positioning System (GPS) receiver for up to 3 weeks; in a subgroup, indoor measurements of fine particles (PM2.5) were conducted at home (n=9) and personal exposure to nitrogen dioxide (NO2) was assessed using passive air samplers (n=10). Outdoor concentrations of NO2 and PM2.5 were estimated from a dispersion model with a fine spatial resolution. Women spent on average 16 h per day at home. Considering only outdoor levels, for estimates at the home address, the correlation between the estimate using the nearest background air monitoring station and the estimate from the dispersion model was high (r=0.93) for PM2.5 and moderate (r=0.67) for NO2. The model incorporating clean GPS data was less correlated with the estimate relying on raw GPS data (r=0.77) than the model ignoring space-time activity (r=0.93). PM2.5 outdoor levels ranged from uncorrelated to moderately correlated with estimates from the model incorporating indoor measurements and space-time activity (r=−0.10 to 0.47), while NO2 personal levels were not correlated with outdoor levels (r=−0.42 to 0.03). In this urban area, accounting for space-time activity had little influence on exposure estimates; in a subgroup of subjects (n=9), incorporating indoor pollution levels seemed to strongly modify them. PMID:26300245
NASA Astrophysics Data System (ADS)
Bruno, Delia Evelina; Barca, Emanuele; Goncalves, Rodrigo Mikosz; de Araujo Queiroz, Heithor Alexandre; Berardi, Luigi; Passarella, Giuseppe
2018-01-01
In this paper, the Evolutionary Polynomial Regression data modelling strategy has been applied to study small-scale, short-term coastal morphodynamics, given its capability for treating a wide database of known information non-linearly. Simple linear and multilinear regression models were also applied to achieve a balance between the computational load and the reliability of estimations of the three models. In fact, even though it is easy to imagine that the more complex the model, the more the prediction improves, sometimes a "slight" worsening of estimations can be accepted in exchange for the time saved in data organization and computational load. The models' outcomes were validated through a detailed statistical error analysis, which revealed a slightly better estimation by the polynomial model than by the multilinear model, as expected. On the other hand, even though the data organization was identical for the two models, the multilinear one required a simpler simulation setting and a faster run time. Finally, the most reliable evolutionary polynomial regression model was used to make some conjectures about how uncertainty increases with the extrapolation time of the estimation. The overlapping rate between the confidence band of the mean of the known coast position and the prediction band of the estimated position can be a good index of the weakness in producing reliable estimations when the extrapolation time increases too much. The proposed models and tests were applied to a coastal sector located near Torre Colimena in the Apulia region, south Italy.
Maximum likelihood estimation for semiparametric transformation models with interval-censored data
Mao, Lu; Lin, D. Y.
2016-01-01
Interval censoring arises frequently in clinical, epidemiological, financial and sociological studies, where the event or failure of interest is known only to occur within an interval induced by periodic monitoring. We formulate the effects of potentially time-dependent covariates on the interval-censored failure time through a broad class of semiparametric transformation models that encompasses proportional hazards and proportional odds models. We consider nonparametric maximum likelihood estimation for this class of models with an arbitrary number of monitoring times for each subject. We devise an EM-type algorithm that converges stably, even in the presence of time-dependent covariates, and show that the estimators for the regression parameters are consistent, asymptotically normal, and asymptotically efficient with an easily estimated covariance matrix. Finally, we demonstrate the performance of our procedures through simulation studies and application to an HIV/AIDS study conducted in Thailand. PMID:27279656
NASA Technical Reports Server (NTRS)
Kukreja, Sunil L.; Wallin, Ragnar; Boyle, Richard D.
2013-01-01
The vestibulo-ocular reflex (VOR) is a well-known dual mode bifurcating system that consists of slow and fast modes associated with nystagmus and saccade, respectively. Estimation of the continuous-time parameters of nystagmus and saccade models is known to be sensitive to estimation methodology, noise and sampling rate. The stable and accurate estimation of these parameters is critical for accurate disease modelling, clinical diagnosis, robotic control strategies, mission planning for space exploration and pilot safety, etc. This paper presents a novel indirect system identification method for the estimation of continuous-time parameters of the VOR employing standardised least-squares with dual sampling rates in a sparse structure. This approach permits the stable and simultaneous estimation of both nystagmus and saccade data. The efficacy of this approach is demonstrated via simulation of a continuous-time model of the VOR with typical parameters found in clinical studies and in the presence of output additive noise.
Temporal rainfall estimation using input data reduction and model inversion
NASA Astrophysics Data System (ADS)
Wright, A. J.; Vrugt, J. A.; Walker, J. P.; Pauwels, V. R. N.
2016-12-01
Floods are devastating natural hazards. To provide accurate, precise and timely flood forecasts, there is a need to understand the uncertainties associated with temporal rainfall and model parameters. The estimation of temporal rainfall and model parameter distributions from streamflow observations in complex dynamic catchments adds skill to current areal rainfall estimation methods, allows for the uncertainty of rainfall input to be considered when estimating model parameters and provides the ability to estimate rainfall from poorly gauged catchments. Current methods to estimate temporal rainfall distributions from streamflow are unable to adequately explain and invert complex non-linear hydrologic systems. This study uses the Discrete Wavelet Transform (DWT) to reduce rainfall dimensionality for the catchment of Warwick, Queensland, Australia. The reduction of rainfall to DWT coefficients allows the input rainfall time series to be simultaneously estimated along with model parameters. The estimation process is conducted using multi-chain Markov chain Monte Carlo simulation with the DREAM(ZS) algorithm. The use of a likelihood function that considers both rainfall and streamflow error allows for model parameter and temporal rainfall distributions to be estimated. Estimating the wavelet approximation coefficients of lower-order decomposition structures yielded the most realistic temporal rainfall distributions. These rainfall estimates were all able to simulate streamflow that was superior to the results of a traditional calibration approach. It is shown that the choice of wavelet has a considerable impact on the robustness of the inversion. The results demonstrate that streamflow data contains sufficient information to estimate temporal rainfall and model parameter distributions. The range and variance of rainfall time series able to simulate streamflow superior to that of a traditional calibration approach is a demonstration of equifinality. The use of a likelihood function that considers both rainfall and streamflow error combined with the use of the DWT as a model data reduction technique allows the joint inference of hydrologic model parameters along with rainfall.
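A minimal sketch of the rainfall dimensionality reduction step, assuming the PyWavelets library and a synthetic rainfall series; the wavelet, decomposition level, and the choice to retain only approximation coefficients are illustrative, not the study's exact configuration.

```python
import numpy as np
import pywt  # PyWavelets, assumed available

# Synthetic daily rainfall series (mm); the catchment data themselves are not reproduced here
rng = np.random.default_rng(2)
rain = rng.gamma(shape=0.3, scale=8.0, size=512) * (rng.random(512) < 0.35)

# Multilevel DWT: keep only the low-order approximation coefficients as the
# reduced parameter set, zero the detail coefficients, and reconstruct
wavelet, level = "db4", 4
coeffs = pywt.wavedec(rain, wavelet, level=level)
reduced = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
rain_reduced = pywt.waverec(reduced, wavelet)[: rain.size]

print("original dimension:", rain.size)
print("retained approximation coefficients:", coeffs[0].size)
```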
Road Network State Estimation Using Random Forest Ensemble Learning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hou, Yi; Edara, Praveen; Chang, Yohan
Network-scale travel time prediction not only enables traffic management centers (TMC) to proactively implement traffic management strategies, but also allows travelers to make informed decisions about route choices between various origins and destinations. In this paper, a random forest estimator was proposed to predict travel time in a network. The estimator was trained using two years of historical travel time data for a case study network in St. Louis, Missouri. Both temporal and spatial effects were considered in the modeling process. The random forest models predicted travel times accurately during both congested and uncongested traffic conditions. The computational times for the models were low, making them useful for real-time traffic management and traveler information applications.
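A hedged sketch of a random-forest travel-time estimator with temporal and spatial (neighbouring-link) features; the feature set, data, and hyperparameters are invented for illustration and are not the St. Louis case-study configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Synthetic example: predict a link travel time (s) from hour of day, day of week,
# and travel times observed on two neighbouring links
rng = np.random.default_rng(3)
n = 5000
hour = rng.integers(0, 24, n)
dow = rng.integers(0, 7, n)
neighbor_tt = rng.normal(120, 20, size=(n, 2))
peak = ((hour >= 7) & (hour <= 9)) | ((hour >= 16) & (hour <= 18))
y = 100 + 60 * peak + 0.3 * neighbor_tt.sum(axis=1) + rng.normal(0, 10, n)

X = np.column_stack([hour, dow, neighbor_tt])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out data:", round(rf.score(X_te, y_te), 3))
```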
Kalman filter to update forest cover estimates
Raymond L. Czaplewski
1990-01-01
The Kalman filter is a statistical estimator that combines a time-series of independent estimates, using a prediction model that describes expected changes in the state of a system over time. An expensive inventory can be updated using model predictions that are adjusted with more recent, but less expensive and precise, monitoring data. The concepts of the Kalman...
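A minimal scalar Kalman-filter update in this spirit, with hypothetical numbers: an old, precise inventory is projected forward by a change model and then adjusted with a cheaper, noisier monitoring estimate.

```python
# All parameter values below are illustrative assumptions, not real inventory figures.
def kalman_update(x_prev, var_prev, growth_rate, process_var, z_obs, obs_var):
    # Predict: apply the change model and inflate the variance
    x_pred = x_prev * growth_rate
    var_pred = var_prev * growth_rate**2 + process_var
    # Update: weight prediction and observation by their precisions
    gain = var_pred / (var_pred + obs_var)
    x_new = x_pred + gain * (z_obs - x_pred)
    var_new = (1.0 - gain) * var_pred
    return x_new, var_new

# Forest cover (kha) from an old, precise inventory, updated with recent monitoring data
x, var = 950.0, 5.0**2
x, var = kalman_update(x, var, growth_rate=0.99, process_var=4.0,
                       z_obs=930.0, obs_var=20.0**2)
print(f"updated estimate: {x:.1f} kha (sd {var**0.5:.1f})")
```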
An Illustration of Generalised Arma (garma) Time Series Modeling of Forest Area in Malaysia
NASA Astrophysics Data System (ADS)
Pillai, Thulasyammal Ramiah; Shitan, Mahendran
Forestry is the art and science of managing forests, tree plantations, and related natural resources. The main goal of forestry is to create and implement systems that allow forests to continue a sustainable provision of environmental supplies and services. Forest area is land under natural or planted stands of trees, whether productive or not. The forest area of Malaysia has been observed over the years and can be modeled using time series models. A new class of GARMA models has been introduced in the time series literature to reveal some hidden features in time series data. For these models to be used widely in practice, we illustrate the fitting of the GARMA (1, 1; 1, δ) model to the annual forest area data of Malaysia observed from 1987 to 2008. The estimation of the model was done using the Hannan-Rissanen algorithm, Whittle's estimation and maximum likelihood estimation.
NASA Astrophysics Data System (ADS)
Ebrahimian, Hamed; Astroza, Rodrigo; Conte, Joel P.; de Callafon, Raymond A.
2017-02-01
This paper presents a framework for structural health monitoring (SHM) and damage identification of civil structures. This framework integrates advanced mechanics-based nonlinear finite element (FE) modeling and analysis techniques with a batch Bayesian estimation approach to estimate time-invariant model parameters used in the FE model of the structure of interest. The framework uses input excitation and dynamic response of the structure and updates a nonlinear FE model of the structure to minimize the discrepancies between predicted and measured response time histories. The updated FE model can then be interrogated to detect, localize, classify, and quantify the state of damage and predict the remaining useful life of the structure. As opposed to recursive estimation methods, in the batch Bayesian estimation approach, the entire time history of the input excitation and output response of the structure are used as a batch of data to estimate the FE model parameters through a number of iterations. In the case of non-informative prior, the batch Bayesian method leads to an extended maximum likelihood (ML) estimation method to estimate jointly time-invariant model parameters and the measurement noise amplitude. The extended ML estimation problem is solved efficiently using a gradient-based interior-point optimization algorithm. Gradient-based optimization algorithms require the FE response sensitivities with respect to the model parameters to be identified. The FE response sensitivities are computed accurately and efficiently using the direct differentiation method (DDM). The estimation uncertainties are evaluated based on the Cramer-Rao lower bound (CRLB) theorem by computing the exact Fisher Information matrix using the FE response sensitivities with respect to the model parameters. The accuracy of the proposed uncertainty quantification approach is verified using a sampling approach based on the unscented transformation. Two validation studies, based on realistic structural FE models of a bridge pier and a moment resisting steel frame, are performed to validate the performance and accuracy of the presented nonlinear FE model updating approach and demonstrate its application to SHM. These validation studies show the excellent performance of the proposed framework for SHM and damage identification even in the presence of high measurement noise and/or way-out initial estimates of the model parameters. Furthermore, the detrimental effects of the input measurement noise on the performance of the proposed framework are illustrated and quantified through one of the validation studies.
An improved approximate-Bayesian model-choice method for estimating shared evolutionary history
2014-01-01
Background To understand biological diversification, it is important to account for large-scale processes that affect the evolutionary history of groups of co-distributed populations of organisms. Such events predict temporally clustered divergence times, a pattern that can be estimated using genetic data from co-distributed species. I introduce a new approximate-Bayesian method for comparative phylogeographical model-choice that estimates the temporal distribution of divergences across taxa from multi-locus DNA sequence data. The model is an extension of that implemented in msBayes. Results By reparameterizing the model, introducing more flexible priors on demographic and divergence-time parameters, and implementing a non-parametric Dirichlet-process prior over divergence models, I improved the robustness, accuracy, and power of the method for estimating shared evolutionary history across taxa. Conclusions The results demonstrate that the improved performance of the new method is due to (1) more appropriate priors on divergence-time and demographic parameters that avoid prohibitively small marginal likelihoods for models with more divergence events, and (2) the Dirichlet-process providing a flexible prior on divergence histories that does not strongly disfavor models with intermediate numbers of divergence events. The new method yields more robust estimates of posterior uncertainty, and thus greatly reduces the tendency to incorrectly estimate models of shared evolutionary history with strong support. PMID:24992937
NASA Astrophysics Data System (ADS)
Hendricks Franssen, H. J.; Post, H.; Vrugt, J. A.; Fox, A. M.; Baatz, R.; Kumbhar, P.; Vereecken, H.
2015-12-01
Estimation of net ecosystem exchange (NEE) by land surface models is strongly affected by uncertain ecosystem parameters and initial conditions. A possible approach is the estimation of plant functional type (PFT) specific parameters for sites with measurement data like NEE and application of the parameters at other sites with the same PFT and no measurements. This upscaling strategy was evaluated in this work for sites in Germany and France. Ecosystem parameters and initial conditions were estimated with NEE time series of one year in length, or a time series of only one season. The DREAM(zs) algorithm was used for the estimation of parameters and initial conditions. DREAM(zs) is not limited to Gaussian distributions and can condition on large time series of measurement data simultaneously. DREAM(zs) was used in combination with the Community Land Model (CLM) v4.5. Parameter estimates were evaluated by model predictions at the same site for an independent verification period. In addition, the parameter estimates were evaluated at other, independent sites situated >500 km away with the same PFT. The main conclusions are: i) simulations with estimated parameters better reproduced the NEE measurement data in the verification periods, including the annual NEE-sum (23% improvement), annual NEE-cycle and average diurnal NEE course (error reduction by a factor of 1.6); ii) estimated parameters based on seasonal NEE-data outperformed estimated parameters based on yearly data; iii) in addition, those seasonal parameters were often also significantly different from their yearly equivalents; iv) estimated parameters were significantly different if initial conditions were estimated together with the parameters. We conclude that estimated PFT-specific parameters improve land surface model predictions significantly at independent verification sites and for independent verification periods, so that their potential for upscaling is demonstrated. However, simulation results also indicate that the estimated parameters possibly mask other model errors. This would imply that their application at climatic time scales would not improve model predictions. A central question is whether the integration of many different data streams (e.g., biomass, remotely sensed LAI) could solve the problems indicated here.
Knights, Jonathan; Rohatagi, Shashank
2015-12-01
Although there is a body of literature focused on minimizing the effect of dosing inaccuracies on pharmacokinetic (PK) parameter estimation, most of the work centers on missing doses. No attempt has been made to specifically characterize the effect of error in reported dosing times. Additionally, existing work has largely dealt with cases in which the compound of interest is dosed at an interval no less than its terminal half-life. This work provides a case study investigating how error in patient reported dosing times might affect the accuracy of structural model parameter estimation under sparse sampling conditions when the dosing interval is less than the terminal half-life of the compound, and the underlying kinetics are monoexponential. Additional effects due to noncompliance with dosing events are not explored and it is assumed that the structural model and reasonable initial estimates of the model parameters are known. Under the conditions of our simulations, with structural model CV % ranging from ~20 to 60 %, parameter estimation inaccuracy derived from error in reported dosing times was largely controlled around 10 % on average. Given that no observed dosing was included in the design and sparse sampling was utilized, we believe these error results represent a practical ceiling given the variability and parameter estimates for the one-compartment model. The findings suggest additional investigations may be of interest and are noteworthy given the inability of current PK software platforms to accommodate error in dosing times.
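A toy version of such a simulation, assuming monoexponential kinetics, sparse sampling, and normally distributed error in the reported dosing times; the dose, parameter values, and error magnitudes are hypothetical, and the fit uses SciPy's curve_fit rather than a population PK platform.

```python
import numpy as np
from scipy.optimize import curve_fit

# Monoexponential kinetics with repeated bolus doses; dosing interval shorter than t1/2
dose, k_true, v_true = 100.0, 0.05, 50.0      # dose (mg), elimination rate (1/h), volume (L)
tau = 12.0                                     # dosing interval (h); t1/2 is about 13.9 h
reported_doses = np.arange(0.0, 7 * tau, tau)
rng = np.random.default_rng(4)
actual_doses = reported_doses + rng.normal(0.0, 1.0, reported_doses.size)  # 1 h SD timing error

def conc(t, k, v, dose_times):
    t = np.atleast_1d(t).astype(float)
    c = np.zeros_like(t)
    for td in dose_times:
        on = t >= td
        c[on] += (dose / v) * np.exp(-k * (t[on] - td))
    return c

t_obs = np.array([2.0, 30.0, 55.0, 80.0])                  # sparse sampling times (h)
y_obs = conc(t_obs, k_true, v_true, actual_doses) * np.exp(rng.normal(0, 0.1, t_obs.size))

# Fit assuming the *reported* dose times, starting from reasonable initial estimates
popt, _ = curve_fit(lambda t, k, v: conc(t, k, v, reported_doses),
                    t_obs, y_obs, p0=[0.04, 40.0])
print("estimated k, V:", popt)
```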
Mimicking counterfactual outcomes to estimate causal effects.
Lok, Judith J
2017-04-01
In observational studies, treatment may be adapted to covariates at several times without a fixed protocol, in continuous time. Treatment influences covariates, which influence treatment, which influences covariates, and so on. Then even time-dependent Cox-models cannot be used to estimate the net treatment effect. Structural nested models have been applied in this setting. Structural nested models are based on counterfactuals: the outcome a person would have had had treatment been withheld after a certain time. Previous work on continuous-time structural nested models assumes that counterfactuals depend deterministically on observed data, while conjecturing that this assumption can be relaxed. This article proves that one can mimic counterfactuals by constructing random variables, solutions to a differential equation, that have the same distribution as the counterfactuals, even given past observed data. These "mimicking" variables can be used to estimate the parameters of structural nested models without assuming the treatment effect to be deterministic.
Fast maximum likelihood estimation using continuous-time neural point process models.
Lepage, Kyle Q; MacDonald, Christopher J
2015-06-01
A recent report estimates that the number of simultaneously recorded neurons is growing exponentially. A commonly employed statistical paradigm using discrete-time point process models of neural activity involves the computation of a maximum-likelihood estimate. The time to compute this estimate, per neuron, is proportional to the number of bins in a finely spaced discretization of time. By using continuous-time models of neural activity and optimally efficient Gaussian quadrature, memory requirements and computation times are dramatically decreased in the commonly encountered situation where the number of parameters p is much less than the number of time-bins n. In this regime, with q equal to the quadrature order, memory requirements are decreased from O(np) to O(qp), and the number of floating-point operations is decreased from O(np^2) to O(qp^2). Accuracy of the proposed estimates is assessed based upon physiological considerations, error bounds, and mathematical results describing the relation between numerical integration error and the numerical error affecting both parameter estimates and the observed Fisher information. A check is provided which is used to adapt the order of numerical integration. The procedure is verified in simulation and for hippocampal recordings. It is found that in 95% of hippocampal recordings a q of 60 yields numerical error negligible with respect to parameter estimate standard error. Statistical inference using the proposed methodology is a fast and convenient alternative to statistical inference performed using a discrete-time point process model of neural activity. It enables the employment of the statistical methodology available with discrete-time inference, but is faster, uses less memory, and avoids any error due to discretization.
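A compact sketch of the continuous-time likelihood evaluation, with the integral of the intensity handled by Gauss-Legendre quadrature instead of fine time bins; the log-linear intensity model and the spike train below are placeholders, not the hippocampal data.

```python
import numpy as np

def log_intensity(t, theta):
    b0, b1 = theta
    return b0 + b1 * np.sin(2 * np.pi * t / 10.0)   # assumed log-linear intensity model

def log_likelihood(theta, spike_times, T, q=60):
    # Point-process log-likelihood: sum of log-intensities at spikes minus the
    # integral of the intensity over [0, T], evaluated by q-point quadrature
    x, w = np.polynomial.legendre.leggauss(q)
    t_q = 0.5 * T * (x + 1.0)
    w_q = 0.5 * T * w
    integral = np.sum(w_q * np.exp(log_intensity(t_q, theta)))
    return np.sum(log_intensity(spike_times, theta)) - integral

rng = np.random.default_rng(5)
spikes = np.sort(rng.uniform(0.0, 100.0, 300))       # placeholder spike train
print(log_likelihood((1.0, 0.5), spikes, T=100.0))
```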
Estimators for longitudinal latent exposure models: examining measurement model assumptions.
Sánchez, Brisa N; Kim, Sehee; Sammel, Mary D
2017-06-15
Latent variable (LV) models are increasingly being used in environmental epidemiology as a way to summarize multiple environmental exposures and thus minimize statistical concerns that arise in multiple regression. LV models may be especially useful when multivariate exposures are collected repeatedly over time. LV models can accommodate a variety of assumptions but, at the same time, present the user with many choices for model specification particularly in the case of exposure data collected repeatedly over time. For instance, the user could assume conditional independence of observed exposure biomarkers given the latent exposure and, in the case of longitudinal latent exposure variables, time invariance of the measurement model. Choosing which assumptions to relax is not always straightforward. We were motivated by a study of prenatal lead exposure and mental development, where assumptions of the measurement model for the time-changing longitudinal exposure have appreciable impact on (maximum-likelihood) inferences about the health effects of lead exposure. Although we were not particularly interested in characterizing the change of the LV itself, imposing a longitudinal LV structure on the repeated multivariate exposure measures could result in high efficiency gains for the exposure-disease association. We examine the biases of maximum likelihood estimators when assumptions about the measurement model for the longitudinal latent exposure variable are violated. We adapt existing instrumental variable estimators to the case of longitudinal exposures and propose them as an alternative to estimate the health effects of a time-changing latent predictor. We show that instrumental variable estimators remain unbiased for a wide range of data generating models and have advantages in terms of mean squared error. Copyright © 2017 John Wiley & Sons, Ltd.
Bias and robustness of uncertainty components estimates in transient climate projections
NASA Astrophysics Data System (ADS)
Hingray, Benoit; Blanchet, Juliette; Jean-Philippe, Vidal
2016-04-01
A critical issue in climate change studies is the estimation of uncertainties in projections along with the contribution of the different uncertainty sources, including scenario uncertainty, the different components of model uncertainty and internal variability. Quantifying the different uncertainty sources actually faces different problems. For instance, and for the sake of simplicity, an estimate of model uncertainty is classically obtained from the empirical variance of the climate responses obtained for the different modeling chains. These estimates are however biased. Another difficulty arises from the limited number of members that are classically available for most modeling chains. In this case, the climate response of one given chain and the effect of its internal variability may be difficult if not impossible to separate. The estimates of the scenario uncertainty, model uncertainty and internal variability components are thus likely not to be robust. We explore the importance of the bias and the robustness of the estimates for two classical Analysis of Variance (ANOVA) approaches: a Single Time approach (STANOVA), based on the only data available for the considered projection lead time, and a time series based approach (QEANOVA), which assumes quasi-ergodicity of climate outputs over the whole available climate simulation period (Hingray and Saïd, 2014). We explore both issues for a simple but classical configuration where uncertainties in projections are composed of two single sources: model uncertainty and internal climate variability. The bias in model uncertainty estimates is explored from theoretical expressions of unbiased estimators developed for both ANOVA approaches. The robustness of uncertainty estimates is explored for multiple synthetic ensembles of time series projections generated with Monte Carlo simulations. For both ANOVA approaches, when the empirical variance of climate responses is used to estimate model uncertainty, the bias is always positive. It can be especially high with STANOVA. In the most critical configurations, when the number of members available for each modeling chain is small (< 3) and when internal variability explains most of the total uncertainty variance (75% or more), the overestimation is higher than 100% of the true model uncertainty variance. The bias can be considerably reduced with a time series ANOVA approach, owing to the multiple time steps accounted for. The longer the transient time period used for the analysis, the larger the reduction. When a quasi-ergodic ANOVA approach is applied to decadal data for the whole 1980-2100 period, the bias is reduced by a factor of 2.5 to 20 depending on the projection lead time. In all cases, the bias is likely not negligible for a large number of climate impact studies, resulting in a likely large overestimation of the contribution of model uncertainty to total variance. For both approaches, the robustness of all uncertainty estimates is higher when more members are available, when internal variability is smaller and/or when the response-to-uncertainty ratio is higher. QEANOVA estimates are much more robust than STANOVA ones: QEANOVA simulated confidence intervals are roughly 3 to 5 times smaller than STANOVA ones. Except for STANOVA when fewer than 3 members are available, the robustness is rather high for total uncertainty estimates and moderate for internal variability estimates.
For model uncertainty or response-to-uncertainty ratio estimates, the robustness is conversely low for QEANOVA to very low for STANOVA. In the most critical configurations (small number of members, large internal variability), large over- or underestimation of uncertainty components is thus very likely. To propose relevant uncertainty analyses and avoid misleading interpretations, estimates of uncertainty components should therefore be bias corrected and ideally come with estimates of their robustness. This work is part of the COMPLEX Project (European Collaborative Project FP7-ENV-2012 number: 308601; http://www.complex.ac.uk/). Hingray, B., Saïd, M., 2014. Partitioning internal variability and model uncertainty components in a multimodel multireplicate ensemble of climate projections. J. Climate. doi:10.1175/JCLI-D-13-00629.1. Hingray, B., Blanchet, J. (in revision) Unbiased estimators for uncertainty components in transient climate projections. J. Climate. Hingray, B., Blanchet, J., Vidal, J.P. (in revision) Robustness of uncertainty components estimates in climate projections. J. Climate.
Maximum Likelihood Time-of-Arrival Estimation of Optical Pulses via Photon-Counting Photodetectors
NASA Technical Reports Server (NTRS)
Erkmen, Baris I.; Moision, Bruce E.
2010-01-01
Many optical imaging, ranging, and communications systems rely on the estimation of the arrival time of an optical pulse. Recently, such systems have been increasingly employing photon-counting photodetector technology, which changes the statistics of the observed photocurrent. This requires time-of-arrival estimators to be developed and their performances characterized. The statistics of the output of an ideal photodetector, which are well modeled as a Poisson point process, were considered. An analytical model was developed for the mean-square error of the maximum likelihood (ML) estimator, demonstrating two phenomena that cause deviations from the minimum achievable error at low signal power. An approximation was derived to the threshold at which the ML estimator essentially fails to provide better than a random guess of the pulse arrival time. Comparing the analytic model performance predictions to those obtained via simulations, it was verified that the model accurately predicts the ML performance over all regimes considered. There is little prior art that attempts to understand the fundamental limitations to time-of-arrival estimation from Poisson statistics. This work establishes both a simple mathematical description of the error behavior, and the associated physical processes that yield this behavior. Previous work on mean-square error characterization for ML estimators has predominantly focused on additive Gaussian noise. This work demonstrates that the discrete nature of the Poisson noise process leads to a distinctly different error behavior.
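An illustrative simulation of ML arrival-time estimation from photon-count statistics: photons are drawn from an inhomogeneous Poisson process (a Gaussian pulse on a constant background) and the arrival time is recovered by a grid search over the log-likelihood. All rates, widths, and times are invented.

```python
import numpy as np

rng = np.random.default_rng(6)
T, tau_true = 1e-6, 0.42e-6            # observation window and true arrival time (s)
bg_rate, sig_photons, width = 2e6, 50.0, 20e-9

def intensity(t, tau):
    pulse = sig_photons / (width * np.sqrt(2 * np.pi)) * np.exp(-0.5 * ((t - tau) / width) ** 2)
    return bg_rate + pulse

# Draw photon arrivals by thinning a homogeneous Poisson process
lam_max = intensity(tau_true, tau_true)
cand = np.sort(rng.uniform(0, T, rng.poisson(lam_max * T)))
keep = rng.uniform(0, lam_max, cand.size) < intensity(cand, tau_true)
arrivals = cand[keep]

# The integral of the intensity over [0, T] is (nearly) independent of tau, so the
# ML estimate reduces to maximizing the sum of log-intensities at the arrival times
taus = np.linspace(0.1e-6, 0.9e-6, 801)
loglik = [np.sum(np.log(intensity(arrivals, tau))) for tau in taus]
print("ML arrival-time estimate (ns):", taus[int(np.argmax(loglik))] * 1e9)
```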
Park, Jihoon; Yoon, Chungsik; Lee, Kiyoung
2018-05-30
In the field of exposure science, various exposure assessment models have been developed to complement experimental measurements; however, few studies have been published on their validity. This study compares the estimated inhaled aerosol doses of several inhalation exposure models to experimental measurements of aerosols released from consumer spray products, and then compares deposited doses within different parts of the human respiratory tract according to deposition models. Exposure models, including the European Center for Ecotoxicology of Chemicals Targeted Risk Assessment (ECETOC TRA), the Consumer Exposure Model (CEM), SprayExpo, ConsExpo Web and ConsExpo Nano, were used to estimate the inhaled dose under various exposure scenarios, and modeled and experimental estimates were compared. The deposited dose in different respiratory regions was estimated using the International Commission on Radiological Protection model and multiple-path particle dosimetry models under the assumption of polydispersed particles. The modeled estimates of the inhaled doses were accurate in the short term, i.e., within 10 min of the initial spraying, with differences from experimental estimates ranging from 0 to 73% among the models. However, the estimates for long-term exposure, i.e., exposure times of several hours, deviated significantly from the experimental estimates in the absence of ventilation. The differences between the experimental and modeled estimates of particle number and surface area were constant over time under ventilated conditions. ConsExpo Nano, as a nano-scale model, showed stable estimates of short-term exposure, with a difference from the experimental estimates of less than 60% for all metrics. The deposited particle estimates were similar among the deposition models, particularly in the nanoparticle range for the head airway and alveolar regions. In conclusion, the results showed that the inhalation exposure models tested in this study are suitable for estimating short-term aerosol exposure (within half an hour), but not for estimating long-term exposure. Copyright © 2018 Elsevier GmbH. All rights reserved.
Aircraft Fault Detection Using Real-Time Frequency Response Estimation
NASA Technical Reports Server (NTRS)
Grauer, Jared A.
2016-01-01
A real-time method for estimating time-varying aircraft frequency responses from input and output measurements was demonstrated. The Bat-4 subscale airplane was used with NASA Langley Research Center's AirSTAR unmanned aerial flight test facility to conduct flight tests and collect data for dynamic modeling. Orthogonal phase-optimized multisine inputs, summed with pilot stick and pedal inputs, were used to excite the responses. The aircraft was tested in its normal configuration and with emulated failures, which included a stuck left ruddervator and an increased command path latency. No prior knowledge of a dynamic model was used or available for the estimation. The longitudinal short period dynamics were investigated in this work. Time-varying frequency responses and stability margins were tracked well using a 20 second sliding window of data, as compared to a post-flight analysis using output error parameter estimation and a low-order equivalent system model. This method could be used in a real-time fault detection system, or for other applications of dynamic modeling such as real-time verification of stability margins during envelope expansion tests.
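A simplified stand-in for the approach: the frequency response over a sliding window is estimated as the ratio of output to input Fourier coefficients at the multisine excitation frequencies. The system, frequencies, and window length below are arbitrary choices, not the Bat-4 setup.

```python
import numpy as np
from scipy import signal

fs = 200.0
t = np.arange(0, 20, 1 / fs)
freqs = np.array([0.5, 1.0, 2.0, 4.0, 8.0])          # excitation frequencies (Hz)
rng = np.random.default_rng(7)
u = sum(np.sin(2 * np.pi * f * t + p)
        for f, p in zip(freqs, rng.uniform(0, 2 * np.pi, freqs.size)))

# "True" system: a simple second-order discrete filter, plus measurement noise
b, a = signal.butter(2, 5.0, fs=fs)
y = signal.lfilter(b, a, u) + rng.normal(0, 0.01, t.size)

# Frequency response over the most recent 10-second window, at the excitation lines
win = int(10 * fs)
U = np.fft.rfft(u[-win:])
Y = np.fft.rfft(y[-win:])
f_axis = np.fft.rfftfreq(win, 1 / fs)
idx = [np.argmin(np.abs(f_axis - f)) for f in freqs]
frf = Y[idx] / U[idx]
for f, h in zip(freqs, frf):
    print(f"{f:4.1f} Hz: |H| = {abs(h):.3f}, phase = {np.degrees(np.angle(h)):6.1f} deg")
```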
A double hit model for the distribution of time to AIDS onset
NASA Astrophysics Data System (ADS)
Chillale, Nagaraja Rao
2013-09-01
Incubation time is a key epidemiologic descriptor of an infectious disease. In the case of HIV infection this is a random variable and is probably the longest one. The probability distribution of the incubation time is the major determinant of the relation between the incidence of HIV infection and its manifestation as AIDS. This is also one of the key factors used for accurate estimation of AIDS incidence in a region. The present article i) briefly reviews the work done, points out uncertainties in the estimation of AIDS onset time and stresses the need for its precise estimation, ii) highlights some of the modelling features of the onset distribution, including the immune failure mechanism, and iii) proposes a 'Double Hit' model for the distribution of time to AIDS onset in the cases of (a) independent and (b) dependent time variables of the two markers, and examines the applicability of a few standard probability models.
NASA Astrophysics Data System (ADS)
Ha, Taesung
A probabilistic risk assessment (PRA) was conducted for a loss of coolant accident (LOCA) in the McMaster Nuclear Reactor (MNR). A level 1 PRA was completed including event sequence modeling, system modeling, and quantification. To support the quantification of the accident sequences identified, data analysis using the Bayesian method and human reliability analysis (HRA) using the accident sequence evaluation procedure (ASEP) approach were performed. Since human performance in research reactors is significantly different from that in power reactors, a time-oriented HRA model (reliability physics model) was applied for the human error probability (HEP) estimation of the core relocation. This model is based on two competing random variables: phenomenological time and performance time. The response surface and direct Monte Carlo simulation with Latin Hypercube sampling were applied for estimating the phenomenological time, whereas the performance time was obtained from interviews with operators. An appropriate probability distribution for the phenomenological time was assigned by statistical goodness-of-fit tests. The human error probability (HEP) for the core relocation was estimated from these two competing quantities: phenomenological time and the operators' performance time. The sensitivity of each probability distribution in the human reliability estimation was investigated. In order to quantify the uncertainty in the predicted HEPs, a Bayesian approach was selected due to its capability of incorporating uncertainties in the model itself and the parameters of that model. The HEP from the current time-oriented model was compared with that from the ASEP approach. Both results were used to evaluate the sensitivity of alternative human reliability modeling for the manual core relocation in the LOCA risk model. This exercise demonstrated the applicability of a reliability physics model supplemented with a Bayesian approach for modeling human reliability and its potential usefulness for quantifying model uncertainty as a sensitivity analysis in the PRA model.
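A minimal Monte Carlo sketch of the reliability-physics idea, in which a human error occurs when the performance time exceeds the phenomenological time; the distributions and parameters are hypothetical, not those elicited for the MNR study.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 200_000
# Time available before damage (min) and time operators need to act (min); both assumed
phenomenological = rng.weibull(2.0, n) * 45.0
performance = rng.lognormal(mean=np.log(20.0), sigma=0.5, size=n)

# Human error probability = probability that the action takes longer than the time available
hep = np.mean(performance > phenomenological)
print(f"estimated human error probability: {hep:.4f}")
```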
NASA Astrophysics Data System (ADS)
Luu, Gia Thien; Boualem, Abdelbassit; Duy, Tran Trung; Ravier, Philippe; Butteli, Olivier
Muscle Fiber Conduction Velocity (MFCV) can be calculated from the time delay between the surface electromyographic (sEMG) signals recorded by electrodes aligned with the fiber direction. In order to take into account the non-stationarity of the data during dynamic contraction (the most common situation in daily life), the developed methods have to consider that the MFCV changes over time, which induces time-varying delays (TVD), and that the data are non-stationary (changing Power Spectral Density (PSD)). In this paper, the problem of TVD estimation is considered using a parametric method. First, a polynomial model of the TVD is proposed. Then, the TVD model parameters are estimated using a maximum likelihood estimation (MLE) strategy solved by a deterministic optimization technique (Newton) and a stochastic optimization technique called simulated annealing (SA). The performance of the two techniques is also compared. We also derive two appropriate Cramer-Rao Lower Bounds (CRLB) for the estimated TVD model parameters and for the TVD waveforms. Monte-Carlo simulation results show that the estimation of both the model parameters and the TVD function is unbiased and that the variance obtained is close to the derived CRLBs. A comparison with non-parametric approaches to TVD estimation is also presented and shows the superiority of the proposed method.
Forecasting Hourly Water Demands With Seasonal Autoregressive Models for Real-Time Application
NASA Astrophysics Data System (ADS)
Chen, Jinduan; Boccelli, Dominic L.
2018-02-01
Consumer water demands are not typically measured at temporal or spatial scales adequate to support real-time decision making, and recent approaches for estimating unobserved demands using observed hydraulic measurements are generally not capable of forecasting demands and uncertainty information. While time series modeling has shown promise for representing total system demands, these models have generally not been evaluated at spatial scales appropriate for representative real-time modeling. This study investigates the use of a double-seasonal time series model to capture daily and weekly autocorrelations to both total system demands and regional aggregated demands at a scale that would capture demand variability across a distribution system. Emphasis was placed on the ability to forecast demands and quantify uncertainties with results compared to traditional time series pattern-based demand models as well as nonseasonal and single-seasonal time series models. Additional research included the implementation of an adaptive-parameter estimation scheme to update the time series model when unobserved changes occurred in the system. For two case studies, results showed that (1) for the smaller-scale aggregated water demands, the log-transformed time series model resulted in improved forecasts, (2) the double-seasonal model outperformed other models in terms of forecasting errors, and (3) the adaptive adjustment of parameters during forecasting improved the accuracy of the generated prediction intervals. These results illustrate the capabilities of time series modeling to forecast both water demands and uncertainty estimates at spatial scales commensurate for real-time modeling applications and provide a foundation for developing a real-time integrated demand-hydraulic model.
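A hedged illustration of one way to handle two seasonal cycles (not necessarily the study's exact formulation): a SARIMAX model with a daily seasonal component and the weekly cycle represented by Fourier terms supplied as exogenous regressors, fit to a synthetic log-transformed demand series with statsmodels.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(9)
idx = pd.date_range("2021-01-01", periods=24 * 7 * 8, freq="H")
hours = np.arange(idx.size)
demand = (100 + 20 * np.sin(2 * np.pi * hours / 24)
          + 10 * np.sin(2 * np.pi * hours / 168)
          + rng.normal(0, 3, idx.size))
y = pd.Series(np.log(demand), index=idx)          # log transform, as found helpful above

def fourier(h, period, order):
    # Sine/cosine pairs representing a seasonal cycle of the given period (hours)
    return np.column_stack([f(2 * np.pi * k * h / period)
                            for k in range(1, order + 1) for f in (np.sin, np.cos)])

X = fourier(hours, period=168, order=2)
model = SARIMAX(y, exog=X, order=(1, 0, 1), seasonal_order=(1, 0, 1, 24))
res = model.fit(disp=False)
fc = res.get_forecast(steps=24, exog=fourier(hours[-1] + 1 + np.arange(24), 168, 2))
print(np.exp(fc.predicted_mean.head()))           # forecasts back-transformed to demand units
```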
Genetic parameters for racing records in trotters using linear and generalized linear models.
Suontama, M; van der Werf, J H J; Juga, J; Ojala, M
2012-09-01
Heritability and repeatability and genetic and phenotypic correlations were estimated for trotting race records with linear and generalized linear models using 510,519 records on 17,792 Finnhorses and 513,161 records on 25,536 Standardbred trotters. Heritability and repeatability were estimated for single racing time and earnings traits with linear models, and logarithmic scale was used for racing time and fourth-root scale for earnings to correct for nonnormality. Generalized linear models with a gamma distribution were applied for single racing time and with a multinomial distribution for single earnings traits. In addition, genetic parameters for annual earnings were estimated with linear models on the observed and fourth-root scales. Racing success traits of single placings, winnings, breaking stride, and disqualifications were analyzed using generalized linear models with a binomial distribution. Estimates of heritability were greatest for racing time, which ranged from 0.32 to 0.34. Estimates of heritability were low for single earnings with all distributions, ranging from 0.01 to 0.09. Annual earnings were closer to normal distribution than single earnings. Heritability estimates were moderate for annual earnings on the fourth-root scale, 0.19 for Finnhorses and 0.27 for Standardbred trotters. Heritability estimates for binomial racing success variables ranged from 0.04 to 0.12, being greatest for winnings and least for breaking stride. Genetic correlations among racing traits were high, whereas phenotypic correlations were mainly low to moderate, except correlations between racing time and earnings were high. On the basis of a moderate heritability and moderate to high repeatability for racing time and annual earnings, selection of horses for these traits is effective when based on a few repeated records. Because of high genetic correlations, direct selection for racing time and annual earnings would also result in good genetic response in racing success.
Parameter Estimates in Differential Equation Models for Chemical Kinetics
ERIC Educational Resources Information Center
Winkel, Brian
2011-01-01
We discuss the need for devoting time in differential equations courses to modelling and the completion of the modelling process with efforts to estimate the parameters in the models using data. We estimate the parameters present in several differential equation models of chemical reactions of order n, where n = 0, 1, 2, and apply more general…
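A short example of the kind of exercise described: simulate a second-order decay dC/dt = -kC^2, add noise, and recover the rate constant with SciPy; the rate constant, noise level, and sampling times are arbitrary choices.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import curve_fit

k_true, n_order, c0 = 0.15, 2, 1.0
t_data = np.linspace(0, 20, 15)

def c_model(t, k):
    # Integrate dC/dt = -k * C**n from C(0) = c0 and evaluate at the requested times
    sol = solve_ivp(lambda s, c: -k * c**n_order, (0, t.max()), [c0], t_eval=t)
    return sol.y[0]

rng = np.random.default_rng(10)
c_data = c_model(t_data, k_true) + rng.normal(0, 0.01, t_data.size)

k_hat, k_cov = curve_fit(c_model, t_data, c_data, p0=[0.05])
print(f"estimated k = {k_hat[0]:.3f} +/- {np.sqrt(k_cov[0, 0]):.3f}")
```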
Real-Time Parameter Estimation Using Output Error
NASA Technical Reports Server (NTRS)
Grauer, Jared A.
2014-01-01
Output-error parameter estimation, normally a post-flight batch technique, was applied to real-time dynamic modeling problems. Variations on the traditional algorithm were investigated with the goal of making the method suitable for operation in real time. Implementation recommendations are given that are dependent on the modeling problem of interest. Application to flight test data showed that accurate parameter estimates and uncertainties for the short-period dynamics model were available every 2 s using time domain data, or every 3 s using frequency domain data. The data compatibility problem was also solved in real time, providing corrected sensor measurements every 4 s. If uncertainty corrections for colored residuals are omitted, this rate can be increased to every 0.5 s.
Moyer, Douglas; Hirsch, Robert M.; Hyer, Kenneth
2012-01-01
Nutrient and sediment fluxes and changes in fluxes over time are key indicators that water resource managers can use to assess the progress being made in improving the structure and function of the Chesapeake Bay ecosystem. The U.S. Geological Survey collects annual nutrient (nitrogen and phosphorus) and sediment flux data and computes trends that describe the extent to which water-quality conditions are changing within the major Chesapeake Bay tributaries. Two regression-based approaches were compared for estimating annual nutrient and sediment fluxes and for characterizing how these annual fluxes are changing over time. The two regression models compared are the traditionally used ESTIMATOR and the newly developed Weighted Regression on Time, Discharge, and Season (WRTDS). The model comparison focused on answering three questions: (1) What are the differences between the functional form and construction of each model? (2) Which model produces estimates of flux with the greatest accuracy and least amount of bias? (3) How different would the historical estimates of annual flux be if WRTDS had been used instead of ESTIMATOR? One additional point of comparison between the two models is how each model determines trends in annual flux once the year-to-year variations in discharge have been determined. All comparisons were made using total nitrogen, nitrate, total phosphorus, orthophosphorus, and suspended-sediment concentration data collected at the nine U.S. Geological Survey River Input Monitoring stations located on the Susquehanna, Potomac, James, Rappahannock, Appomattox, Pamunkey, Mattaponi, Patuxent, and Choptank Rivers in the Chesapeake Bay watershed. Two model characteristics that uniquely distinguish ESTIMATOR and WRTDS are the fundamental model form and the determination of model coefficients. ESTIMATOR and WRTDS both predict water-quality constituent concentration by developing a linear relation between the natural logarithm of observed constituent concentration and three explanatory variables—the natural log of discharge, time, and season. ESTIMATOR uses two additional explanatory variables—the square of the log of discharge and time-squared. Both models determine coefficients for variables for a series of estimation windows. ESTIMATOR establishes variable coefficients for a series of 9-year moving windows; all observed constituent concentration data within the 9-year window are used to establish each coefficient. Conversely, WRTDS establishes variable coefficients for each combination of discharge and time using only observed concentration data that are similar in time, season, and discharge to the day being estimated. As a result of these distinguishing characteristics, ESTIMATOR reproduces concentration-discharge relations that are closely approximated by a quadratic or linear function with respect to both the log of discharge and time. Conversely, the linear model form of WRTDS coupled with extensive model windowing for each combination of discharge and time allows WRTDS to reproduce observed concentration-discharge relations that are more sinuous in form. Another distinction between ESTIMATOR and WRTDS is the reporting of uncertainty associated with the model estimates of flux and trend. ESTIMATOR quantifies the standard error of prediction associated with the determination of flux and trends. 
The standard error of prediction enables the determination of the 95-percent confidence intervals for flux and trend as well as the ability to test whether the reported trend is significantly different from zero (where zero equals no trend). Conversely, WRTDS is unable to propagate error through the many (over 5,000) models for unique combinations of flow and time to determine a total standard error. As a result, WRTDS flux estimates are not reported with confidence intervals and a level of significance is not determined for flow-normalized fluxes. The differences between ESTIMATOR and WRTDS, with regard to model form and determination of model coefficients, have an influence on the determination of nutrient and sediment fluxes and associated changes in flux over time as a result of management activities. The comparison between the model estimates of flux and trend was made for combinations of five water-quality constituents at nine River Input Monitoring stations. The major findings with regard to nutrient and sediment fluxes are as follows: (1) WRTDS produced estimates of flux for all combinations that were more accurate, based on reduction in root mean squared error, than flux estimates from ESTIMATOR; (2) for 67 percent of the combinations, WRTDS and ESTIMATOR both produced estimates of flux that were minimally biased compared to observed fluxes (flux bias = tendency to over- or underpredict flux observations); however, for 33 percent of the combinations, WRTDS produced estimates of flux that were considerably less biased (by at least 10 percent) than flux estimates from ESTIMATOR; (3) the average percent difference in annual fluxes generated by ESTIMATOR and WRTDS was less than 10 percent at 80 percent of the combinations; and (4) the greatest differences related to flux bias and annual fluxes all occurred for combinations where the pattern in the observed concentration-discharge relation was sinuous (two points of inflection) rather than linear or quadratic (zero or one point of inflection). The major findings with regard to trends are as follows: (1) both models produce water-quality trends that have factored in the year-to-year variations in flow; (2) trends in water-quality condition are represented by ESTIMATOR as a trend in flow-adjusted concentration and by WRTDS as a trend in flow-normalized flux; (3) for 67 percent of the combinations with trend estimates, the WRTDS trends in flow-normalized flux are in the same direction and of similar magnitude as the ESTIMATOR trends in flow-adjusted concentration, and at the remaining 33 percent the differences in trend magnitude and direction are related to fundamental differences between concentration and flux; and (4) the majority (85 percent) of the total nitrogen, nitrate, and orthophosphorus combinations exhibited long-term (1985 to 2010) trends in WRTDS flow-normalized flux that indicate improvement or reduction in associated flux, and the majority (83 percent) of the total phosphorus (from 1985 to 2010) and suspended sediment (from 2001 to 2010) combinations exhibited trends in WRTDS flow-normalized flux that indicate degradation or increases in the flux delivered.
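For readers who want the flavour of the WRTDS weighting scheme, here is a condensed sketch (half-window widths, weights, and data are illustrative, and the retransformation bias correction used operationally is omitted): observations close to the estimation point in time, season, and log discharge receive tricube weights, and a weighted regression of ln(concentration) on ln(discharge), time, and seasonal terms is fit locally.

```python
import numpy as np

def tricube(d, h):
    d = np.minimum(np.abs(d) / h, 1.0)
    return (1 - d**3) ** 3

def wrtds_estimate(t0, q0, t, q, c, h_time=7.0, h_season=0.5, h_lnq=2.0):
    # Weights based on distance in time (years), season (fraction of year), and log discharge
    season = (t - t0) - np.round(t - t0)
    w = tricube(t - t0, h_time) * tricube(season, h_season) * tricube(np.log(q) - np.log(q0), h_lnq)
    X = np.column_stack([np.ones_like(t), np.log(q), t,
                         np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
    sw = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(X * sw[:, None], np.log(c) * sw, rcond=None)
    x0 = np.array([1.0, np.log(q0), t0, np.sin(2 * np.pi * t0), np.cos(2 * np.pi * t0)])
    return np.exp(x0 @ beta)                      # estimated concentration at (t0, q0)

# Hypothetical monitoring record: decimal-year sample times, discharge, concentration
rng = np.random.default_rng(11)
t = np.sort(rng.uniform(1985, 2010, 400))
q = np.exp(rng.normal(4.0, 0.8, t.size))
c = np.exp(0.5 - 0.3 * np.log(q) + 0.01 * (t - 1985) + 0.2 * np.sin(2 * np.pi * t)
           + rng.normal(0, 0.2, t.size))
print(wrtds_estimate(2005.5, 60.0, t, q, c))
```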
Knopman, Debra S.; Voss, Clifford I.
1988-01-01
Sensitivities of solute concentration to parameters associated with first-order chemical decay, boundary conditions, initial conditions, and multilayer transport are examined in one-dimensional analytical models of transient solute transport in porous media. A sensitivity is the change in solute concentration resulting from a change in a model parameter. Sensitivity analysis is important because the minimum information required for the estimation of model parameters by regression on chemical data is expressed in terms of sensitivities. Nonlinear regression models of solute transport were tested on sets of noiseless observations from known models that exceeded the minimum sensitivity information requirements. Results demonstrate that the regression models consistently converged to the correct parameters when the initial sets of parameter values deviated substantially from the correct parameters. On the basis of the sensitivity analysis, several statements may be made about the design of sampling for parameter estimation for the models examined: (1) estimation of parameters associated with solute transport in the individual layers of a multilayer system is possible even when solute concentrations in the individual layers are mixed in an observation well; (2) when estimating parameters in a decaying upstream boundary condition, observations are best made late in the passage of the front, near a time chosen by adding the inverse of an hypothesized value of the source decay parameter to the estimated mean travel time at a given downstream location; (3) estimation of a first-order chemical decay parameter requires observations to be made late in the passage of the front, preferably near a location corresponding to a travel time of √2 times the half-life of the solute; and (4) estimation of a parameter relating to spatial variability in an initial condition requires observations to be made early in time relative to passage of the solute front.
Breen, Michael S.; Long, Thomas C.; Schultz, Bradley D.; Crooks, James; Breen, Miyuki; Langstaff, John E.; Isaacs, Kristin K.; Tan, Yu-Mei; Williams, Ronald W.; Cao, Ye; Geller, Andrew M.; Devlin, Robert B.; Batterman, Stuart A.; Buckley, Timothy J.
2014-01-01
A critical aspect of air pollution exposure assessment is the estimation of the time spent by individuals in various microenvironments (ME). Accounting for the time spent in different ME with different pollutant concentrations can reduce exposure misclassifications, while failure to do so can add uncertainty and bias to risk estimates. In this study, a classification model, called MicroTrac, was developed to estimate time of day and duration spent in eight ME (indoors and outdoors at home, work, school; inside vehicles; other locations) from global positioning system (GPS) data and geocoded building boundaries. Based on a panel study, MicroTrac estimates were compared with 24-h diary data from nine participants, with corresponding GPS data and building boundaries of home, school, and work. MicroTrac correctly classified the ME for 99.5% of the daily time spent by the participants. The capability of MicroTrac could help to reduce the time–location uncertainty in air pollution exposure models and exposure metrics for individuals in health studies. PMID:24619294
Price responsiveness of demand for cigarettes: does rationality matter?
Laporte, Audrey
2006-01-01
Meta-analysis is applied to aggregate-level studies that model the demand for cigarettes using static, myopic, or rational addiction frameworks in an attempt to synthesize key findings in the literature and to identify determinants of the variation in reported price elasticity estimates across studies. The results suggest that the rational addiction framework produces statistically similar estimates to the static framework but that studies that use the myopic framework tend to report more elastic price effects. Studies that applied panel data techniques or controlled for cross-border smuggling reported more elastic price elasticity estimates, whereas the use of instrumental variable techniques and time trends or time dummy variables produced less elastic estimates. The finding that myopic models produce different estimates than either of the other two model frameworks underscores that careful attention must be given to time series properties of the data.
A distributed, dynamic, parallel computational model: the role of noise in velocity storage
Merfeld, Daniel M.
2012-01-01
Networks of neurons perform complex calculations using distributed, parallel computation, including dynamic “real-time” calculations required for motion control. The brain must combine sensory signals to estimate the motion of body parts using imperfect information from noisy neurons. Models and experiments suggest that the brain sometimes optimally minimizes the influence of noise, although it remains unclear when and precisely how neurons perform such optimal computations. To investigate, we created a model of velocity storage based on a relatively new technique–“particle filtering”–that is both distributed and parallel. It extends existing observer and Kalman filter models of vestibular processing by simulating the observer model many times in parallel with noise added. During simulation, the variance of the particles defining the estimator state is used to compute the particle filter gain. We applied our model to estimate one-dimensional angular velocity during yaw rotation, which yielded estimates for the velocity storage time constant, afferent noise, and perceptual noise that matched experimental data. We also found that the velocity storage time constant was Bayesian optimal by comparing the estimate of our particle filter with the estimate of the Kalman filter, which is optimal. The particle filter demonstrated a reduced velocity storage time constant when afferent noise increased, which mimics what is known about aminoglycoside ablation of semicircular canal hair cells. This model helps bridge the gap between parallel distributed neural computation and systems-level behavioral responses like the vestibuloocular response and perception. PMID:22514288
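A minimal sketch of the kind of sampling-importance-resampling particle filter described above, applied to a one-dimensional leaky-integrator velocity state observed through a noisy afferent channel; the dynamics, noise levels, and time constant are hypothetical illustrations, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative one-dimensional state-space model: angular velocity decays with
# a leaky time constant and is observed through a noisy "afferent" channel.
tau, dt = 15.0, 0.1              # hypothetical time constant (s) and time step (s)
q_std, r_std = 0.05, 0.5         # process and afferent (measurement) noise s.d.
n_steps, n_particles = 600, 2000

true_v = np.zeros(n_steps)
true_v[0] = 60.0                 # initial head velocity (deg/s)
particles = rng.uniform(-100.0, 100.0, n_particles)   # broad prior over velocities
estimate = np.empty(n_steps)

for k in range(n_steps):
    if k > 0:
        true_v[k] = true_v[k - 1] * (1 - dt / tau) + rng.normal(0, q_std)
    obs = true_v[k] + rng.normal(0, r_std)

    # Propagate particles through the same leaky dynamics, with process noise.
    particles = particles * (1 - dt / tau) + rng.normal(0, q_std, n_particles)
    # Weight each particle by the likelihood of the observation, then resample (SIR).
    w = np.exp(-0.5 * ((obs - particles) / r_std) ** 2) + 1e-300
    w /= w.sum()
    particles = rng.choice(particles, size=n_particles, replace=True, p=w)
    estimate[k] = particles.mean()

print("final estimate:", round(estimate[-1], 2), "true:", round(true_v[-1], 2))
```

The spread of the particle cloud plays the role that the gain computation plays in the paper's model: larger afferent noise widens the cloud and reduces how strongly each observation pulls the estimate.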
Information Flow in an Atmospheric Model and Data Assimilation
ERIC Educational Resources Information Center
Yoon, Young-noh
2011-01-01
Weather forecasting consists of two processes, model integration and analysis (data assimilation). During the model integration, the state estimate produced by the analysis evolves to the next cycle time according to the atmospheric model to become the background estimate. The analysis then produces a new state estimate by combining the background…
FracFit: A Robust Parameter Estimation Tool for Anomalous Transport Problems
NASA Astrophysics Data System (ADS)
Kelly, J. F.; Bolster, D.; Meerschaert, M. M.; Drummond, J. D.; Packman, A. I.
2016-12-01
Anomalous transport cannot be adequately described with classical Fickian advection-dispersion equations (ADE). Rather, fractional calculus models may be used, which capture non-Fickian behavior (e.g. skewness and power-law tails). FracFit is a robust parameter estimation tool based on space- and time-fractional models used to model anomalous transport. Currently, four fractional models are supported: 1) space fractional advection-dispersion equation (sFADE), 2) time-fractional dispersion equation with drift (TFDE), 3) fractional mobile-immobile equation (FMIE), and 4) tempered fractional mobile-immobile equation (TFMIE); additional models may be added in the future. Model solutions using pulse initial conditions and continuous injections are evaluated using stable distribution PDFs and CDFs or subordination integrals. Parameter estimates are extracted from measured breakthrough curves (BTCs) using a weighted nonlinear least squares (WNLS) algorithm. Optimal weights for BTCs for pulse initial conditions and continuous injections are presented, facilitating the estimation of power-law tails. Two sample applications are analyzed: 1) continuous injection laboratory experiments using natural organic matter and 2) pulse injection BTCs in the Selke River. Model parameters are compared across models and goodness-of-fit metrics are presented, assisting model evaluation. The sFADE and time-fractional models are compared using space-time duality (Baeumer et al., 2009), which links the two paradigms.
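The following sketch illustrates the general idea of weighted nonlinear least squares extraction of parameters from a breakthrough curve, here using a stable density as a stand-in for an sFADE pulse solution; it is not the FracFit code, and the model form, weights, and parameter values are assumptions for illustration only.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(1)

def btc_model(t, alpha, v, scale):
    """Pulse-injection breakthrough curve modeled as a stable density
    (illustrative stand-in for a space-fractional ADE solution)."""
    return stats.levy_stable.pdf(t, alpha, 1.0, loc=v, scale=scale)

# Synthetic "measured" BTC with multiplicative noise.
t = np.linspace(1.0, 50.0, 60)
c_true = btc_model(t, 1.6, 12.0, 2.5)
c_obs = c_true * rng.lognormal(0.0, 0.05, t.size)

# Weighted nonlinear least squares: relative (1/C) weights keep the
# low-concentration power-law tail from being swamped by the peak.
sigma = np.maximum(c_obs, 1e-6)          # ~ proportional error model
popt, pcov = optimize.curve_fit(btc_model, t, c_obs, p0=[1.8, 10.0, 2.0],
                                sigma=sigma, absolute_sigma=False,
                                bounds=([1.1, 1.0, 0.1], [2.0, 30.0, 10.0]))
print("alpha, v, scale =", popt)
print("standard errors  =", np.sqrt(np.diag(pcov)))
```

Note that evaluating stable densities is numerically expensive, so a production tool would typically cache or vectorize these evaluations.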
Hierarchical model analysis of the Atlantic Flyway Breeding Waterfowl Survey
Sauer, John R.; Zimmerman, Guthrie S.; Klimstra, Jon D.; Link, William A.
2014-01-01
We used log-linear hierarchical models to analyze data from the Atlantic Flyway Breeding Waterfowl Survey. The survey has been conducted by state biologists each year since 1989 in the northeastern United States from Virginia north to New Hampshire and Vermont. Although yearly population estimates from the survey are used by the United States Fish and Wildlife Service for estimating regional waterfowl population status for mallards (Anas platyrhynchos), black ducks (Anas rubripes), wood ducks (Aix sponsa), and Canada geese (Branta canadensis), they are not routinely adjusted to control for time of day effects and other survey design issues. The hierarchical model analysis permits estimation of year effects and population change while accommodating the repeated sampling of plots and controlling for time of day effects in counting. We compared population estimates from the current stratified random sample analysis to population estimates from hierarchical models with alternative model structures that describe year-to-year changes as random year effects, a trend with random year effects, or year effects modeled as 1-year differences. Patterns of population change from the hierarchical model results generally were similar to the patterns described by stratified random sample estimates, but significant visibility differences occurred between twilight and midday counts in all species. Controlling for the effects of time of day resulted in larger population estimates for all species in the hierarchical model analysis relative to the stratified random sample analysis. The hierarchical models also provided a convenient means of estimating population trend as derived statistics from the analysis. We detected significant declines in mallards and American black ducks and significant increases in wood ducks and Canada geese, a trend that had not been significant for 3 of these 4 species in the prior analysis. We recommend using hierarchical models for analysis of the Atlantic Flyway Breeding Waterfowl Survey.
Estimation of Bid Curves in Power Exchanges using Time-varying Simultaneous-Equations Models
NASA Astrophysics Data System (ADS)
Ofuji, Kenta; Yamaguchi, Nobuyuki
The simultaneous-equations model (SEM) is generally used in economics to estimate interdependent endogenous variables such as price and quantity in a competitive equilibrium market. In this paper, we apply SEM to the JEPX (Japan Electric Power eXchange) spot market, a single-price auction market, using publicly available data on selling and buying bid volumes, system price, and traded quantity. The aim of the analysis is to understand the magnitude of the influence of selling and buying bids on auctioned prices and quantity, rather than to forecast prices and quantity for risk-management purposes. In contrast to Ordinary Least Squares (OLS) estimation, whose results represent time-independent average values, we employ a time-varying simultaneous-equations model (TV-SEM) to capture structural changes in those influences, using state-space models with stepwise Kalman filter estimation. The results show that buying bid volume has the highest magnitude of influence among the factors considered and exhibits time-dependent changes ranging as widely as about 240% of its average. The slope of the supply curve also varies over time, implying an elastic supply, while the demand curve remains comparatively inelastic and stable over time.
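A minimal sketch of the underlying idea, estimating a regression with random-walk (time-varying) coefficients by a Kalman filter; the single-equation form, synthetic data, and noise covariances are illustrative assumptions and do not reproduce the paper's TV-SEM specification.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic spot-market-like data: price depends on buying-bid volume through
# a coefficient that drifts over time (all numbers illustrative).
n = 300
buy = rng.normal(100.0, 10.0, n)
beta_true = 0.3 + 0.2 * np.sin(np.linspace(0, 3 * np.pi, n))
price = 5.0 + beta_true * buy + rng.normal(0, 1.0, n)

# State = [intercept, slope], both modeled as random walks.
x = np.zeros(2)
P = np.eye(2) * 10.0
Q = np.eye(2) * 1e-4          # state (random-walk) noise covariance
R = 1.0                       # observation noise variance
path = np.empty((n, 2))

for t in range(n):
    P = P + Q                                  # time update (random walk)
    H = np.array([1.0, buy[t]])                # observation row
    S = H @ P @ H + R
    K = P @ H / S                              # Kalman gain
    x = x + K * (price[t] - H @ x)             # measurement update
    P = P - np.outer(K, H) @ P
    path[t] = x

print("final slope estimate:", path[-1, 1], "true:", beta_true[-1])
```

A full TV-SEM would stack several such observation equations and estimate them jointly, but the filtering recursion is the same in spirit.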
Mediation Analysis with Survival Outcomes: Accelerated Failure Time vs. Proportional Hazards Models.
Gelfand, Lois A; MacKinnon, David P; DeRubeis, Robert J; Baraldi, Amanda N
2016-01-01
Survival time is an important type of outcome variable in treatment research. Currently, limited guidance is available regarding performing mediation analyses with survival outcomes, which generally do not have normally distributed errors, and contain unobserved (censored) events. We present considerations for choosing an approach, using a comparison of semi-parametric proportional hazards (PH) and fully parametric accelerated failure time (AFT) approaches for illustration. We compare PH and AFT models and procedures in their integration into mediation models and review their ability to produce coefficients that estimate causal effects. Using simulation studies modeling Weibull-distributed survival times, we compare statistical properties of mediation analyses incorporating PH and AFT approaches (employing SAS procedures PHREG and LIFEREG, respectively) under varied data conditions, some including censoring. A simulated data set illustrates the findings. AFT models integrate more easily than PH models into mediation models. Furthermore, mediation analyses incorporating LIFEREG produce coefficients that can estimate causal effects, and demonstrate superior statistical properties. Censoring introduces bias in the coefficient estimate representing the treatment effect on the outcome: underestimation in LIFEREG, and overestimation in PHREG. With LIFEREG, this bias can be addressed using an alternative estimate obtained from combining other coefficients, whereas this is not possible with PHREG. When Weibull assumptions are not violated, there are compelling advantages to using LIFEREG over PHREG for mediation analyses involving survival-time outcomes. Irrespective of the procedures used, the interpretation of coefficients, effects of censoring on coefficient estimates, and statistical properties should be taken into account when reporting results.
New Flutter Analysis Technique for Time-Domain Computational Aeroelasticity
NASA Technical Reports Server (NTRS)
Pak, Chan-Gi; Lung, Shun-Fat
2017-01-01
A new time-domain approach for computing flutter speed is presented. Based on the time-history result of aeroelastic simulation, the unknown unsteady aerodynamics model is estimated using a system identification technique. The full aeroelastic model is generated via coupling the estimated unsteady aerodynamic model with the known linear structure model. The critical dynamic pressure is computed and used in the subsequent simulation until the convergence of the critical dynamic pressure is achieved. The proposed method is applied to a benchmark cantilevered rectangular wing.
Magari, Robert T
2002-03-01
The effect of different lot-to-lot variability levels on the prediction of stability is studied based on two statistical models for estimating degradation in real-time and accelerated stability tests. Lot-to-lot variability is treated as random in both models and is attributed to two sources: variability at time zero and variability of the degradation rate. Real-time stability tests are modeled as a function of time, while accelerated stability tests are modeled as a function of time and temperature. Several data sets were simulated, and a maximum likelihood approach was used for estimation. The 95% confidence intervals for the degradation rate depend on the amount of lot-to-lot variability. When lot-to-lot degradation rate variability is relatively large (CV ≥ 8%), the estimated confidence intervals do not represent the trend for individual lots; in such cases it is recommended to analyze each lot individually. Copyright 2002 Wiley-Liss, Inc. and the American Pharmaceutical Association. J Pharm Sci 91: 893-899, 2002.
Chi, Chih-Lin; Zeng, Wenjun; Oh, Wonsuk; Borson, Soo; Lenskaia, Tatiana; Shen, Xinpeng; Tonellato, Peter J
2017-12-01
Prediction of onset and progression of cognitive decline and dementia is important both for understanding the underlying disease processes and for planning health care for populations at risk. Predictors identified in research studies are typically accessed at one point in time. In this manuscript, we argue that an accurate model for predicting cognitive status over relatively long periods requires inclusion of time-varying components that are sequentially assessed at multiple time points (e.g., in multiple follow-up visits). We developed a pilot model to test the feasibility of using either estimated or observed risk factors to predict cognitive status. We developed two models, the first using a sequential estimation of risk factors originally obtained from 8 years prior, then improved by optimization. This model can predict how cognition will change over relatively long time periods. The second model uses observed rather than estimated time-varying risk factors and, as expected, results in better prediction. This model can predict when newly observed data are acquired in a follow-up visit. The performance of both models, evaluated with 10-fold cross-validation and in various patient subgroups, provides supporting evidence for these pilot models. Each model consists of multiple base prediction units (BPUs), which were trained using the same set of data. The difference in usage and function between the two models is the source of input data: either estimated or observed data. In the next step of model refinement, we plan to integrate the two types of data together to flexibly predict dementia status and changes over time, when some time-varying predictors are measured only once and others are measured repeatedly. Computationally, the two types of data provide upper and lower bounds on predictive performance. Copyright © 2017 Elsevier Inc. All rights reserved.
Transfer Function Identification Using Orthogonal Fourier Transform Modeling Functions
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
2013-01-01
A method for transfer function identification, including both model structure determination and parameter estimation, was developed and demonstrated. The approach uses orthogonal modeling functions generated from frequency domain data obtained by Fourier transformation of time series data. The method was applied to simulation data to identify continuous-time transfer function models and unsteady aerodynamic models. Model fit error, estimated model parameters, and the associated uncertainties were used to show the effectiveness of the method for identifying accurate transfer function models from noisy data.
Wynant, Willy; Abrahamowicz, Michal
2016-11-01
Standard optimization algorithms for maximizing likelihood may not be applicable to the estimation of those flexible multivariable models that are nonlinear in their parameters. For applications where the model's structure permits separating estimation of mutually exclusive subsets of parameters into distinct steps, we propose the alternating conditional estimation (ACE) algorithm. We validate the algorithm, in simulations, for estimation of two flexible extensions of Cox's proportional hazards model where the standard maximum partial likelihood estimation does not apply, with simultaneous modeling of (1) nonlinear and time-dependent effects of continuous covariates on the hazard, and (2) nonlinear interaction and main effects of the same variable. We also apply the algorithm in real-life analyses to estimate nonlinear and time-dependent effects of prognostic factors for mortality in colon cancer. Analyses of both simulated and real-life data illustrate good statistical properties of the ACE algorithm and its ability to yield new potentially useful insights about the data structure. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
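A toy sketch of the alternating idea behind ACE: a model that is nonlinear in its full parameter vector but linear in each block separately, estimated by alternating two conditional least-squares fits. The basis choice and data are illustrative, a scale normalization between the two blocks is omitted, and the example is not the authors' survival-model implementation.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative model: y = s(t) * h(x) + noise, with s and h expanded in simple
# polynomial bases.  The full parameter vector enters nonlinearly (as a
# product), but each block is linear once the other is held fixed.
n = 500
t = rng.uniform(0, 1, n)
x = rng.uniform(0, 1, n)
y = (1.0 + 2.0 * t) * (0.5 + x ** 2) + rng.normal(0, 0.1, n)

Bt = np.column_stack([np.ones(n), t, t ** 2])      # basis for s(t)
Bx = np.column_stack([np.ones(n), x, x ** 2])      # basis for h(x)

gamma = np.array([1.0, 0.0, 0.0])                  # start: s(t) = 1
delta = np.array([1.0, 0.0, 0.0])

for it in range(50):
    # Step 1: fix delta, fit gamma by ordinary least squares.
    h_vals = Bx @ delta
    gamma_new, *_ = np.linalg.lstsq(Bt * h_vals[:, None], y, rcond=None)
    # Step 2: fix gamma, fit delta.
    s_vals = Bt @ gamma_new
    delta_new, *_ = np.linalg.lstsq(Bx * s_vals[:, None], y, rcond=None)
    if np.allclose(gamma_new, gamma) and np.allclose(delta_new, delta):
        break
    gamma, delta = gamma_new, delta_new

# The two blocks are identified only up to a reciprocal scaling.
print("s(t) coefficients:", gamma)
print("h(x) coefficients:", delta)
```

Each step solves an easy conditional problem, which is exactly what makes the alternating scheme attractive when the joint likelihood has no closed-form or stable maximizer.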
Estimating Dynamical Systems: Derivative Estimation Hints From Sir Ronald A. Fisher.
Deboeck, Pascal R
2010-08-06
The fitting of dynamical systems to psychological data offers the promise of addressing new and innovative questions about how people change over time. One method of fitting dynamical systems is to estimate the derivatives of a time series and then examine the relationships between derivatives using a differential equation model. One common approach for estimating derivatives, Local Linear Approximation (LLA), produces estimates with correlated errors. Depending on the specific differential equation model used, such correlated errors can lead to severely biased estimates of differential equation model parameters. This article shows that the fitting of dynamical systems can be improved by estimating derivatives in a manner similar to that used to fit orthogonal polynomials. Two applications using simulated data compare the proposed method and a generalized form of LLA when used to estimate derivatives and when used to estimate differential equation model parameters. A third application estimates the frequency of oscillation in observations of the monthly deaths from bronchitis, emphysema, and asthma in the United Kingdom. These data are publicly available in the statistical program R, and functions in R for the method presented are provided.
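A minimal sketch of estimating derivatives by fitting a low-order polynomial in a sliding window, in the spirit of the orthogonal-polynomial alternative to local linear approximation discussed above; the window length, test signal, and frequency calculation are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def local_poly_derivatives(y, t, window=7, degree=2):
    """Estimate the level, first and second derivative of a time series by
    fitting a low-order polynomial in a sliding window centered at each point."""
    half = window // 2
    out = np.full((len(y), degree + 1), np.nan)
    for i in range(half, len(y) - half):
        idx = slice(i - half, i + half + 1)
        # Center time so the fitted coefficients are the derivatives at t[i].
        coefs = np.polynomial.polynomial.polyfit(t[idx] - t[i], y[idx], degree)
        out[i, 0] = coefs[0]                 # level
        out[i, 1] = coefs[1]                 # first derivative
        out[i, 2] = 2.0 * coefs[2]           # second derivative
    return out

# Noisy damped oscillator as a test signal (illustrative only).
t = np.linspace(0, 20, 400)
y = np.exp(-0.1 * t) * np.cos(2.0 * t) + rng.normal(0, 0.02, t.size)
d = local_poly_derivatives(y, t)

# Rough frequency estimate from the ratio of second derivative to level
# (for x'' = -omega^2 x, omega = sqrt(-x''/x) away from zero crossings).
mask = np.abs(d[:, 0]) > 0.2
omega = np.sqrt(np.clip(-d[mask, 2] / d[mask, 0], 0, None))
print("median estimated omega:", np.median(omega), "(true value approx. 2.0)")
```

The window acts like the time span used to define the derivative estimates; widening it smooths more aggressively and trades bias against the correlated-error problem noted in the abstract.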
Estimation of stochastic volatility with long memory for index prices of FTSE Bursa Malaysia KLCI
NASA Astrophysics Data System (ADS)
Chen, Kho Chia; Bahar, Arifah; Kane, Ibrahim Lawal; Ting, Chee-Ming; Rahman, Haliza Abd
2015-02-01
In recent years, modeling in long memory properties or fractionally integrated processes in stochastic volatility has been applied in the financial time series. A time series with structural breaks can generate a strong persistence in the autocorrelation function, which is an observed behaviour of a long memory process. This paper considers the structural break of data in order to determine true long memory time series data. Unlike usual short memory models for log volatility, the fractional Ornstein-Uhlenbeck process is neither a Markovian process nor can it be easily transformed into a Markovian process. This makes the likelihood evaluation and parameter estimation for the long memory stochastic volatility (LMSV) model challenging tasks. The drift and volatility parameters of the fractional Ornstein-Unlenbeck model are estimated separately using the least square estimator (lse) and quadratic generalized variations (qgv) method respectively. Finally, the empirical distribution of unobserved volatility is estimated using the particle filtering with sequential important sampling-resampling (SIR) method. The mean square error (MSE) between the estimated and empirical volatility indicates that the performance of the model towards the index prices of FTSE Bursa Malaysia KLCI is fairly well.
Efficient multidimensional regularization for Volterra series estimation
NASA Astrophysics Data System (ADS)
Birpoutsoukis, Georgios; Csurcsia, Péter Zoltán; Schoukens, Johan
2018-05-01
This paper presents an efficient nonparametric time domain nonlinear system identification method. It is shown how truncated Volterra series models can be efficiently estimated without the need of long, transient-free measurements. The method is a novel extension of the regularization methods that have been developed for impulse response estimates of linear time invariant systems. To avoid the excessive memory needs in case of long measurements or large number of estimated parameters, a practical gradient-based estimation method is also provided, leading to the same numerical results as the proposed Volterra estimation method. Moreover, the transient effects in the simulated output are removed by a special regularization method based on the novel ideas of transient removal for Linear Time-Varying (LTV) systems. Combining the proposed methodologies, the nonparametric Volterra models of the cascaded water tanks benchmark are presented in this paper. The results for different scenarios varying from a simple Finite Impulse Response (FIR) model to a 3rd degree Volterra series with and without transient removal are compared and studied. It is clear that the obtained models capture the system dynamics when tested on a validation dataset, and their performance is comparable with the white-box (physical) models.
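A sketch of the regularization idea for the linear (first-order) case, i.e. a ridge-type FIR estimate with a decaying "TC"-style prior kernel, which the paper extends to multidimensional Volterra kernels; the data, kernel hyperparameters, and model order below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

# Identify a first-order (linear) Volterra kernel, i.e. an FIR model, with a
# smoothness/decay regularization kernel of the type used for LTI impulse
# responses.  The paper's contribution is the multidimensional extension.
n, m = 400, 60                      # data length, FIR length
u = rng.normal(0, 1, n)
g_true = 0.8 ** np.arange(m) * np.sin(0.4 * np.arange(m))
y = np.convolve(u, g_true)[:n] + rng.normal(0, 0.05, n)

# Regression matrix of lagged inputs: Phi[t, k] = u[t - k].
Phi = np.zeros((n, m))
for k in range(m):
    Phi[k:, k] = u[:n - k]

# TC-style prior kernel: P[i, j] = c * lambda**max(i, j) encodes decay.
lam, c, noise_var = 0.9, 1.0, 0.05 ** 2
i_idx, j_idx = np.meshgrid(np.arange(m), np.arange(m), indexing="ij")
P = c * lam ** np.maximum(i_idx, j_idx)

# Regularized (MAP) estimate: g = (Phi'Phi + sigma^2 * P^-1)^-1 Phi'y.
A = Phi.T @ Phi + noise_var * np.linalg.inv(P)
g_hat = np.linalg.solve(A, Phi.T @ y)
print("relative error:", np.linalg.norm(g_hat - g_true) / np.linalg.norm(g_true))
```

For higher-order kernels the same construction applies, but the prior covariance must encode decay and smoothness along every lag dimension, which is where the memory-efficient gradient-based solver mentioned above becomes important.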
Advances in parameter estimation techniques applied to flexible structures
NASA Technical Reports Server (NTRS)
Maben, Egbert; Zimmerman, David C.
1994-01-01
In this work, various parameter estimation techniques are investigated in the context of structural system identification utilizing distributed parameter models and 'measured' time-domain data. Distributed parameter models are formulated using the PDEMOD software developed by Taylor. Enhancements made to PDEMOD for this work include the following: (1) a Wittrick-Williams based root solving algorithm; (2) a time simulation capability; and (3) various parameter estimation algorithms. The parameter estimation schemes are contrasted using the NASA Mini-Mast as the focus structure.
Xiao, Yongling; Abrahamowicz, Michal
2010-03-30
We propose two bootstrap-based methods to correct the standard errors (SEs) from Cox's model for within-cluster correlation of right-censored event times. The cluster-bootstrap method resamples, with replacement, only the clusters, whereas the two-step bootstrap method resamples (i) the clusters, and (ii) individuals within each selected cluster, with replacement. In simulations, we evaluate both methods and compare them with the existing robust variance estimator and the shared gamma frailty model, which are available in statistical software packages. We simulate clustered event time data, with latent cluster-level random effects, which are ignored in the conventional Cox model. For cluster-level covariates, both proposed bootstrap methods yield accurate SEs, accurate type I error rates, and acceptable coverage rates, regardless of the true random effects distribution, and avoid the serious variance under-estimation of conventional Cox-based standard errors. However, the two-step bootstrap method over-estimates the variance for individual-level covariates. We also apply the proposed bootstrap methods to obtain confidence bands around flexible estimates of time-dependent effects in a real-life analysis of clustered event times.
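A minimal sketch of the cluster-bootstrap procedure for a cluster-level covariate, assuming the Python lifelines package for the Cox fit (any Cox implementation would do); the simulated frailty model, sample sizes, and number of bootstrap replicates are illustrative assumptions.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter   # assumed available; any Cox fitter works

rng = np.random.default_rng(6)

# Simulated clustered event times: a shared cluster frailty induces
# within-cluster correlation that the plain Cox model ignores.
clusters = 50
rows = []
for c in range(clusters):
    frailty = rng.lognormal(0.0, 0.5)
    x_cluster = rng.binomial(1, 0.5)             # cluster-level covariate
    for _ in range(10):
        t = rng.exponential(1.0 / (0.1 * frailty * np.exp(0.5 * x_cluster)))
        rows.append({"cluster": c, "time": min(t, 20.0),
                     "event": int(t <= 20.0), "x": x_cluster})
df = pd.DataFrame(rows)

def fit_beta(data):
    cph = CoxPHFitter()
    cph.fit(data[["time", "event", "x"]], duration_col="time", event_col="event")
    return cph.params_["x"]

# Cluster bootstrap: resample whole clusters with replacement.
boot = []
ids = df["cluster"].unique()
for _ in range(200):
    pick = rng.choice(ids, size=len(ids), replace=True)
    sample = pd.concat([df[df["cluster"] == c] for c in pick], ignore_index=True)
    boot.append(fit_beta(sample))

print("point estimate:", fit_beta(df))
print("cluster-bootstrap SE:", np.std(boot, ddof=1))
```

The two-step variant described in the abstract would additionally resample individuals within each selected cluster before refitting.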
A comparative simulation study of AR(1) estimators in short time series.
Krone, Tanja; Albers, Casper J; Timmerman, Marieke E
2017-01-01
Various estimators of the autoregressive model exist. We compare their performance in estimating the autocorrelation in short time series. In Study 1, under correct model specification, we compare the frequentist r1 estimator, C-statistic, ordinary least squares estimator (OLS) and maximum likelihood estimator (MLE), and a Bayesian method, considering flat (Bf) and symmetrized reference (Bsr) priors. In a completely crossed experimental design we vary lengths of time series (i.e., T = 10, 25, 40, 50 and 100) and autocorrelation (from -0.90 to 0.90 with steps of 0.10). The results show the lowest bias for Bsr and the lowest variability for r1. The power in different conditions is highest for Bsr and OLS. For T = 10, the absolute performance of all measurements is poor, as expected. In Study 2, we study robustness of the methods through misspecification by generating the data according to an ARMA(1,1) model, but still analysing the data with an AR(1) model. We use the two methods with the lowest bias for this study, i.e., Bsr and MLE. The bias gets larger when the non-modelled moving average parameter becomes larger. Both the variability and power show dependency on the non-modelled parameter. The differences between the two estimation methods are negligible for all measurements.
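A minimal sketch comparing two of the frequentist estimators discussed above (r1 and OLS) with a crude grid-search Gaussian MLE on short simulated AR(1) series; the Bayesian estimators are omitted, and the simulation settings below are illustrative rather than the study's full design.

```python
import numpy as np

rng = np.random.default_rng(7)

def r1(y):
    """Classical lag-1 autocorrelation estimator."""
    d = y - y.mean()
    return np.sum(d[1:] * d[:-1]) / np.sum(d ** 2)

def ols_ar1(y):
    """OLS slope of y[t] regressed on y[t-1] (with intercept)."""
    X = np.column_stack([np.ones(len(y) - 1), y[:-1]])
    beta, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
    return beta[1]

def mle_ar1(y, grid=np.linspace(-0.99, 0.99, 397)):
    """Gaussian MLE for the AR(1) coefficient via a coarse grid search
    (innovation variance profiled out, mean fixed at the sample mean)."""
    mu, T = y.mean(), len(y)
    best_phi, best_ll = 0.0, -np.inf
    for phi in grid:
        e = y[1:] - mu - phi * (y[:-1] - mu)
        s2 = ((1 - phi ** 2) * (y[0] - mu) ** 2 + np.sum(e ** 2)) / T
        ll = -0.5 * T * np.log(s2) + 0.5 * np.log(1 - phi ** 2)
        if ll > best_ll:
            best_phi, best_ll = phi, ll
    return best_phi

# Short series (T = 25), as in one of the simulated conditions.
phi_true, T = 0.5, 25
est = {"r1": [], "ols": [], "mle": []}
for _ in range(500):
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = phi_true * y[t - 1] + rng.normal()
    est["r1"].append(r1(y)); est["ols"].append(ols_ar1(y)); est["mle"].append(mle_ar1(y))

for k, v in est.items():
    print(f"{k}: mean={np.mean(v):.3f}  sd={np.std(v):.3f}  (true {phi_true})")
```

Even this toy run reproduces the familiar downward bias of autocorrelation estimates in short series that motivates the study.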
NASA Astrophysics Data System (ADS)
Cheung, Shao-Yong; Lee, Chieh-Han; Yu, Hwa-Lung
2017-04-01
Because hydrogeological observation data are limited and highly uncertain, parameter estimation for groundwater models has long been an important issue. Many estimation methods exist; for example, the Kalman filter provides real-time calibration of parameters from groundwater monitoring-well measurements, and related methods such as the Extended Kalman Filter and the Ensemble Kalman Filter are widely applied in groundwater research. However, the Kalman filter is limited to linear systems. This study proposes a novel method, Bayesian Maximum Entropy Filtering, which accounts for the uncertainty of the data during parameter estimation. With these two methods, parameters can be estimated from hard (certain) and soft (uncertain) data at the same time. In this study, Python and QGIS are used with the groundwater model (MODFLOW), and both the Extended Kalman Filter and Bayesian Maximum Entropy Filtering are implemented in Python for parameter estimation. The approach provides a conventional filtering method while also considering the uncertainty of the data. A numerical model experiment was conducted in which the Bayesian maximum entropy filter was combined with a hypothesized MODFLOW groundwater model architecture, and virtual observation wells were used to simulate and observe the groundwater model periodically. The results show that, by accounting for data uncertainty, the Bayesian maximum entropy filter provides good real-time parameter estimates.
Assimilation of thermospheric measurements for ionosphere-thermosphere state estimation
NASA Astrophysics Data System (ADS)
Miladinovich, Daniel S.; Datta-Barua, Seebany; Bust, Gary S.; Makela, Jonathan J.
2016-12-01
We develop a method that uses data assimilation to estimate ionospheric-thermospheric (IT) states during midlatitude nighttime storm conditions. The algorithm Estimating Model Parameters from Ionospheric Reverse Engineering (EMPIRE) uses time-varying electron densities in the F region, derived primarily from total electron content data, to estimate two drivers of the IT: neutral winds and electric potential. A Kalman filter is used to update background models based on ingested plasma densities and neutral wind measurements. This is the first time a Kalman filtering technique is used with the EMPIRE algorithm and the first time neutral wind measurements from 630.0 nm Fabry-Perot interferometers (FPIs) are ingested to improve estimates of storm time ion drifts and neutral winds. The effects of assimilating remotely sensed neutral winds from FPI observations are studied by comparing results of ingesting: electron densities (N) only, N plus half the measurements from a single FPI, and then N plus all of the FPI data. While estimates of ion drifts and neutral winds based on N give estimates similar to the background models, this study's results show that ingestion of the FPI data can significantly change neutral wind and ion drift estimation away from background models. In particular, once neutral winds are ingested, estimated neutral winds agree more with validation wind data, and estimated ion drifts in the magnetic field-parallel direction are more sensitive to ingestion than the field-perpendicular zonal and meridional directions. Also, data assimilation with FPI measurements helps provide insight into the effects of contamination on 630.0 nm emissions experienced during geomagnetic storms.
Sebastian, Tunny; Jeyaseelan, Visalakshi; Jeyaseelan, Lakshmanan; Anandan, Shalini; George, Sebastian; Bangdiwala, Shrikant I
2018-01-01
Hidden Markov models are stochastic models in which the observations are assumed to follow a mixture distribution, but the parameters of the components are governed by a Markov chain which is unobservable. The issues related to the estimation of Poisson hidden Markov models, in which the observations come from a mixture of Poisson distributions and the parameters of the component Poisson distributions are governed by an m-state Markov chain with an unknown transition probability matrix, are explained here. These methods were applied to data on Vibrio cholerae counts reported every month over an 11-year span at Christian Medical College, Vellore, India. Using the Viterbi algorithm, the best estimate of the state sequence was obtained, and hence the transition probability matrix. The mean passage times between the states were estimated, and their 95% confidence intervals were estimated via Monte Carlo simulation. The three hidden states of the estimated Markov chain are labelled as 'Low', 'Moderate' and 'High', with mean counts of 1.4, 6.6 and 20.2 and estimated average durations of stay of 3, 3 and 4 months, respectively. Environmental risk factors were studied using Markov ordinal logistic regression analysis. No significant association was found between disease severity levels and climate components.
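A minimal sketch of Viterbi decoding for a three-state Poisson hidden Markov model; the transition matrix, initial distribution, and monthly counts are hypothetical, with only the state means borrowed from the abstract for illustration.

```python
import numpy as np
from scipy.stats import poisson

# Viterbi decoding for a 3-state Poisson hidden Markov model.
means = np.array([1.4, 6.6, 20.2])              # 'Low', 'Moderate', 'High'
trans = np.array([[0.70, 0.25, 0.05],
                  [0.20, 0.65, 0.15],
                  [0.05, 0.20, 0.75]])           # hypothetical transition matrix
start = np.array([0.5, 0.3, 0.2])                # hypothetical initial distribution

counts = np.array([0, 2, 1, 5, 8, 7, 19, 25, 18, 6, 3, 1])   # illustrative monthly counts

n_states, T = len(means), len(counts)
log_b = poisson.logpmf(counts[None, :], means[:, None])       # emission log-probs
delta = np.zeros((T, n_states))
psi = np.zeros((T, n_states), dtype=int)

delta[0] = np.log(start) + log_b[:, 0]
for t in range(1, T):
    scores = delta[t - 1][:, None] + np.log(trans)            # (from-state, to-state)
    psi[t] = np.argmax(scores, axis=0)
    delta[t] = scores[psi[t], np.arange(n_states)] + log_b[:, t]

# Backtrack the most likely state sequence.
states = np.zeros(T, dtype=int)
states[-1] = int(np.argmax(delta[-1]))
for t in range(T - 2, -1, -1):
    states[t] = psi[t + 1, states[t + 1]]

print("decoded states:", states)        # 0 = Low, 1 = Moderate, 2 = High
```

In a full analysis the transition matrix and means would themselves be estimated (e.g. by EM) before decoding, and mean passage times would follow from the fitted transition probabilities.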
Models and analysis for multivariate failure time data
NASA Astrophysics Data System (ADS)
Shih, Joanna Huang
The goal of this research is to develop and investigate models and analytic methods for multivariate failure time data. We compare models in terms of direct modeling of the margins, flexibility of dependency structure, local vs. global measures of association, and ease of implementation. In particular, we study copula models, and models produced by right neutral cumulative hazard functions and right neutral hazard functions. We examine the changes of association over time for families of bivariate distributions induced from these models by displaying their density contour plots, conditional density plots, correlation curves of Doksum et al., and local cross ratios of Oakes. We know that bivariate distributions with the same margins might exhibit quite different dependency structures. In addition to modeling, we study estimation procedures. For copula models, we investigate three estimation procedures. The first procedure is full maximum likelihood. The second procedure is two-stage maximum likelihood. At stage 1, we estimate the parameters in the margins by maximizing the marginal likelihood. At stage 2, we estimate the dependency structure by fixing the margins at the estimated ones. The third procedure is two-stage partially parametric maximum likelihood. It is similar to the second procedure, but we estimate the margins by the Kaplan-Meier estimate. We derive asymptotic properties for these three estimation procedures and compare their efficiency by Monte-Carlo simulations and direct computations. For models produced by right neutral cumulative hazards and right neutral hazards, we derive the likelihood and investigate the properties of the maximum likelihood estimates. Finally, we develop goodness of fit tests for the dependency structure in the copula models. We derive a test statistic and its asymptotic properties based on the test of homogeneity of Zelterman and Chen (1988), and a graphical diagnostic procedure based on the empirical Bayes approach. We study the performance of these two methods using actual and computer generated data.
Real-Time Dynamic Modeling - Data Information Requirements and Flight Test Results
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.; Smith, Mark S.
2008-01-01
Practical aspects of identifying dynamic models for aircraft in real time were studied. Topics include formulation of an equation-error method in the frequency domain to estimate non-dimensional stability and control derivatives in real time, data information content for accurate modeling results, and data information management techniques such as data forgetting, incorporating prior information, and optimized excitation. Real-time dynamic modeling was applied to simulation data and flight test data from a modified F-15B fighter aircraft, and to operational flight data from a subscale jet transport aircraft. Estimated parameter standard errors and comparisons with results from a batch output-error method in the time domain were used to demonstrate the accuracy of the identified real-time models.
Real-Time Dynamic Modeling - Data Information Requirements and Flight Test Results
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.; Smith, Mark S.
2010-01-01
Practical aspects of identifying dynamic models for aircraft in real time were studied. Topics include formulation of an equation-error method in the frequency domain to estimate non-dimensional stability and control derivatives in real time, data information content for accurate modeling results, and data information management techniques such as data forgetting, incorporating prior information, and optimized excitation. Real-time dynamic modeling was applied to simulation data and flight test data from a modified F-15B fighter aircraft, and to operational flight data from a subscale jet transport aircraft. Estimated parameter standard errors, prediction cases, and comparisons with results from a batch output-error method in the time domain were used to demonstrate the accuracy of the identified real-time models.
Software for Estimating Costs of Testing Rocket Engines
NASA Technical Reports Server (NTRS)
Hines, Merlon M.
2004-01-01
A high-level parametric mathematical model for estimating the costs of testing rocket engines and components at Stennis Space Center has been implemented as a Microsoft Excel program that generates multiple spreadsheets. The model and the program are both denoted, simply, the Cost Estimating Model (CEM). The inputs to the CEM are the parameters that describe particular tests, including test types (component or engine test), numbers and duration of tests, thrust levels, and other parameters. The CEM estimates anticipated total project costs for a specific test. Estimates are broken down into testing categories based on a work-breakdown structure and a cost-element structure. A notable historical assumption incorporated into the CEM is that total labor times depend mainly on thrust levels. As a result of a recent modification of the CEM to increase the accuracy of predicted labor times, the dependence of labor time on thrust level is now embodied in third- and fourth-order polynomials.
MIXOR: a computer program for mixed-effects ordinal regression analysis.
Hedeker, D; Gibbons, R D
1996-03-01
MIXOR provides maximum marginal likelihood estimates for mixed-effects ordinal probit, logistic, and complementary log-log regression models. These models can be used for analysis of dichotomous and ordinal outcomes from either a clustered or longitudinal design. For clustered data, the mixed-effects model assumes that data within clusters are dependent. The degree of dependency is jointly estimated with the usual model parameters, thus adjusting for dependence resulting from clustering of the data. Similarly, for longitudinal data, the mixed-effects approach can allow for individual-varying intercepts and slopes across time, and can estimate the degree to which these time-related effects vary in the population of individuals. MIXOR uses marginal maximum likelihood estimation, utilizing a Fisher-scoring solution. For the scoring solution, the Cholesky factor of the random-effects variance-covariance matrix is estimated, along with the effects of model covariates. Examples illustrating usage and features of MIXOR are provided.
Estimating the effect of a rare time-dependent treatment on the recurrent event rate.
Smith, Abigail R; Zhu, Danting; Goodrich, Nathan P; Merion, Robert M; Schaubel, Douglas E
2018-05-30
In many observational studies, the objective is to estimate the effect of treatment or state-change on the recurrent event rate. If treatment is assigned after the start of follow-up, traditional methods (eg, adjustment for baseline-only covariates or fully conditional adjustment for time-dependent covariates) may give biased results. We propose a two-stage modeling approach using the method of sequential stratification to accurately estimate the effect of a time-dependent treatment on the recurrent event rate. At the first stage, we estimate the pretreatment recurrent event trajectory using a proportional rates model censored at the time of treatment. Prognostic scores are estimated from the linear predictor of this model and used to match treated patients to as yet untreated controls based on prognostic score at the time of treatment for the index patient. The final model is stratified on matched sets and compares the posttreatment recurrent event rate to the recurrent event rate of the matched controls. We demonstrate through simulation that bias due to dependent censoring is negligible, provided the treatment frequency is low, and we investigate a threshold at which correction for dependent censoring is needed. The method is applied to liver transplant (LT), where we estimate the effect of development of post-LT End Stage Renal Disease (ESRD) on rate of days hospitalized. Copyright © 2018 John Wiley & Sons, Ltd.
Time-varying SMART design and data analysis methods for evaluating adaptive intervention effects.
Dai, Tianjiao; Shete, Sanjay
2016-08-30
In a standard two-stage SMART design, the intermediate response to the first-stage intervention is measured at a fixed time point for all participants. Subsequently, responders and non-responders are re-randomized and the final outcome of interest is measured at the end of the study. To reduce the side effects and costs associated with first-stage interventions in a SMART design, we proposed a novel time-varying SMART design in which individuals are re-randomized to the second-stage interventions as soon as a pre-fixed intermediate response is observed. With this strategy, the duration of the first-stage intervention will vary. We developed a time-varying mixed effects model and a joint model that allows for modeling the outcomes of interest (intermediate and final) and the random durations of the first-stage interventions simultaneously. The joint model borrows strength from the survival sub-model in which the duration of the first-stage intervention (i.e., time to response to the first-stage intervention) is modeled. We performed a simulation study to evaluate the statistical properties of these models. Our simulation results showed that the two modeling approaches were both able to provide good estimations of the means of the final outcomes of all the embedded interventions in a SMART. However, the joint modeling approach was more accurate for estimating the coefficients of first-stage interventions and time of the intervention. We conclude that the joint modeling approach provides more accurate parameter estimates and a higher estimated coverage probability than the single time-varying mixed effects model, and we recommend the joint model for analyzing data generated from time-varying SMART designs. In addition, we showed that the proposed time-varying SMART design is cost-efficient and equally effective in selecting the optimal embedded adaptive intervention as the standard SMART design.
APPLICATION OF TRAVEL TIME RELIABILITY FOR PERFORMANCE ORIENTED OPERATIONAL PLANNING OF EXPRESSWAYS
NASA Astrophysics Data System (ADS)
Mehran, Babak; Nakamura, Hideki
Evaluation of the impacts of congestion improvement schemes on travel time reliability is very significant for road authorities, since travel time reliability represents the operational performance of expressway segments. In this paper, a methodology is presented to estimate travel time reliability prior to implementation of congestion relief schemes, based on modeling travel time variation as a function of demand, capacity, weather conditions, and road accidents. For the subject expressway segments, traffic conditions are modeled over a whole year considering demand and capacity as random variables. Patterns of demand and capacity are generated for each five-minute interval by applying a Monte-Carlo simulation technique, and accidents are randomly generated based on a model that links accident rate to traffic conditions. A whole-year analysis is performed by comparing demand and available capacity for each scenario, and queue length is estimated through shockwave analysis for each time interval. Travel times are estimated from refined speed-flow relationships developed for intercity expressways, and the buffer time index is then estimated as a measure of travel time reliability. For validation, estimated reliability indices are compared with measured values from empirical data, and it is shown that the proposed method is suitable for operational evaluation and planning purposes.
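A toy sketch of the Monte-Carlo reliability calculation described above: demand and capacity are drawn for each five-minute interval, a crude queueing delay is accumulated, and the buffer time index is computed from the simulated travel-time distribution. The segment properties, distributions, and delay formula are illustrative assumptions, not the paper's shockwave-based procedure.

```python
import numpy as np

rng = np.random.default_rng(8)

# Illustrative whole-year simulation for one expressway segment.
intervals_per_day = 288
days = 365
length_km, free_speed = 10.0, 100.0          # hypothetical segment properties
free_time = length_km / free_speed * 60.0    # free-flow travel time in minutes

travel_times = []
for _ in range(days):
    demand = rng.normal(1500, 300, intervals_per_day).clip(min=0)     # veh/h
    capacity = rng.normal(1800, 150, intervals_per_day).clip(min=600) # veh/h
    queue = 0.0
    for d, c in zip(demand, capacity):
        # Excess vehicles (hourly-rate equivalent) carried over from this 5-min interval.
        queue = max(0.0, queue + (d - c) / 12.0)
        # Crude delay proxy: time to discharge the standing queue at current capacity.
        delay = 60.0 * queue / c
        travel_times.append(free_time + delay)

tt = np.array(travel_times)
mean_tt, p95_tt = tt.mean(), np.percentile(tt, 95)
buffer_time_index = (p95_tt - mean_tt) / mean_tt
print(f"mean {mean_tt:.1f} min, 95th pct {p95_tt:.1f} min, BTI {buffer_time_index:.2f}")
```

The buffer time index computed this way summarizes how much extra time a traveler must budget to arrive on time on 95 percent of trips.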
An approach to checking case-crossover analyses based on equivalence with time-series methods.
Lu, Yun; Symons, James Morel; Geyh, Alison S; Zeger, Scott L
2008-03-01
The case-crossover design has been increasingly applied to epidemiologic investigations of acute adverse health effects associated with ambient air pollution. The correspondence of the design to that of matched case-control studies makes it inferentially appealing for epidemiologic studies. Case-crossover analyses generally use conditional logistic regression modeling. This technique is equivalent to time-series log-linear regression models when there is a common exposure across individuals, as in air pollution studies. Previous methods for obtaining unbiased estimates for case-crossover analyses have assumed that time-varying risk factors are constant within reference windows. In this paper, we rely on the connection between case-crossover and time-series methods to illustrate model-checking procedures from log-linear model diagnostics for time-stratified case-crossover analyses. Additionally, we compare the relative performance of the time-stratified case-crossover approach to time-series methods under 3 simulated scenarios representing different temporal patterns of daily mortality associated with air pollution in Chicago, Illinois, during 1995 and 1996. Whenever a model, be it time-series or case-crossover, fails to account appropriately for fluctuations in time that confound the exposure, the effect estimate will be biased. It is therefore important to perform model-checking in time-stratified case-crossover analyses rather than assume the estimator is unbiased.
Interval Timing Accuracy and Scalar Timing in C57BL/6 Mice
Buhusi, Catalin V.; Aziz, Dyana; Winslow, David; Carter, Rickey E.; Swearingen, Joshua E.; Buhusi, Mona C.
2010-01-01
In many species, interval timing behavior is accurate (estimated durations are appropriate) and scalar (errors vary linearly with estimated durations). While accuracy has been previously examined, scalar timing has not yet been clearly demonstrated in house mice (Mus musculus), raising concerns about mouse models of human disease. We estimated timing accuracy and precision in C57BL/6 mice, the most widely used background strain for genetic models of human disease, in a peak-interval procedure with multiple intervals. Both when timing two intervals (Experiment 1) and three intervals (Experiment 2), C57BL/6 mice demonstrated varying degrees of timing accuracy. Importantly, at both the individual and group level, their precision varied linearly with the subjective estimated duration. Further evidence for scalar timing was obtained using an intraclass correlation statistic. This is the first report of consistent, reliable scalar timing in a sizable sample of house mice, thus validating the PI procedure as a valuable technique, the intraclass correlation statistic as a powerful test of the scalar property, and the C57BL/6 strain as a suitable background for behavioral investigations of genetically engineered mice modeling disorders of interval timing. PMID:19824777
Analysis of longitudinal marginal structural models.
Bryan, Jenny; Yu, Zhuo; Van Der Laan, Mark J
2004-07-01
In this article we construct and study estimators of the causal effect of a time-dependent treatment on survival in longitudinal studies. We employ a particular marginal structural model (MSM), proposed by Robins (2000), and follow a general methodology for constructing estimating functions in censored data models. The inverse probability of treatment weighted (IPTW) estimator of Robins et al. (2000) is used as an initial estimator and forms the basis for an improved, one-step estimator that is consistent and asymptotically linear when the treatment mechanism is consistently estimated. We extend these methods to handle informative censoring. The proposed methodology is employed to estimate the causal effect of exercise on mortality in a longitudinal study of seniors in Sonoma County. A simulation study demonstrates the bias of naive estimators in the presence of time-dependent confounders and also shows the efficiency gain of the IPTW estimator, even in the absence of such confounding. The efficiency gain of the improved, one-step estimator is demonstrated through simulation.
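A minimal point-treatment sketch of the inverse-probability-of-treatment weighting idea that underlies the MSM estimators discussed above (the longitudinal, time-dependent version additionally multiplies weights over time and handles censoring); the data-generating model and effect sizes are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(9)

# Point-treatment illustration: a confounder L affects both treatment A and
# outcome Y, so the naive contrast is biased; weighting by 1 / P(A | L)
# recovers the marginal (causal) treatment effect.
n = 20000
L = rng.normal(size=n)
p_treat = 1.0 / (1.0 + np.exp(-(0.8 * L)))        # treatment depends on L
A = rng.binomial(1, p_treat)
Y = 1.0 * A + 2.0 * L + rng.normal(size=n)        # true treatment effect = 1

# Naive (confounded) estimate.
naive = Y[A == 1].mean() - Y[A == 0].mean()

# Estimate the treatment mechanism and form stabilized IPTW weights.
ps = LogisticRegression().fit(L.reshape(-1, 1), A).predict_proba(L.reshape(-1, 1))[:, 1]
pA = A.mean()
w = np.where(A == 1, pA / ps, (1 - pA) / (1 - ps))

iptw = (np.average(Y[A == 1], weights=w[A == 1])
        - np.average(Y[A == 0], weights=w[A == 0]))
print(f"naive: {naive:.2f}   IPTW: {iptw:.2f}   (true effect 1.0)")
```

In the longitudinal setting, the weight for a subject at time t is the product of such inverse probabilities over all treatment decisions up to t, which is what allows the MSM to handle time-dependent confounders.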
Regression analysis of sparse asynchronous longitudinal data.
Cao, Hongyuan; Zeng, Donglin; Fine, Jason P
2015-09-01
We consider estimation of regression models for sparse asynchronous longitudinal observations, where time-dependent responses and covariates are observed intermittently within subjects. Unlike with synchronous data, where the response and covariates are observed at the same time point, with asynchronous data, the observation times are mismatched. Simple kernel-weighted estimating equations are proposed for generalized linear models with either time invariant or time-dependent coefficients under smoothness assumptions for the covariate processes which are similar to those for synchronous data. For models with either time invariant or time-dependent coefficients, the estimators are consistent and asymptotically normal but converge at slower rates than those achieved with synchronous data. Simulation studies evidence that the methods perform well with realistic sample sizes and may be superior to a naive application of methods for synchronous data based on an ad hoc last value carried forward approach. The practical utility of the methods is illustrated on data from a study on human immunodeficiency virus.
Estimating the time for dissolution of spent fuel exposed to unlimited water
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leider, H.R.; Nguyen, S.N.; Stout, R.B.
1991-12-01
The release of radionuclides from spent fuel cannot be precisely predicted at this point because a satisfactory dissolution model based on specific chemical processes is not yet available. However, preliminary results on the dissolution rate of UO2 and spent fuel as a function of temperature and water composition have recently been reported. This information, together with data on the fragment size distribution of spent fuel, is used to estimate the dissolution response of spent fuel in excess flowing water within the framework of a simple model. In this model, the reaction/dissolution front advances linearly with time and geometry is preserved. This also estimates the dissolution rate of the bulk of the fission products and higher actinides, which are uniformly distributed in the UO2 matrix and are presumed to dissolve congruently. We have used a fuel fragment distribution actually observed to calculate the time for total dissolution of spent fuel. A worst-case estimate was also made using the initial (maximum) rate of dissolution to predict the total dissolution time. The time for total dissolution of centimeter-size particles is estimated to be 5.5 × 10^4 years at 25°C.
Comparisons of Crosswind Velocity Profile Estimates Used in Fast-Time Wake Vortex Prediction Models
NASA Technical Reports Server (NTRS)
Pruis, Mathew J.; Delisi, Donald P.; Ahmad, Nashat N.
2011-01-01
Five methods for estimating crosswind profiles used in fast-time wake vortex prediction models are compared in this study. Previous investigations have shown that temporal and spatial variations in the crosswind vertical profile have a large impact on the transport and time evolution of the trailing vortex pair. The most important crosswind parameters are the magnitude of the crosswind and the gradient in the crosswind shear. It is known that pulsed and continuous wave lidar measurements can provide good estimates of the wind profile in the vicinity of airports. In this study comparisons are made between estimates of the crosswind profiles from a priori information on the trajectory of the vortex pair as well as crosswind profiles derived from different sensors and a regional numerical weather prediction model.
iGLASS: An Improvement to the GLASS Method for Estimating Species Trees from Gene Trees
Rosenberg, Noah A.
2012-01-01
Several methods have been designed to infer species trees from gene trees while taking into account gene tree/species tree discordance. Although some of these methods provide consistent species tree topology estimates under a standard model, most either do not estimate branch lengths or are computationally slow. An exception, the GLASS method of Mossel and Roch, is consistent for the species tree topology, estimates branch lengths, and is computationally fast. However, GLASS systematically overestimates divergence times, leading to biased estimates of species tree branch lengths. By assuming a multispecies coalescent model in which multiple lineages are sampled from each of two taxa at L independent loci, we derive the distribution of the waiting time until the first interspecific coalescence occurs between the two taxa, considering all loci and measuring from the divergence time. We then use the mean of this distribution to derive a correction to the GLASS estimator of pairwise divergence times. We show that our improved estimator, which we call iGLASS, consistently estimates the divergence time between a pair of taxa as the number of loci approaches infinity, and that it is an unbiased estimator of divergence times when one lineage is sampled per taxon. We also show that many commonly used clustering methods can be combined with the iGLASS estimator of pairwise divergence times to produce a consistent estimator of the species tree topology. Through simulations, we show that iGLASS can greatly reduce the bias and mean squared error in obtaining estimates of divergence times in a species tree. PMID:22216756
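A toy sketch of the bias that iGLASS corrects, for the simplest case of one lineage sampled per taxon: the GLASS estimate (the minimum gene divergence across loci) overshoots the species divergence by roughly the mean of the minimum of L exponential coalescence times, and subtracting that mean removes the bias. The times, number of loci, and correction shown are illustrative; the paper derives the general correction for arbitrary sample sizes.

```python
import numpy as np

rng = np.random.default_rng(10)

# Times are in coalescent units of the ancestral population; values illustrative.
tau_true = 1.0            # true divergence time between the two taxa
L = 20                    # number of independent loci
reps = 5000

glass, corrected = [], []
for _ in range(reps):
    # At each locus, the interspecific coalescence happens tau + Exp(1) ago.
    gene_divergences = tau_true + rng.exponential(1.0, size=L)
    est = gene_divergences.min()          # GLASS: minimum across loci
    glass.append(est)
    # With one lineage per taxon, the minimum overshoot is Exp(L) with
    # mean 1/L, so subtracting 1/L gives an iGLASS-style corrected estimate.
    corrected.append(est - 1.0 / L)

print("GLASS mean:    ", np.mean(glass))       # ~ tau_true + 1/L
print("corrected mean:", np.mean(corrected))   # ~ tau_true
```

With multiple lineages per taxon the expected overshoot shrinks faster than 1/L, which is why the full derivation in the paper conditions on the sample sizes at each locus.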
Estimating the biophysical properties of neurons with intracellular calcium dynamics.
Ye, Jingxin; Rozdeba, Paul J; Morone, Uriel I; Daou, Arij; Abarbanel, Henry D I
2014-06-01
We investigate the dynamics of a conductance-based neuron model coupled to a model of intracellular calcium uptake and release by the endoplasmic reticulum. The intracellular calcium dynamics occur on a time scale that is orders of magnitude slower than voltage spiking behavior. Coupling these mechanisms sets the stage for the appearance of chaotic dynamics, which we observe within certain ranges of model parameter values. We then explore the question of whether one can, using observed voltage data alone, estimate the states and parameters of the voltage plus calcium (V+Ca) dynamics model. We find the answer is negative. Indeed, we show that voltage plus another observed quantity must be known to allow the estimation to be accurate. We show that observing both the voltage time course V(t) and the intracellular Ca time course will permit accurate estimation, and from the estimated model state, accurate prediction after observations are completed. This sets the stage for how one will be able to use a more detailed model of V+Ca dynamics in neuron activity in the analysis of experimental data on individual neurons as well as functional networks in which the nodes (neurons) have these biophysical properties.
Dorazio, Robert; Karanth, K. Ullas
2017-01-01
Motivation: Several spatial capture-recapture (SCR) models have been developed to estimate animal abundance by analyzing the detections of individuals in a spatial array of traps. Most of these models do not use the actual dates and times of detection, even though this information is readily available when using continuous-time recorders, such as microphones or motion-activated cameras. Instead most SCR models either partition the period of trap operation into a set of subjectively chosen discrete intervals and ignore multiple detections of the same individual within each interval, or they simply use the frequency of detections during the period of trap operation and ignore the observed times of detection. Both practices make inefficient use of potentially important information in the data. Model and data analysis: We developed a hierarchical SCR model to estimate the spatial distribution and abundance of animals detected with continuous-time recorders. Our model includes two kinds of point processes: a spatial process to specify the distribution of latent activity centers of individuals within the region of sampling and a temporal process to specify temporal patterns in the detections of individuals. We illustrated this SCR model by analyzing spatial and temporal patterns evident in the camera-trap detections of tigers living in and around the Nagarahole Tiger Reserve in India. We also conducted a simulation study to examine the performance of our model when analyzing data sets of greater complexity than the tiger data. Benefits: Our approach provides three important benefits: First, it exploits all of the information in SCR data obtained using continuous-time recorders. Second, it is sufficiently versatile to allow the effects of both space use and behavior of animals to be specified as functions of covariates that vary over space and time. Third, it allows both the spatial distribution and abundance of individuals to be estimated, effectively providing a species distribution model, even in cases where spatial covariates of abundance are unknown or unavailable. We illustrated these benefits in the analysis of our data, which allowed us to quantify differences between nocturnal and diurnal activities of tigers and to estimate their spatial distribution and abundance across the study area. Our continuous-time SCR model allows an analyst to specify many of the ecological processes thought to be involved in the distribution, movement, and behavior of animals detected in a spatial trapping array of continuous-time recorders. We plan to extend this model to estimate the population dynamics of animals detected during multiple years of SCR surveys.
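As a rough illustration of the kind of continuous-time detection process such a model builds on (not the authors' exact specification, which also models a spatial point process for activity centers and temporal patterns in detection), the sketch below simulates detection times for one individual at one camera under a half-normal encounter-rate model; all parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def detection_rate(center, trap, lam0, sigma):
    """Half-normal encounter rate: expected detections per unit time at a trap."""
    d2 = np.sum((center - trap) ** 2)
    return lam0 * np.exp(-d2 / (2.0 * sigma ** 2))

def simulate_detection_times(center, trap, lam0, sigma, T, rng):
    """Detections of one individual at one camera as a homogeneous Poisson process on [0, T]."""
    lam = detection_rate(center, trap, lam0, sigma)
    n = rng.poisson(lam * T)
    return np.sort(rng.uniform(0.0, T, size=n))

center = np.array([2.0, 3.0])     # latent activity center (km), hypothetical
trap = np.array([2.5, 2.5])       # camera-trap location (km), hypothetical
times = simulate_detection_times(center, trap, lam0=0.3, sigma=0.8, T=90.0, rng=rng)
print("detection times (days):", times)

# Log-likelihood contribution of this trap/individual pair under the same Poisson model:
lam = detection_rate(center, trap, 0.3, 0.8)
loglik = len(times) * np.log(lam) - lam * 90.0
print("log-likelihood:", loglik)
```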
NASA Astrophysics Data System (ADS)
Suparman, Yusep; Folmer, Henk; Oud, Johan H. L.
2014-01-01
Omitted variables and measurement errors in explanatory variables frequently occur in hedonic price models. Ignoring these problems leads to biased estimators. In this paper, we develop a constrained autoregression-structural equation model (ASEM) to handle both types of problems. Standard panel data models to handle omitted variables bias are based on the assumption that the omitted variables are time-invariant. ASEM allows handling of both time-varying and time-invariant omitted variables by constrained autoregression. In the case of measurement error, standard approaches require additional external information which is usually difficult to obtain. ASEM exploits the fact that panel data are repeatedly measured which allows decomposing the variance of a variable into the true variance and the variance due to measurement error. We apply ASEM to estimate a hedonic housing model for urban Indonesia. To get insight into the consequences of measurement error and omitted variables, we compare the ASEM estimates with the outcomes of (1) a standard SEM, which does not account for omitted variables, (2) a constrained autoregression model, which does not account for measurement error, and (3) a fixed effects hedonic model, which ignores measurement error and time-varying omitted variables. The differences between the ASEM estimates and the outcomes of the three alternative approaches are substantial.
Clark, D Angus; Nuttall, Amy K; Bowles, Ryan P
2018-01-01
Latent change score models (LCS) are conceptually powerful tools for analyzing longitudinal data (McArdle & Hamagami, 2001). However, applications of these models typically include constraints on key parameters over time. Although practically useful, strict invariance over time in these parameters is unlikely in real data. This study investigates the robustness of LCS when invariance over time is incorrectly imposed on key change-related parameters. Monte Carlo simulation methods were used to explore the impact of misspecification on parameter estimation, predicted trajectories of change, and model fit in the dual change score model, the foundational LCS. When constraints were incorrectly applied, several parameters, most notably the slope (i.e., constant change) factor mean and autoproportion coefficient, were severely and consistently biased, as were regression paths to the slope factor when external predictors of change were included. Standard fit indices indicated that the misspecified models fit well, partly because mean level trajectories over time were accurately captured. Loosening constraint improved the accuracy of parameter estimates, but estimates were more unstable, and models frequently failed to converge. Results suggest that potentially common sources of misspecification in LCS can produce distorted impressions of developmental processes, and that identifying and rectifying the situation is a challenge.
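To make the recursion at the heart of the dual change score model concrete, the sketch below simulates the deterministic mean trajectory it implies; the parameter values are illustrative only, and the full latent change score model also involves latent intercept and slope factors, disturbances, and SEM estimation, which are not reproduced here.

```python
import numpy as np

def dual_change_trajectory(y0, slope, beta, n_time):
    """Mean trajectory implied by the dual change score recursion:
    delta_t = slope + beta * y_{t-1};  y_t = y_{t-1} + delta_t."""
    y = np.empty(n_time)
    y[0] = y0
    for t in range(1, n_time):
        delta = slope + beta * y[t - 1]
        y[t] = y[t - 1] + delta
    return y

# Two illustrative parameter sets with time-invariant slope and autoproportion
# coefficients; incorrectly constraining these over time is the misspecification
# studied above.
print(dual_change_trajectory(y0=10.0, slope=2.0, beta=-0.10, n_time=8))
print(dual_change_trajectory(y0=10.0, slope=2.0, beta=0.05, n_time=8))
```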
Oh, Eric J; Shepherd, Bryan E; Lumley, Thomas; Shaw, Pamela A
2018-04-15
For time-to-event outcomes, a rich literature exists on the bias introduced by covariate measurement error in regression models, such as the Cox model, and methods of analysis to address this bias. By comparison, less attention has been given to understanding the impact or addressing errors in the failure time outcome. For many diseases, the timing of an event of interest (such as progression-free survival or time to AIDS progression) can be difficult to assess or reliant on self-report and therefore prone to measurement error. For linear models, it is well known that random errors in the outcome variable do not bias regression estimates. With nonlinear models, however, even random error or misclassification can introduce bias into estimated parameters. We compare the performance of 2 common regression models, the Cox and Weibull models, in the setting of measurement error in the failure time outcome. We introduce an extension of the SIMEX method to correct for bias in hazard ratio estimates from the Cox model and discuss other analysis options to address measurement error in the response. A formula to estimate the bias induced into the hazard ratio by classical measurement error in the event time for a log-linear survival model is presented. Detailed numerical studies are presented to examine the performance of the proposed SIMEX method under varying levels and parametric forms of the error in the outcome. We further illustrate the method with observational data on HIV outcomes from the Vanderbilt Comprehensive Care Clinic. Copyright © 2017 John Wiley & Sons, Ltd.
Kalman filter estimation of human pilot-model parameters
NASA Technical Reports Server (NTRS)
Schiess, J. R.; Roland, V. R.
1975-01-01
The parameters of a human pilot-model transfer function are estimated by applying the extended Kalman filter to the corresponding retarded differential-difference equations in the time domain. Use of computer-generated data indicates that most of the parameters, including the implicit time delay, may be reasonably estimated in this way. When applied to two sets of experimental data obtained from a closed-loop tracking task performed by a human, the Kalman filter generated diverging residuals for one of the measurement types, apparently because of model assumption errors. Application of a modified adaptive technique was found to overcome the divergence and to produce reasonable estimates of most of the parameters.
Estimating procedure times for surgeries by determining location parameters for the lognormal model.
Spangler, William E; Strum, David P; Vargas, Luis G; May, Jerrold H
2004-05-01
We present an empirical study of methods for estimating the location parameter of the lognormal distribution. Our results identify the best order statistic to use, and indicate that using the best order statistic instead of the median may lead to less frequent incorrect rejection of the lognormal model, more accurate critical value estimates, and higher goodness-of-fit. Using simulation data, we constructed and compared two models for identifying the best order statistic, one based on conventional nonlinear regression and the other using a data mining/machine learning technique. Better surgical procedure time estimates may lead to improved surgical operations.
Time series modeling by a regression approach based on a latent process.
Chamroukhi, Faicel; Samé, Allou; Govaert, Gérard; Aknin, Patrice
2009-01-01
Time series are used in many domains including finance, engineering, economics and bioinformatics generally to represent the change of a measurement over time. Modeling techniques may then be used to give a synthetic representation of such data. A new approach for time series modeling is proposed in this paper. It consists of a regression model incorporating a discrete hidden logistic process allowing for activating smoothly or abruptly different polynomial regression models. The model parameters are estimated by the maximum likelihood method performed by a dedicated Expectation Maximization (EM) algorithm. The M step of the EM algorithm uses a multi-class Iterative Reweighted Least-Squares (IRLS) algorithm to estimate the hidden process parameters. To evaluate the proposed approach, an experimental study on simulated data and real world data was performed using two alternative approaches: a heteroskedastic piecewise regression model using a global optimization algorithm based on dynamic programming, and a Hidden Markov Regression Model whose parameters are estimated by the Baum-Welch algorithm. Finally, in the context of the remote monitoring of components of the French railway infrastructure, and more particularly the switch mechanism, the proposed approach has been applied to modeling and classifying time series representing the condition measurements acquired during switch operations.
Seasonal Variability in Global Eddy Diffusion and the Effect on Thermospheric Neutral Density
NASA Astrophysics Data System (ADS)
Pilinski, M.; Crowley, G.
2014-12-01
We describe a method for making single-satellite estimates of the seasonal variability in global-average eddy diffusion coefficients. Eddy diffusion values as a function of time between January 2004 and January 2008 were estimated from residuals of neutral density measurements made by the CHallenging Minisatellite Payload (CHAMP) and simulations made using the Thermosphere Ionosphere Mesosphere Electrodynamics - Global Circulation Model (TIME-GCM). The eddy diffusion coefficient results are quantitatively consistent with previous estimates based on satellite drag observations and are qualitatively consistent with other measurement methods such as sodium lidar observations and eddy-diffusivity models. The eddy diffusion coefficient values estimated between January 2004 and January 2008 were then used to generate new TIME-GCM results. Based on these results, the RMS difference between the TIME-GCM model and density data from a variety of satellites is reduced by an average of 5%. This result indicates that global thermospheric density modeling can be improved by using data from a single satellite like CHAMP. This approach also demonstrates how eddy diffusion could be estimated in near real-time from satellite observations and used to drive a global circulation model like TIME-GCM. Although the use of global values improves modeled neutral densities, there are some limitations of this method, which are discussed, including that the latitude-dependence of the seasonal neutral-density signal is not completely captured by a global variation of eddy diffusion coefficients. This demonstrates the need for a latitude-dependent specification of eddy diffusion consistent with diffusion observations made by other techniques.
Seasonal variability in global eddy diffusion and the effect on neutral density
NASA Astrophysics Data System (ADS)
Pilinski, M. D.; Crowley, G.
2015-04-01
We describe a method for making single-satellite estimates of the seasonal variability in global-average eddy diffusion coefficients. Eddy diffusion values as a function of time were estimated from residuals of neutral density measurements made by the Challenging Minisatellite Payload (CHAMP) and simulations made using the thermosphere-ionosphere-mesosphere electrodynamics global circulation model (TIME-GCM). The eddy diffusion coefficient results are quantitatively consistent with previous estimates based on satellite drag observations and are qualitatively consistent with other measurement methods such as sodium lidar observations and eddy diffusivity models. Eddy diffusion coefficient values estimated between January 2004 and January 2008 were then used to generate new TIME-GCM results. Based on these results, the root-mean-square sum for the TIME-GCM model is reduced by an average of 5% when compared to density data from a variety of satellites, indicating that the fidelity of global density modeling can be improved by using data from a single satellite like CHAMP. This approach also demonstrates that eddy diffusion could be estimated in near real-time from satellite observations and used to drive a global circulation model like TIME-GCM. Although the use of global values improves modeled neutral densities, there are limitations to this method, which are discussed, including that the latitude dependence of the seasonal neutral-density signal is not completely captured by a global variation of eddy diffusion coefficients. This demonstrates the need for a latitude-dependent specification of eddy diffusion which is also consistent with diffusion observations made by other techniques.
Langbein, John O.
2012-01-01
Recent studies have documented that global positioning system (GPS) time series of position estimates have temporal correlations which have been modeled as a combination of power-law and white noise processes. When estimating quantities such as a constant rate from GPS time series data, the estimated uncertainties on these quantities are more realistic when using a noise model that includes temporal correlations than simply assuming temporally uncorrelated noise. However, the choice of the specific representation of correlated noise can affect the estimate of uncertainty. For many GPS time series, the background noise can be represented by either: (1) a sum of flicker and random-walk noise or, (2) as a power-law noise model that represents an average of the flicker and random-walk noise. For instance, if the underlying noise model is a combination of flicker and random-walk noise, then incorrectly choosing the power-law model could underestimate the rate uncertainty by a factor of two. Distinguishing between the two alternate noise models is difficult since the flicker component can dominate the assessment of the noise properties because it is spread over a significant portion of the measurable frequency band. But, although not necessarily detectable, the random-walk component can be a major constituent of the estimated rate uncertainty. None the less, it is possible to determine the upper bound on the random-walk noise.
Spatio-temporal population estimates for risk management
NASA Astrophysics Data System (ADS)
Cockings, Samantha; Martin, David; Smith, Alan; Martin, Rebecca
2013-04-01
Accurate estimation of population at risk from hazards and effective emergency management of events require not just appropriate spatio-temporal modelling of hazards but also of population. While much recent effort has been focused on improving the modelling and predictions of hazards (both natural and anthropogenic), there has been little parallel advance in the measurement or modelling of population statistics. Different hazard types occur over diverse temporal cycles, are of varying duration and differ significantly in their spatial extent. Even events of the same hazard type, such as flood events, vary markedly in their spatial and temporal characteristics. Conceptually and pragmatically then, population estimates should also be available for similarly varying spatio-temporal scales. Routine population statistics derived from traditional censuses or surveys are usually static representations in both space and time, recording people at their place of usual residence on census/survey night and presenting data for administratively defined areas. Such representations effectively fix the scale of population estimates in both space and time, which is unhelpful for meaningful risk management. Over recent years, the Pop24/7 programme of research, based at the University of Southampton (UK), has developed a framework for spatio-temporal modelling of population, based on gridded population surfaces. Based on a data model which is fully flexible in terms of space and time, the framework allows population estimates to be produced for any time slice relevant to the data contained in the model. It is based around a set of origin and destination centroids, which have capacities, spatial extents and catchment areas, all of which can vary temporally, such as by time of day, day of week, season. A background layer, containing information on features such as transport networks and landuse, provides information on the likelihood of people being in certain places at specific times. Unusual patterns associated with special events can also be modelled and the framework is fully volume preserving. Outputs from the model are gridded population surfaces for the specified time slice, either for total population or by sub-groups (e.g. age). Software to implement the models (SurfaceBuilder247) has been developed and pre-processed layers for typical time slices for England and Wales in 2001 and 2006 are available for UK academic purposes. The outputs and modelling framework from the Pop24/7 programme provide significant opportunities for risk management applications. For estimates of mid- to long-term cumulative population exposure to hazards, such as in flood risk mapping, populations can be produced for numerous time slices and integrated with flood models. For applications in emergency response/ management, time-specific population models can be used as seeds for agent-based models or other response/behaviour models. Estimates for sub-groups of the population also permit exploration of vulnerability through space and time. This paper outlines the requirements for effective spatio-temporal population models for risk management. It then describes the Pop24/7 framework and illustrates its potential for risk management through presentation of examples from natural and anthropogenic hazard applications. The paper concludes by highlighting key challenges for future research in this area.
ERIC Educational Resources Information Center
Reardon, Sean F.; Brennan, Robert T.; Buka, Stephen L.
2002-01-01
Developed procedures for constructing a retrospective person-period data set from cross-sectional data and discusses modeling strategies for estimating multilevel discrete-time event history models. Applied the methods to the analysis of cigarette use by 1,979 urban adolescents. Results show the influence of the racial composition of the…
NASA Astrophysics Data System (ADS)
Umehara, Hiroaki; Okada, Masato; Naruse, Yasushi
2018-03-01
The estimation of angular time series data is a widespread issue relating to various situations involving rotational motion and moving objects. There are two kinds of problem settings: the estimation of wrapped angles, which are principal values in a circular coordinate system (e.g., the direction of an object), and the estimation of unwrapped angles in an unbounded coordinate system, such as for the positioning and tracking of moving objects measured by the signal-wave phase. Wrapped angles have been estimated in previous studies by sequential Bayesian filtering; however, the hyperparameters that control the properties of the estimation model, and that must themselves be determined, were given a priori. The present study establishes a procedure for estimating the hyperparameters from the observed angle data alone, working entirely within the framework of Bayesian inference as a maximum likelihood estimation. Moreover, the filter model is modified to estimate the unwrapped angles. It is proved that, without noise, our model reduces to the existing algorithm of Itoh's unwrapping transform. It is numerically confirmed that our model is an extension of unwrapping estimation from Itoh's unwrapping transform to the case with noise.
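For reference, the noise-free case mentioned above reduces to Itoh's unwrapping transform, which simply accumulates wrapped first differences; a minimal sketch (equivalent to numpy.unwrap) is shown below. The Bayesian filtering and hyperparameter estimation for the noisy case are not reproduced here.

```python
import numpy as np

def wrap(phi):
    """Map angles to the principal interval [-pi, pi)."""
    return (phi + np.pi) % (2.0 * np.pi) - np.pi

def itoh_unwrap(wrapped):
    """Itoh's unwrapping transform: accumulate wrapped first differences.
    Exact when successive true angles differ by less than pi and there is no noise."""
    d = wrap(np.diff(wrapped))
    return np.concatenate(([wrapped[0]], wrapped[0] + np.cumsum(d)))

t = np.linspace(0.0, 10.0, 200)
true_angle = 1.3 * t                  # unbounded (unwrapped) angle
observed = wrap(true_angle)           # wrapped, noise-free observations
recovered = itoh_unwrap(observed)
print(np.allclose(recovered, true_angle))           # True
print(np.allclose(recovered, np.unwrap(observed)))  # matches numpy's implementation
```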
NASA Astrophysics Data System (ADS)
Briseño, Jessica; Herrera, Graciela S.
2010-05-01
Herrera (1998) proposed a method for the optimal design of groundwater quality monitoring networks that involves space and time in a combined form. The method was applied later by Herrera et al. (2001) and by Herrera and Pinder (2005). To get the estimates of the contaminant concentration being analyzed, this method uses a space-time ensemble Kalman filter based on a stochastic flow and transport model. When the method is applied, it is important that the characteristics of the stochastic model be congruent with field data, but, in general, it is laborious to manually achieve a good match between them. For this reason, the main objective of this work is to extend the space-time ensemble Kalman filter proposed by Herrera to estimate the hydraulic conductivity together with the hydraulic head and contaminant concentration, and to apply it in a synthetic example. The method has three steps: 1) Given the mean and the semivariogram of the natural logarithm of hydraulic conductivity (ln K), random realizations of this parameter are obtained through two alternatives: Gaussian simulation (SGSim) and the Latin Hypercube Sampling method (LHC). 2) The stochastic model is used to produce hydraulic head (h) and contaminant (C) realizations for each one of the conductivity realizations. With these realizations, the means of ln K, h and C are obtained; for h and C, the mean is calculated in space and time, as is the cross-covariance matrix of h-ln K-C in space and time. The covariance matrix is obtained by averaging products of the ln K, h and C realizations at the estimation points and times, and at the positions and times with data on the analyzed variables. The estimation points are the positions at which estimates of ln K, h or C are gathered. In an analogous way, the estimation times are those at which estimates of any of the three variables are gathered. 3) Finally, the ln K, h and C estimates are obtained using the space-time ensemble Kalman filter. The realization mean for each one of the variables is used as the prior space-time estimate for the Kalman filter, and the space-time cross-covariance matrix of h-ln K-C as the prior estimate-error covariance matrix. The synthetic example has a modeling area of 700 x 700 square meters; a triangular mesh model with 702 nodes and 1306 elements is used. A pumping well located in the central part of the study area is considered. For the contaminant transport model, a contaminant source area is present in the western part of the study area. The estimation points for hydraulic conductivity, hydraulic head and contaminant concentration are located on a submesh of the model mesh (the same locations for h, ln K and C), composed of 48 nodes spread throughout the study area, with a separation of approximately 90 meters between nodes. The results were analyzed through the mean error, the root mean square error, the initial and final estimation maps of h, ln K and C at each time, and the initial and final variance maps of h, ln K and C. To obtain model convergence, 3000 realizations of ln K were required using SGSim, and only 1000 with LHC. The results show that, for both alternatives, the Kalman filter estimates of h, ln K and C obtained using h and C data have errors whose magnitudes decrease as data are added.
Herrera, G. S. (1998), Cost Effective Groundwater Quality Sampling Network Design, Ph.D. thesis, University of Vermont, Burlington, Vermont, 172 pp. Herrera, G., Guarnaccia, J., Pinder, G. and Simuta, R. (2001), "Diseño de redes de monitoreo de la calidad del agua subterránea eficientes", Proceedings of the 2001 International Symposium on Environmental Hydraulics, Arizona, U.S.A. Herrera, G. S. and Pinder, G. F. (2005), Space-time optimization of groundwater quality sampling networks, Water Resour. Res., Vol. 41, No. 12, W12407, doi:10.1029/2004WR003626.
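The sketch below shows a generic stochastic ensemble Kalman filter analysis step for a joint state vector stacking ln K, h and C, with hypothetical dimensions and observation operator; it omits the space-time structure and the flow-and-transport forecast model used in the study.

```python
import numpy as np

rng = np.random.default_rng(2)

def enkf_analysis(ensemble, H, y_obs, obs_var, rng):
    """Stochastic EnKF update. ensemble: (n_state, n_ens) joint vector of
    [ln K, h, C]; H: (n_obs, n_state) picks the observed h and C entries."""
    n_state, n_ens = ensemble.shape
    x_mean = ensemble.mean(axis=1, keepdims=True)
    A = ensemble - x_mean                        # ensemble anomalies
    HA = H @ A
    P_hh = HA @ HA.T / (n_ens - 1) + obs_var * np.eye(len(y_obs))
    P_xh = A @ HA.T / (n_ens - 1)                # cross-covariance (state, observation)
    K = P_xh @ np.linalg.inv(P_hh)               # Kalman gain
    # perturbed observations, one set per ensemble member
    Y = y_obs[:, None] + rng.normal(0.0, np.sqrt(obs_var), size=(len(y_obs), n_ens))
    return ensemble + K @ (Y - H @ ensemble)

# toy joint state: 3 ln K values, 3 heads, 3 concentrations; observe h[0] and C[2]
ens = rng.normal(size=(9, 200))
H = np.zeros((2, 9)); H[0, 3] = 1.0; H[1, 8] = 1.0
updated = enkf_analysis(ens, H, y_obs=np.array([0.5, -0.2]), obs_var=0.01, rng=rng)
print(updated.shape)
```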
An Integrated Model of Patient and Staff Satisfaction Using Queuing Theory.
Komashie, Alexander; Mousavi, Ali; Clarkson, P John; Young, Terry
2015-01-01
This paper investigates the connection between patient satisfaction, waiting time, staff satisfaction, and service time. It uses a variety of models to enable improvement against experiential and operational health service goals. Patient satisfaction levels are estimated using a model based on waiting (waiting times). Staff satisfaction levels are estimated using a model based on the time spent with patients (service time). An integrated model of patient and staff satisfaction, the effective satisfaction level model, is then proposed (using queuing theory). This links patient satisfaction, waiting time, staff satisfaction, and service time, connecting two important concepts, namely, experience and efficiency in care delivery and leading to a more holistic approach in designing and managing health services. The proposed model will enable healthcare systems analysts to objectively and directly relate elements of service quality to capacity planning. Moreover, as an instrument used jointly by healthcare commissioners and providers, it affords the prospect of better resource allocation.
Building a kinetic Monte Carlo model with a chosen accuracy.
Bhute, Vijesh J; Chatterjee, Abhijit
2013-06-28
The kinetic Monte Carlo (KMC) method is a popular modeling approach for reaching large materials length and time scales. The KMC dynamics is erroneous when atomic processes that are relevant to the dynamics are missing from the KMC model. Recently, we had developed for the first time an error measure for KMC in Bhute and Chatterjee [J. Chem. Phys. 138, 084103 (2013)]. The error measure, which is given in terms of the probability that a missing process will be selected in the correct dynamics, requires estimation of the missing rate. In this work, we present an improved procedure for estimating the missing rate. The estimate found using the new procedure is within an order of magnitude of the correct missing rate, unlike our previous approach where the estimate was larger by orders of magnitude. This enables one to find the error in the KMC model more accurately. In addition, we find the time for which the KMC model can be used before a maximum error in the dynamics has been reached.
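For context, the sketch below shows a standard rejection-free KMC step and the quantity the error measure is built around: the probability that a hypothetical missing process would have been selected had it been in the rate catalog. The improved estimation of the missing rate itself is described in the cited papers and is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)

def kmc_step(rates, rng):
    """One rejection-free KMC step: pick process i with probability r_i / R
    and advance time by an exponential increment with mean 1/R, R = sum of rates."""
    R = np.sum(rates)
    i = rng.choice(len(rates), p=rates / R)
    dt = rng.exponential(1.0 / R)
    return i, dt

rates = np.array([1.0e3, 5.0e2, 2.0e1])   # rates of the processes in the catalog (1/s), illustrative
missing_rate = 1.0                         # hypothetical rate of a process absent from the model
# probability that the missing process would be selected in the "correct" dynamics:
p_missing = missing_rate / (np.sum(rates) + missing_rate)
print("selection probability of the missing process:", p_missing)

i, dt = kmc_step(rates, rng)
print("selected process", i, "time increment", dt, "s")
```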
NASA Astrophysics Data System (ADS)
Brown, Robert Douglas
Several components of a system for quantitative application of climatic statistics to landscape planning and design (CLIMACS) have been developed. One component model (MICROSIM) estimated the microclimate at the top of a remote crop using physically-based models and inputs of weather station data. Temperatures at the top of unstressed, uniform crops on flat terrain within 1600 m of a recording weather station were estimated within 1.0 C 96% of the time for a corn crop and 92% of the time for a soybean crop. Crop top winds were estimated within 0.4 m/s 92% of the time for corn and 100% of the time for soybean. This is of sufficient accuracy for application in landscape planning and design models. A physically-based model (COMFA) was developed for the determination of outdoor human thermal comfort from microclimate inputs. Estimated versus measured comfort levels in a wide range of environments agreed with a correlation coefficient of r = 0.91. Using these components, the CLIMACS concept has been applied to a typical planning example. Microclimate data were generated from weather station information using MICROSIM, then input to COMFA and to a house energy consumption model called HOTCAN to derive quantitative climatic justification for design decisions.
Shakiba, Maryam; Mansournia, Mohammad Ali; Salari, Arsalan; Soori, Hamid; Mansournia, Nasrin; Kaufman, Jay S
2018-06-01
In longitudinal studies, standard analysis may yield biased estimates of exposure effect in the presence of time-varying confounders that are also intermediate variables. We aimed to quantify the relationship between obesity and coronary heart disease (CHD) by appropriately adjusting for time-varying confounders. This study was performed in a subset of participants from the Atherosclerosis Risk in Communities (ARIC) Study (1987-2010), a US study designed to investigate risk factors for atherosclerosis. General obesity was defined as body mass index (weight (kg)/height (m)2) ≥30, and abdominal obesity (AOB) was defined according to either waist circumference (≥102 cm in men and ≥88 cm in women) or waist:hip ratio (≥0.9 in men and ≥0.85 in women). The association of obesity with CHD was estimated by G-estimation and compared with results from accelerated failure-time models using 3 specifications. The first model, which adjusted for baseline covariates, excluding metabolic mediators of obesity, showed increased risk of CHD for all obesity measures. Further adjustment for metabolic mediators in the second model and time-varying variables in the third model produced negligible changes in the hazard ratios. The hazard ratios estimated by G-estimation were 1.15 (95% confidence interval (CI): 0.83, 1.47) for general obesity, 1.65 (95% CI: 1.35, 1.92) for AOB based on waist circumference, and 1.38 (95% CI: 1.13, 1.99) for AOB based on waist:hip ratio, suggesting that AOB increased the risk of CHD. The G-estimated hazard ratios for both measures were further from the null than those derived from standard models.
Ergon, T.; Yoccoz, N.G.; Nichols, J.D.; Thomson, David L.; Cooch, Evan G.; Conroy, Michael J.
2009-01-01
In many species, age or time of maturation and survival costs of reproduction may vary substantially within and among populations. We present a capture-mark-recapture model to estimate the latent individual trait distribution of time of maturation (or other irreversible transitions) as well as survival differences associated with the two states (representing costs of reproduction). Maturation can take place at any point in continuous time, and mortality hazard rates for each reproductive state may vary according to continuous functions over time. Although we explicitly model individual heterogeneity in age/time of maturation, we make the simplifying assumption that death hazard rates do not vary among individuals within groups of animals. However, the estimates of the maturation distribution are fairly robust against individual heterogeneity in survival as long as there is no individual level correlation between mortality hazards and latent time of maturation. We apply the model to biweekly capture?recapture data of overwintering field voles (Microtus agrestis) in cyclically fluctuating populations to estimate time of maturation and survival costs of reproduction. Results show that onset of seasonal reproduction is particularly late and survival costs of reproduction are particularly large in declining populations.
Enhancing e-waste estimates: Improving data quality by multivariate Input–Output Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Feng, E-mail: fwang@unu.edu; Design for Sustainability Lab, Faculty of Industrial Design Engineering, Delft University of Technology, Landbergstraat 15, 2628CE Delft; Huisman, Jaco
2013-11-15
Highlights: • A multivariate Input–Output Analysis method for e-waste estimates is proposed. • Applying multivariate analysis to consolidate data can enhance e-waste estimates. • We examine the influence of model selection and data quality on e-waste estimates. • Datasets of all e-waste related variables in a Dutch case study have been provided. • Accurate modeling of time-variant lifespan distributions is critical for estimation. - Abstract: Waste electrical and electronic equipment (or e-waste) is one of the fastest growing waste streams, which encompasses a wide and increasing spectrum of products. Accurate estimation of e-waste generation is difficult, mainly due to lack of high-quality data related to market and socio-economic dynamics. This paper addresses how to enhance e-waste estimates by providing techniques to increase data quality. An advanced, flexible and multivariate Input–Output Analysis (IOA) method is proposed. It links all three pillars in IOA (product sales, stock and lifespan profiles) to construct mathematical relationships between various data points. By applying this method, the data consolidation steps can generate more accurate time-series datasets from the available data pool. This can consequently increase the reliability of e-waste estimates compared to the approach without data processing. A case study in the Netherlands is used to apply the advanced IOA model. As a result, for the first time ever, complete datasets of all three variables for estimating all types of e-waste have been obtained. The result of this study also demonstrates significant disparity between various estimation models, arising from the use of data under different conditions. It shows the importance of applying a multivariate approach and multiple sources to improve data quality for modelling, specifically using appropriate time-varying lifespan parameters. Following the case study, a roadmap with a procedural guideline is provided to enhance e-waste estimation studies.
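A minimal sketch of the sales-stock-lifespan logic underlying the IOA approach is given below: units put on the market in year tau are discarded in year t with a probability given by a lifespan distribution, so waste generation is a convolution of sales with that distribution. The Weibull lifespan and all numbers are illustrative assumptions, not values from the Dutch case study.

```python
import numpy as np

def waste_generated(sales, lifespan_pmf):
    """Waste in year t = sum over purchase years tau of sales[tau] * P(lifespan = t - tau).
    lifespan_pmf[a] is the probability of discard a years after sale."""
    n = len(sales)
    waste = np.zeros(n)
    for t in range(n):
        for tau in range(t + 1):
            age = t - tau
            if age < len(lifespan_pmf):
                waste[t] += sales[tau] * lifespan_pmf[age]
    return waste

sales = np.array([100.0, 120, 150, 160, 170, 180, 185, 190])   # units sold per year (illustrative)
ages = np.arange(15)
shape, scale = 2.0, 6.0                                         # illustrative Weibull lifespan (years)
cdf = 1.0 - np.exp(-(ages / scale) ** shape)
lifespan_pmf = np.diff(np.concatenate(([0.0], cdf)))            # discrete discard probabilities by age

print(waste_generated(sales, lifespan_pmf))
```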
Real-time monitoring of a microbial electrolysis cell using an electrical equivalent circuit model.
Hussain, S A; Perrier, M; Tartakovsky, B
2018-04-01
Efforts in developing microbial electrolysis cells (MECs) resulted in several novel approaches for wastewater treatment and bioelectrosynthesis. Practical implementation of these approaches necessitates the development of an adequate system for real-time (on-line) monitoring and diagnostics of MEC performance. This study describes a simple MEC equivalent electrical circuit (EEC) model and a parameter estimation procedure, which enable such real-time monitoring. The proposed approach involves MEC voltage and current measurements during its operation with periodic power supply connection/disconnection (on/off operation) followed by parameter estimation using either numerical or analytical solution of the model. The proposed monitoring approach is demonstrated using a membraneless MEC with flow-through porous electrodes. Laboratory tests showed that changes in the influent carbon source concentration and composition significantly affect MEC total internal resistance and capacitance estimated by the model. Fast response of these EEC model parameters to changes in operating conditions enables the development of a model-based approach for real-time monitoring and fault detection.
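As a hedged illustration of the kind of parameter estimation described (not the authors' exact EEC, which may contain additional elements), the sketch below fits a first-order RC relaxation to the voltage transient recorded after the power supply is disconnected and derives a total internal resistance and capacitance from it; all names and values are synthetic assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def relaxation(t, v_oc, dv, tau):
    """First-order RC relaxation after disconnection: V(t) = V_oc + dV * exp(-t / tau)."""
    return v_oc + dv * np.exp(-t / tau)

# synthetic "off-phase" voltage transient (volts) sampled at 1 s intervals
t = np.arange(0.0, 120.0, 1.0)
rng = np.random.default_rng(4)
v = relaxation(t, v_oc=0.30, dv=0.55, tau=25.0) + rng.normal(0.0, 0.005, t.size)

(v_oc, dv, tau), _ = curve_fit(relaxation, t, v, p0=(0.2, 0.5, 10.0))
i_applied = 0.02                 # current just before disconnection (A), assumed known
r_total = dv / i_applied         # total internal resistance from the relaxed voltage drop
c_total = tau / r_total          # capacitance from the relaxation time constant
print(f"R_total ~ {r_total:.1f} ohm, C ~ {c_total:.2f} F")
```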
Mediation Analysis with Survival Outcomes: Accelerated Failure Time vs. Proportional Hazards Models
Gelfand, Lois A.; MacKinnon, David P.; DeRubeis, Robert J.; Baraldi, Amanda N.
2016-01-01
Objective: Survival time is an important type of outcome variable in treatment research. Currently, limited guidance is available regarding performing mediation analyses with survival outcomes, which generally do not have normally distributed errors, and contain unobserved (censored) events. We present considerations for choosing an approach, using a comparison of semi-parametric proportional hazards (PH) and fully parametric accelerated failure time (AFT) approaches for illustration. Method: We compare PH and AFT models and procedures in their integration into mediation models and review their ability to produce coefficients that estimate causal effects. Using simulation studies modeling Weibull-distributed survival times, we compare statistical properties of mediation analyses incorporating PH and AFT approaches (employing SAS procedures PHREG and LIFEREG, respectively) under varied data conditions, some including censoring. A simulated data set illustrates the findings. Results: AFT models integrate more easily than PH models into mediation models. Furthermore, mediation analyses incorporating LIFEREG produce coefficients that can estimate causal effects, and demonstrate superior statistical properties. Censoring introduces bias in the coefficient estimate representing the treatment effect on outcome—underestimation in LIFEREG, and overestimation in PHREG. With LIFEREG, this bias can be addressed using an alternative estimate obtained from combining other coefficients, whereas this is not possible with PHREG. Conclusions: When Weibull assumptions are not violated, there are compelling advantages to using LIFEREG over PHREG for mediation analyses involving survival-time outcomes. Irrespective of the procedures used, the interpretation of coefficients, effects of censoring on coefficient estimates, and statistical properties should be taken into account when reporting results. PMID:27065906
Time Delay Embedding Increases Estimation Precision of Models of Intraindividual Variability
ERIC Educational Resources Information Center
von Oertzen, Timo; Boker, Steven M.
2010-01-01
This paper investigates the precision of parameters estimated from local samples of time dependent functions. We find that "time delay embedding," i.e., structuring data prior to analysis by constructing a data matrix of overlapping samples, increases the precision of parameter estimates and in turn statistical power compared to standard…
Verification of the exponential model of body temperature decrease after death in pigs.
Kaliszan, Michal; Hauser, Roman; Kaliszan, Roman; Wiczling, Paweł; Buczyński, Janusz; Penkowski, Michal
2005-09-01
The authors have conducted a systematic study in pigs to verify the models of post-mortem body temperature decrease currently employed in forensic medicine. Twenty-four hour automatic temperature recordings were performed in four body sites starting 1.25 h after pig killing in an industrial slaughterhouse under typical environmental conditions (19.5-22.5 degrees C). The animals had been randomly selected under a regular manufacturing process. The temperature decrease time plots drawn starting 75 min after death for the eyeball, the orbit soft tissues, the rectum and muscle tissue were found to fit the single-exponential thermodynamic model originally proposed by H. Rainy in 1868. In view of the actual intersubject variability, the addition of a second exponential term to the model was demonstrated to be statistically insignificant. Therefore, the two-exponential model for death time estimation frequently recommended in the forensic medicine literature, even if theoretically substantiated for individual test cases, provides no advantage as regards the reliability of estimation in an actual case. The improvement of the precision of time of death estimation by the reconstruction of an individual curve on the basis of two dead body temperature measurements taken 1 h apart or taken continuously for a longer time (about 4 h), has also been proved incorrect. It was demonstrated that the reported increase of precision of time of death estimation due to use of a multiexponential model, with individual exponential terms to account for the cooling rate of the specific body sites separately, is artifactual. The results of this study support the use of the eyeball and/or the orbit soft tissues as temperature measuring sites at times shortly after death. A single-exponential model applied to the eyeball cooling has been shown to provide a very precise estimation of the time of death up to approximately 13 h after death. For the period thereafter, a better estimation of the time of death is obtained from temperature data collected from the muscles or the rectum.
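A minimal sketch of the single-exponential cooling model supported by the study, and the closed-form estimate of the post-mortem interval it yields, is given below; the rate constant and temperatures are illustrative placeholders rather than values reported for the eyeball or orbit.

```python
import numpy as np

def temperature(t, T0, T_amb, k):
    """Single-exponential (Newtonian) cooling: T(t) = T_amb + (T0 - T_amb) * exp(-k t)."""
    return T_amb + (T0 - T_amb) * np.exp(-k * t)

def time_since_death(T_meas, T0, T_amb, k):
    """Invert the single-exponential model for the post-mortem interval (hours)."""
    return -np.log((T_meas - T_amb) / (T0 - T_amb)) / k

T0, T_amb, k = 37.0, 21.0, 0.20            # illustrative: initial/ambient temperature (deg C), rate (1/h)
T_eye = temperature(6.0, T0, T_amb, k)      # simulated eyeball reading 6 h after death
print(f"measured {T_eye:.2f} C -> estimated interval {time_since_death(T_eye, T0, T_amb, k):.2f} h")
```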
Cumulus cloud model estimates of trace gas transports
NASA Technical Reports Server (NTRS)
Garstang, Michael; Scala, John; Simpson, Joanne; Tao, Wei-Kuo; Thompson, A.; Pickering, K. E.; Harris, R.
1989-01-01
Draft structures in convective clouds are examined with reference to the results of the NASA Amazon Boundary Layer Experiments (ABLE IIa and IIb) and calculations based on a multidimensional time dependent dynamic and microphysical numerical cloud model. It is shown that some aspects of the draft structures can be calculated from measurements of the cloud environment. Estimated residence times in the lower regions of the cloud based on surface observations (divergence and vertical velocities) are within the same order of magnitude (about 20 min) as model trajectory estimates.
NASA Astrophysics Data System (ADS)
Amini, Changeez; Taherpour, Abbas; Khattab, Tamer; Gazor, Saeed
2017-01-01
This paper presents an improved propagation channel model for visible light in indoor environments. We employ this model to derive an enhanced positioning algorithm based on the relation between the times of arrival (TOAs) and the distances, for two cases in which the vertical distance between transmitter and receiver is either known or unknown. We propose two estimators, namely a maximum likelihood estimator and an estimator based on the method of moments. To provide an evaluation basis for these methods, we calculate the Cramer-Rao lower bound (CRLB) on the performance of the estimators. We show that the proposed model and estimators result in superior positioning performance when the transmitter and receiver are perfectly synchronized, in comparison to the existing state-of-the-art counterparts. Moreover, the corresponding CRLB of the proposed model represents roughly a 20 dB reduction in the localization error bound in comparison with the previous model for some practical scenarios.
SEM Based CARMA Time Series Modeling for Arbitrary N.
Oud, Johan H L; Voelkle, Manuel C; Driver, Charles C
2018-01-01
This article explains in detail the state space specification and estimation of first and higher-order autoregressive moving-average models in continuous time (CARMA) in an extended structural equation modeling (SEM) context for N = 1 as well as N > 1. To illustrate the approach, simulations will be presented in which a single panel model (T = 41 time points) is estimated for a sample of N = 1,000 individuals as well as for samples of N = 100 and N = 50 individuals, followed by estimating 100 separate models for each of the one-hundred N = 1 cases in the N = 100 sample. Furthermore, we will demonstrate how to test the difference between the full panel model and each N = 1 model by means of a subject-group-reproducibility test. Finally, the proposed analyses will be applied in an empirical example, in which the relationships between mood at work and mood at home are studied in a sample of N = 55 women. All analyses are carried out by ctsem, an R-package for continuous time modeling, interfacing to OpenMx.
NASA Astrophysics Data System (ADS)
Zhou, Si-Da; Ma, Yuan-Chen; Liu, Li; Kang, Jie; Ma, Zhi-Sai; Yu, Lei
2018-01-01
Identification of time-varying modal parameters contributes to the structural health monitoring, fault detection, vibration control, etc. of operational time-varying structural systems. However, it is a challenging task because no more information is available for identifying time-varying systems than for time-invariant ones. This paper presents a modal parameter estimator for linear time-varying structural systems under output-only measurements, based on a vector time-dependent autoregressive model and least squares support vector machines. To reduce the computational cost, Wendland's compactly supported radial basis function is used to achieve sparsity of the Gram matrix. A Gamma-test-based non-parametric approach to selecting the regularization factor is adapted for the proposed estimator to replace the time-consuming n-fold cross validation. A series of numerical examples illustrates the advantages of the proposed modal parameter estimator in suppressing overestimation and in handling short data records. A laboratory experiment further validates the proposed estimator.
NASA Astrophysics Data System (ADS)
Lima, C. H.; Lall, U.
2010-12-01
Flood frequency statistical analysis most often relies on stationary assumptions, where distribution moments (e.g. mean, standard deviation) and associated flood quantiles do not change over time. In this sense, one expects that flood magnitudes and their frequency of occurrence will remain constant as observed in the historical information. However, evidence of inter-annual and decadal climate variability and anthropogenic change, as well as an apparent increase in the number and magnitude of flood events across the globe, have made the stationary assumption questionable. Here, we show how to estimate flood quantiles (e.g. the 100-year flood) at ungauged basins without needing to assume stationarity. A statistical model based on the well-known flow-area scaling law is proposed to estimate flood flows at ungauged basins. The slope and intercept scaling law coefficients are assumed time-varying, and a hierarchical Bayesian model is used to include climate information and reduce parameter uncertainties. Cross-validated results from 34 streamflow gauges located in a nested basin in Brazil show that the proposed model is able to estimate flood quantiles at ungauged basins with remarkable skill compared with data-based estimates using the full record. The model as developed in this work is also able to simulate sequences of flood flows under global climate change, provided an appropriate climate index developed from a general circulation model is used as a predictor. The time-varying flood frequency estimates can be used for pricing insurance, in a forecast mode for flood preparedness, and for the timing and siting of infrastructure investments. [Figure: non-stationary 95% interval estimate for the 100-year flood (shaded gray region), the 95% interval for the 100-year flood estimated from data (horizontal dashed and solid lines), and the average distribution of the 100-year flood (green, right side).]
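The sketch below illustrates the flow-area scaling regression in its simplest, stationary form (log Q = log a + b log A, fit by ordinary least squares and applied to an ungauged area); in the study the intercept and slope are time-varying and estimated hierarchically with climate covariates, which is not reproduced here. All data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(5)

# synthetic 100-year flood estimates (m^3/s) and drainage areas (km^2) at gauged sites
area = rng.uniform(200.0, 50000.0, size=34)
q100 = 2.5 * area ** 0.75 * np.exp(rng.normal(0.0, 0.15, size=34))   # synthetic scaling-law data

# log-log regression: log Q = log a + b log A
X = np.column_stack([np.ones_like(area), np.log(area)])
coef, *_ = np.linalg.lstsq(X, np.log(q100), rcond=None)
log_a, b = coef

area_ungauged = 12000.0
q100_ungauged = np.exp(log_a + b * np.log(area_ungauged))
print(f"scaling exponent b = {b:.2f}, 100-year flood at ungauged site ~ {q100_ungauged:.0f} m^3/s")
```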
A new approach to estimate time-to-cure from cancer registries data.
Boussari, Olayidé; Romain, Gaëlle; Remontet, Laurent; Bossard, Nadine; Mounier, Morgane; Bouvier, Anne-Marie; Binquet, Christine; Colonna, Marc; Jooste, Valérie
2018-04-01
Cure models have been adapted to net survival context to provide important indicators from population-based cancer data, such as the cure fraction and the time-to-cure. However existing methods for computing time-to-cure suffer from some limitations. Cure models in net survival framework were briefly overviewed and a new definition of time-to-cure was introduced as the time TTC at which P(t), the estimated covariate-specific probability of being cured at a given time t after diagnosis, reaches 0.95. We applied flexible parametric cure models to data of four cancer sites provided by the French network of cancer registries (FRANCIM). Then estimates of the time-to-cure by TTC and by two existing methods were derived and compared. Cure fractions and probabilities P(t) were also computed. Depending on the age group, TTC ranged from to 8 to 10 years for colorectal and pancreatic cancer and was nearly 12 years for breast cancer. In thyroid cancer patients under 55 years at diagnosis, TTC was strikingly 0: the probability of being cured was >0.95 just after diagnosis. This is an interesting result regarding the health insurance premiums of these patients. The estimated values of time-to-cure from the three approaches were close for colorectal cancer only. We propose a new approach, based on estimated covariate-specific probability of being cured, to estimate time-to-cure. Compared to two existing methods, the new approach seems to be more intuitive and natural and less sensitive to the survival time distribution. Copyright © 2018 Elsevier Ltd. All rights reserved.
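As an illustration of the proposed definition (not the flexible parametric cure models fitted to the FRANCIM data), the sketch below assumes a simple mixture cure model with cure fraction pi and Weibull survival for the uncured, for which the probability of being cured among survivors is P(t) = pi / (pi + (1 - pi) S_u(t)); the time-to-cure is then the first t at which P(t) reaches 0.95. All parameter values are hypothetical.

```python
import numpy as np

def prob_cured(t, pi, shape, scale):
    """P(cured | alive at t) under a mixture cure model with Weibull survival for the uncured."""
    s_u = np.exp(-(t / scale) ** shape)          # S_u(t), survival of the uncured
    return pi / (pi + (1.0 - pi) * s_u)

def time_to_cure(pi, shape, scale, threshold=0.95):
    """Smallest t (years) at which the probability of being cured reaches the threshold."""
    if pi >= threshold:                          # cured essentially at diagnosis
        return 0.0
    grid = np.linspace(0.0, 40.0, 40001)
    p = prob_cured(grid, pi, shape, scale)
    return grid[np.argmax(p >= threshold)]

# illustrative parameters only (not FRANCIM estimates)
print(time_to_cure(pi=0.55, shape=1.3, scale=2.0))   # colorectal-like scenario
print(time_to_cure(pi=0.97, shape=1.2, scale=3.0))   # thyroid-like scenario -> 0.0
```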
A quantile regression model for failure-time data with time-dependent covariates
Gorfine, Malka; Goldberg, Yair; Ritov, Ya’acov
2017-01-01
Since survival data occur over time, often important covariates that we wish to consider also change over time. Such covariates are referred to as time-dependent covariates. Quantile regression offers flexible modeling of survival data by allowing the covariates to vary with quantiles. This article provides a novel quantile regression model accommodating time-dependent covariates, for analyzing survival data subject to right censoring. Our simple estimation technique assumes the existence of instrumental variables. In addition, we present a doubly-robust estimator in the sense of Robins and Rotnitzky (1992, Recovery of information and adjustment for dependent censoring using surrogate markers. In: Jewell, N. P., Dietz, K. and Farewell, V. T. (editors), AIDS Epidemiology. Boston: Birkhäuser, pp. 297–331.). The asymptotic properties of the estimators are rigorously studied. Finite-sample properties are demonstrated by a simulation study. The utility of the proposed methodology is demonstrated using the Stanford heart transplant dataset. PMID:27485534
Estimation of sojourn time in chronic disease screening without data on interval cases.
Chen, T H; Kuo, H S; Yen, M F; Lai, M S; Tabar, L; Duffy, S W
2000-03-01
Estimation of the sojourn time in the preclinical detectable period in disease screening, or of the transition rates for the natural history of chronic disease, usually relies on interval cases (diagnosed between screens). However, ascertaining such cases might be difficult in developing countries due to incomplete registration systems and difficulties in follow-up. To overcome this problem, we propose three Markov models to estimate parameters without using interval cases. A three-state Markov model, a five-state Markov model related to regional lymph node spread, and a five-state Markov model pertaining to tumor size are applied to data on breast cancer screening in female relatives of breast cancer cases in Taiwan. Results based on a three-state Markov model give a mean sojourn time (MST) of 1.90 (95% CI: 1.18-4.86) years for this high-risk group. Validation of these models on the basis of data on breast cancer screening in the age groups 50-59 and 60-69 years from the Swedish Two-County Trial shows that the estimates from a three-state Markov model that does not use interval cases are very close to those from previous Markov models taking interval cancers into account. For the five-state Markov model, a reparameterized procedure using auxiliary information on clinically detected cancers is performed to estimate relevant parameters. A good fit in internal and external validation demonstrates the feasibility of using these models to estimate parameters that have previously required interval cancers. This method can be applied to other screening data in which there are no data on interval cases.
Using effort information with change-in-ratio data for population estimation
Udevitz, Mark S.; Pollock, Kenneth H.
1995-01-01
Most change-in-ratio (CIR) methods for estimating fish and wildlife population sizes have been based only on assumptions about how encounter probabilities vary among population subclasses. When information on sampling effort is available, it is also possible to derive CIR estimators based on assumptions about how encounter probabilities vary over time. This paper presents a generalization of previous CIR models that allows explicit consideration of a range of assumptions about the variation of encounter probabilities among subclasses and over time. Explicit estimators are derived under this model for specific sets of assumptions about the encounter probabilities. Numerical methods are presented for obtaining estimators under the full range of possible assumptions. Likelihood ratio tests for these assumptions are described. Emphasis is on obtaining estimators based on assumptions about variation of encounter probabilities over time.
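For orientation, the sketch below implements the classical two-sample change-in-ratio (Paulik-Robson) estimator, which assumes equal encounter probabilities across subclasses within each survey; the generalized estimators in the paper relax or alter such assumptions and can exploit effort information, which is not shown here. The numbers are made up.

```python
def change_in_ratio_estimate(p1, p2, removals_x, removals_total):
    """Classical two-sample change-in-ratio (Paulik-Robson) estimator of the pre-removal
    population size. p1, p2: proportion of subclass x observed before and after a known
    removal; removals_x: number of subclass x removed; removals_total: total removed."""
    if p1 == p2:
        raise ValueError("subclass proportions must change for CIR to be identifiable")
    return (removals_x - p2 * removals_total) / (p1 - p2)

# e.g. 60% males before a harvest of 300 males out of 400 animals, 45% males afterwards
N1 = change_in_ratio_estimate(p1=0.60, p2=0.45, removals_x=300, removals_total=400)
print(f"estimated pre-removal population size: {N1:.0f}")   # 800
```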
Analysis models for the estimation of oceanic fields
NASA Technical Reports Server (NTRS)
Carter, E. F.; Robinson, A. R.
1987-01-01
A general model for statistically optimal estimates is presented for dealing with scalar, vector and multivariate datasets. The method deals with anisotropic fields and treats space and time dependence equivalently. Problems addressed include the analysis, or the production of synoptic time series of regularly gridded fields from irregular and gappy datasets, and the estimate of fields by compositing observations from several different instruments and sampling schemes. Technical issues are discussed, including the convergence of statistical estimates, the choice of representation of the correlations, the influential domain of an observation, and the efficiency of numerical computations.
Jewett, Ethan M.; Steinrücken, Matthias; Song, Yun S.
2016-01-01
Many approaches have been developed for inferring selection coefficients from time series data while accounting for genetic drift. These approaches have been motivated by the intuition that properly accounting for the population size history can significantly improve estimates of selective strengths. However, the improvement in inference accuracy that can be attained by modeling drift has not been characterized. Here, by comparing maximum likelihood estimates of selection coefficients that account for the true population size history with estimates that ignore drift by assuming allele frequencies evolve deterministically in a population of infinite size, we address the following questions: how much can modeling the population size history improve estimates of selection coefficients? How much can mis-inferred population sizes hurt inferences of selection coefficients? We conduct our analysis under the discrete Wright–Fisher model by deriving the exact probability of an allele frequency trajectory in a population of time-varying size and we replicate our results under the diffusion model. For both models, we find that ignoring drift leads to estimates of selection coefficients that are nearly as accurate as estimates that account for the true population history, even when population sizes are small and drift is high. This result is of interest because inference methods that ignore drift are widely used in evolutionary studies and can be many orders of magnitude faster than methods that account for population sizes. PMID:27550904
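A minimal sketch of the "ignore drift" strategy compared in the study is given below: allele frequencies follow the deterministic genic-selection recursion and the selection coefficient is chosen to maximize a binomial sampling likelihood over a grid. The exact likelihood that accounts for drift and the population-size history is not reproduced; sample sizes and generations are illustrative.

```python
import numpy as np

def deterministic_trajectory(p0, s, n_gen):
    """Deterministic genic-selection recursion: p' = p(1+s) / (1 + p s)."""
    p = np.empty(n_gen + 1)
    p[0] = p0
    for t in range(n_gen):
        p[t + 1] = p[t] * (1.0 + s) / (1.0 + p[t] * s)
    return p

def loglik_ignoring_drift(s, p0, sample_gens, counts, sample_sizes):
    """Binomial log-likelihood of observed allele counts, drift ignored."""
    p = deterministic_trajectory(p0, s, max(sample_gens))[sample_gens]
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return np.sum(counts * np.log(p) + (sample_sizes - counts) * np.log(1.0 - p))

# synthetic time series: samples of 50 chromosomes every 20 generations, true s = 0.05
gens = np.array([0, 20, 40, 60, 80])
true_p = deterministic_trajectory(0.1, 0.05, 80)[gens]
rng = np.random.default_rng(6)
counts = rng.binomial(50, true_p)

grid = np.linspace(-0.1, 0.2, 301)
ll = [loglik_ignoring_drift(s, 0.1, gens, counts, 50) for s in grid]
print("estimated s:", grid[int(np.argmax(ll))])
```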
Poole, Sandra; Vis, Marc; Knight, Rodney; Seibert, Jan
2017-01-01
Ecologically relevant streamflow characteristics (SFCs) of ungauged catchments are often estimated from simulated runoff of hydrologic models that were originally calibrated on gauged catchments. However, SFC estimates of the gauged donor catchments and subsequently the ungauged catchments can be substantially uncertain when models are calibrated using traditional approaches based on optimization of statistical performance metrics (e.g., Nash–Sutcliffe model efficiency). An improved calibration strategy for gauged catchments is therefore crucial to help reduce the uncertainties of estimated SFCs for ungauged catchments. The aim of this study was to improve SFC estimates from modeled runoff time series in gauged catchments by explicitly including one or several SFCs in the calibration process. Different types of objective functions were defined consisting of the Nash–Sutcliffe model efficiency, single SFCs, or combinations thereof. We calibrated a bucket-type runoff model (HBV – Hydrologiska Byråns Vattenavdelning – model) for 25 catchments in the Tennessee River basin and evaluated the proposed calibration approach on 13 ecologically relevant SFCs representing major flow regime components and different flow conditions. While the model generally tended to underestimate the tested SFCs related to mean and high-flow conditions, SFCs related to low flow were generally overestimated. The highest estimation accuracies were achieved by a SFC-specific model calibration. Estimates of SFCs not included in the calibration process were of similar quality when comparing a multi-SFC calibration approach to a traditional model efficiency calibration. For practical applications, this implies that SFCs should preferably be estimated from targeted runoff model calibration, and modeled estimates need to be carefully interpreted.
Estimation of river and stream temperature trends under haphazard sampling
Gray, Brian R.; Lyubchich, Vyacheslav; Gel, Yulia R.; Rogala, James T.; Robertson, Dale M.; Wei, Xiaoqiao
2015-01-01
Long-term temporal trends in water temperature in rivers and streams are typically estimated under the assumption of evenly-spaced space-time measurements. However, sampling times and dates associated with historical water temperature datasets and some sampling designs may be haphazard. As a result, trends in temperature may be confounded with trends in time or space of sampling which, in turn, may yield biased trend estimators and thus unreliable conclusions. We address this concern using multilevel (hierarchical) linear models, where time effects are allowed to vary randomly by day and date effects by year. We evaluate the proposed approach by Monte Carlo simulations with imbalance, sparse data and confounding by trend in time and date of sampling. Simulation results indicate unbiased trend estimators while results from a case study of temperature data from the Illinois River, USA conform to river thermal assumptions. We also propose a new nonparametric bootstrap inference on multilevel models that allows for a relatively flexible and distribution-free quantification of uncertainties. The proposed multilevel modeling approach may be elaborated to accommodate nonlinearities within days and years when sampling times or dates typically span temperature extremes.
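A simplified sketch of the multilevel idea using statsmodels: a long-term trend is estimated while letting intercepts vary randomly by sampling year, so that year-to-year imbalance in sampling dates and times is partly absorbed by the random effects. The full crossed random-effects structure of the paper (time effects varying by day, date effects by year) and the nonparametric bootstrap are not reproduced; column names and the data-generating step are invented.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
years = np.repeat(np.arange(1990, 2010), 40)        # haphazard sampling within each year
doy = rng.integers(100, 280, size=years.size)       # day of year of each sample
hour = rng.uniform(6, 20, size=years.size)          # time of day of each sample

temp = (15 + 0.04 * (years - 1990)                  # true warming trend: 0.04 deg C / yr
        + 5 * np.sin(2 * np.pi * doy / 365)         # seasonal cycle
        + 0.3 * (hour - 13)                         # within-day variation
        + rng.normal(0, 1.0, size=years.size))

df = pd.DataFrame({"temp": temp, "year": years, "doy": doy, "hour": hour})

# Random intercepts by year absorb year-level deviations around the linear trend
model = smf.mixedlm("temp ~ year + np.sin(2*np.pi*doy/365) + hour",
                    df, groups=df["year"])
fit = model.fit()
print(fit.params["year"])   # estimated trend, deg C per year
```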
Real-time 3-D space numerical shake prediction for earthquake early warning
NASA Astrophysics Data System (ADS)
Wang, Tianyun; Jin, Xing; Huang, Yandan; Wei, Yongxiang
2017-12-01
In earthquake early warning systems, real-time shake prediction through wave propagation simulation is a promising approach. Compared with traditional methods, it does not suffer from the inaccurate estimation of source parameters. For computational efficiency, these methods assume that the wave propagates on the 2-D surface of the earth. In fact, since seismic waves propagate through the 3-D sphere of the earth, 2-D modeling of wave propagation results in inaccurate wave estimation. In this paper, we propose a 3-D space numerical shake prediction method, which simulates wave propagation in 3-D space using radiative transfer theory and incorporates a data assimilation technique to estimate the distribution of wave energy. The 2011 Tohoku earthquake is studied as an example to show the validity of the proposed model. The 2-D and 3-D space models are compared in this article, and the prediction results show that numerical shake prediction based on the 3-D space model can estimate real-time ground motion precisely, and that overprediction is alleviated when the 3-D space model is used.
Evaluating abundance and trends in a Hawaiian avian community using state-space analysis
Camp, Richard J.; Brinck, Kevin W.; Gorresen, P.M.; Paxton, Eben H.
2016-01-01
Estimating population abundances and patterns of change over time are important in both ecology and conservation. Trend assessment typically entails fitting a regression to a time series of abundances to estimate population trajectory. However, changes in abundance estimates from year-to-year across time are due to both true variation in population size (process variation) and variation due to imperfect sampling and model fit. State-space models are a relatively new method that can be used to partition the error components and quantify trends based only on process variation. We compare a state-space modelling approach with a more traditional linear regression approach to assess trends in uncorrected raw counts and detection-corrected abundance estimates of forest birds at Hakalau Forest National Wildlife Refuge, Hawai‘i. Most species demonstrated similar trends using either method. In general, evidence for trends using state-space models was less strong than for linear regression, as measured by estimates of precision. However, while the state-space models may sacrifice precision, the expectation is that these estimates provide a better representation of the real world biological processes of interest because they are partitioning process variation (environmental and demographic variation) and observation variation (sampling and model variation). The state-space approach also provides annual estimates of abundance which can be used by managers to set conservation strategies, and can be linked to factors that vary by year, such as climate, to better understand processes that drive population trends.
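The core idea, partitioning year-to-year changes into process variation and observation error, can be sketched with a local-trend (random-walk-with-drift) state-space model and a Kalman filter. The variances, counts, and grid search below are invented for illustration and are not the Hakalau data or the fitting procedure used in the study.

```python
import numpy as np

def kalman_loglik(y, q, r, drift):
    """Log-likelihood of a random-walk-with-drift state-space model.
    state:  n_t = n_{t-1} + drift + w_t,  w_t ~ N(0, q)   (process variation)
    obs:    y_t = n_t + v_t,              v_t ~ N(0, r)   (observation error)"""
    m, P = y[0], 1e4            # diffuse initialization at the first observation
    ll = 0.0
    for t in range(1, len(y)):
        m_pred, P_pred = m + drift, P + q          # predict
        S = P_pred + r                             # innovation variance
        ll += -0.5 * (np.log(2 * np.pi * S) + (y[t] - m_pred) ** 2 / S)
        K = P_pred / S                             # Kalman gain
        m = m_pred + K * (y[t] - m_pred)           # update
        P = (1 - K) * P_pred
    return ll

# toy log-abundance series with a true decline of about 2% per year
rng = np.random.default_rng(3)
true = np.cumsum(np.r_[4.0, rng.normal(-0.02, 0.05, 24)])   # process variation
y = true + rng.normal(0, 0.15, 25)                          # detection/sampling error

# crude grid search over the drift (trend) holding the variances at plausible values
drifts = np.linspace(-0.1, 0.1, 201)
best = drifts[np.argmax([kalman_loglik(y, 0.05**2, 0.15**2, d) for d in drifts])]
print(f"estimated trend on the log scale: {best:.3f} per year")
```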
ERIC Educational Resources Information Center
Zvoch, Keith
2016-01-01
Piecewise growth models (PGMs) were used to estimate and model changes in the preliteracy skill development of kindergartners in a moderately sized school district in the Pacific Northwest. PGMs were applied to interrupted time-series (ITS) data that arose within the context of a response-to-intervention (RtI) instructional framework. During the…
ERIC Educational Resources Information Center
Molenaar, Peter C. M.; Nesselroade, John R.
1998-01-01
Pseudo-Maximum Likelihood (p-ML) and Asymptotically Distribution Free (ADF) estimation methods for estimating dynamic factor model parameters within a covariance structure framework were compared through a Monte Carlo simulation. Both methods appear to give consistent model parameter estimates, but only ADF gives standard errors and chi-square…
Yobbi, D.K.
2000-01-01
A nonlinear least-squares regression technique for estimation of ground-water flow model parameters was applied to an existing model of the regional aquifer system underlying west-central Florida. The regression technique minimizes the differences between measured and simulated water levels. Regression statistics, including parameter sensitivities and correlations, were calculated for reported parameter values in the existing model. Optimal parameter values for selected hydrologic variables of interest were estimated by nonlinear regression. Optimal estimates of parameter values range from about 0.01 times to about 140 times the reported values. Independently estimating all parameters by nonlinear regression was impossible, given the existing zonation structure and number of observations, because of parameter insensitivity and correlation. Although the model yields parameter values similar to those estimated by other methods and reproduces the measured water levels reasonably accurately, a simpler parameter structure should be considered. Some possible ways of improving model calibration are to: (1) modify the defined parameter-zonation structure by omitting and/or combining parameters to be estimated; (2) carefully eliminate observation data based on evidence that they are likely to be biased; (3) collect additional water-level data; (4) assign values to insensitive parameters; and (5) estimate the most sensitive parameters first, then, using the optimized values for these parameters, estimate the entire data set.
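A minimal sketch of the regression idea with scipy: parameters of a deliberately simple, made-up head model are adjusted to minimize differences between measured and simulated water levels, and rough parameter sensitivities and correlations are read off the Jacobian. The two-parameter model is illustrative only and is not the regional aquifer model.

```python
import numpy as np
from scipy.optimize import least_squares

# distances of observation wells from a pumping center (toy setup)
r = np.linspace(50.0, 2000.0, 25)

def simulated_heads(params, r):
    """Very simple stand-in for a flow model: h(r) = h0 - a * log(r)."""
    h0, a = params
    return h0 - a * np.log(r)

# synthetic "measured" water levels
rng = np.random.default_rng(4)
h_obs = simulated_heads([120.0, 3.5], r) + rng.normal(0, 0.2, r.size)

def residuals(params):
    return simulated_heads(params, r) - h_obs

fit = least_squares(residuals, x0=[100.0, 1.0])
J = fit.jac                                       # sensitivities of residuals to parameters
cov = np.linalg.inv(J.T @ J) * np.var(fit.fun)    # rough parameter covariance
corr = cov[0, 1] / np.sqrt(cov[0, 0] * cov[1, 1])
print("estimates:", fit.x, " parameter correlation:", round(corr, 3))
```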
Hidden Markov models for fault detection in dynamic systems
NASA Technical Reports Server (NTRS)
Smyth, Padhraic J. (Inventor)
1995-01-01
The invention is a system failure monitoring method and apparatus which learns the symptom-fault mapping directly from training data. The invention first estimates the state of the system at discrete intervals in time. A feature vector x of dimension k is estimated from sets of successive windows of sensor data. A pattern recognition component then models the instantaneous estimate of the posterior class probability given the features, p(w_i | x), 1 ≤ i ≤ m. Finally, a hidden Markov model is used to take advantage of temporal context and estimate class probabilities conditioned on recent past history. In this hierarchical pattern of information flow, the time series data is transformed and mapped into a categorical representation (the fault classes) and integrated over time to enable robust decision-making.
Hidden Markov models for fault detection in dynamic systems
NASA Technical Reports Server (NTRS)
Smyth, Padhraic J. (Inventor)
1993-01-01
The invention is a system failure monitoring method and apparatus which learns the symptom-fault mapping directly from training data. The invention first estimates the state of the system at discrete intervals in time. A feature vector x of dimension k is estimated from sets of successive windows of sensor data. A pattern recognition component then models the instantaneous estimate of the posterior class probability given the features, p(w_i | x), 1 ≤ i ≤ m. Finally, a hidden Markov model is used to take advantage of temporal context and estimate class probabilities conditioned on recent past history. In this hierarchical pattern of information flow, the time series data is transformed and mapped into a categorical representation (the fault classes) and integrated over time to enable robust decision-making.
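A minimal sketch of the final stage described above: per-window class probabilities (produced here by a made-up pattern recognizer) are integrated over time with an HMM forward recursion, so that fault-class estimates use recent history rather than instantaneous evidence alone. The transition matrix and the two-class setup are invented, and treating the instantaneous posteriors as evidence weights is a simplification of the full formulation.

```python
import numpy as np

# Two classes: 0 = nominal, 1 = fault.  The transition matrix encodes temporal persistence.
A = np.array([[0.99, 0.01],
              [0.05, 0.95]])
prior = np.array([0.99, 0.01])

def hmm_filter(instantaneous_post, A, prior):
    """Combine instantaneous P(class | features) with temporal context (forward recursion).
    instantaneous_post[t, i] ~ p(w_i | x_t) from the pattern-recognition stage."""
    T, m = instantaneous_post.shape
    belief = prior.copy()
    filtered = np.zeros((T, m))
    for t in range(T):
        belief = A.T @ belief                     # predict one step ahead
        belief *= instantaneous_post[t]           # weight by instantaneous evidence
        belief /= belief.sum()                    # normalize
        filtered[t] = belief
    return filtered

# toy stream: the classifier briefly "flickers" toward the fault class at t = 10..12
inst = np.tile([0.9, 0.1], (30, 1)).astype(float)
inst[10:13] = [0.3, 0.7]
print(hmm_filter(inst, A, prior)[9:14].round(3))   # temporal context damps the flicker
```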
Autoregressive-model-based missing value estimation for DNA microarray time series data.
Choong, Miew Keen; Charbit, Maurice; Yan, Hong
2009-01-01
Missing value estimation is important in DNA microarray data analysis. A number of algorithms have been developed to solve this problem, but they have several limitations. Most existing algorithms are not able to deal with the situation where a particular time point (column) of the data is missing entirely. In this paper, we present an autoregressive-model-based missing value estimation method (ARLSimpute) that takes into account the dynamic property of microarray temporal data and the local similarity structures in the data. ARLSimpute is especially effective for the situation where a particular time point contains many missing values or where the entire time point is missing. Experimental results suggest that our proposed algorithm is an accurate missing value estimator in comparison with other imputation methods on simulated as well as real microarray time series datasets.
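A minimal sketch of autoregressive imputation, loosely in the spirit of the approach above: an AR(p) model is fit by least squares to genes with complete profiles, and a missing time point in another gene is predicted from its preceding values. Order selection, local similarity weighting, and the handling of entirely missing columns in the actual algorithm are omitted; the simulated profiles are invented.

```python
import numpy as np

def fit_ar_coeffs(series_matrix, p):
    """Least-squares AR(p) coefficients pooled over several complete time series (rows)."""
    X, y = [], []
    for s in series_matrix:
        for t in range(p, len(s)):
            X.append(s[t - p:t])
            y.append(s[t])
    coef, *_ = np.linalg.lstsq(np.asarray(X), np.asarray(y), rcond=None)
    return coef

def impute_next(series, coef):
    """Predict the value following the last p observed points of a series."""
    p = len(coef)
    return float(np.dot(series[-p:], coef))

rng = np.random.default_rng(5)
# toy "expression" profiles: damped oscillations with noise
t = np.arange(20)
complete = np.array([np.cos(0.4 * t + ph) * np.exp(-0.03 * t) + rng.normal(0, 0.05, 20)
                     for ph in rng.uniform(0, 2 * np.pi, 30)])
coef = fit_ar_coeffs(complete, p=3)

target = np.cos(0.4 * t + 1.0) * np.exp(-0.03 * t)
estimate = impute_next(target[:15], coef)          # pretend time point 15 is missing
print(f"true value: {target[15]:.3f}, imputed: {estimate:.3f}")
```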
Analyzing developmental processes on an individual level using nonstationary time series modeling.
Molenaar, Peter C M; Sinclair, Katerina O; Rovine, Michael J; Ram, Nilam; Corneal, Sherry E
2009-01-01
Individuals change over time, often in complex ways. Generally, studies of change over time have combined individuals into groups for analysis, which is inappropriate in most, if not all, studies of development. The authors explain how to identify appropriate levels of analysis (individual vs. group) and demonstrate how to estimate changes in developmental processes over time using a multivariate nonstationary time series model. They apply this model to describe the changing relationships between a biological son and father and a stepson and stepfather at the individual level. The authors also explain how to use an extended Kalman filter with iteration and smoothing estimator to capture how dynamics change over time. Finally, they suggest further applications of the multivariate nonstationary time series model and detail the next steps in the development of statistical models used to analyze individual-level data.
A simple model for DSS-14 outage times
NASA Technical Reports Server (NTRS)
Rumsey, H. C.; Stevens, R.; Posner, E. C.
1989-01-01
A model is proposed to describe DSS-14 outage times. Discrepancy Reporting System outage data for the period from January 1986 through September 1988 are used to estimate the parameters of the model. The model provides a probability distribution for the duration of outages, which agrees well with observed data. The model depends only on a small number of parameters, and has some heuristic justification. This shows that the Discrepancy Reporting System in the Deep Space Network (DSN) can be used to estimate the probability of extended outages in spite of the discrepancy reports ending when the pass ends. The probability of an outage extending beyond the end of a pass is estimated as around 5 percent.
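The general flavor of the calculation, fitting a parametric distribution to observed outage durations and asking how often an outage would outlast the remainder of a pass, can be sketched as below. The lognormal choice and all numbers are assumptions for illustration; they are not the model or data in the report.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
# toy outage durations in minutes (the real data come from Discrepancy Reports)
outages = rng.lognormal(mean=2.0, sigma=1.0, size=300)

# fit a lognormal by maximum likelihood (location fixed at zero)
shape, loc, scale = stats.lognorm.fit(outages, floc=0)

# probability that an outage starting with 2 hours left in the pass outlasts the pass
remaining = 120.0
p_extended = stats.lognorm.sf(remaining, shape, loc=loc, scale=scale)
print(f"P(outage > {remaining:.0f} min remaining) = {p_extended:.3f}")
```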
Fleischman, Ross J.; Lundquist, Mark; Jui, Jonathan; Newgard, Craig D.; Warden, Craig
2014-01-01
Objective To derive and validate a model that accurately predicts ambulance arrival time that could be implemented as a Google Maps web application. Methods This was a retrospective study of all scene transports in Multnomah County, Oregon, from January 1 through December 31, 2008. Scene and destination hospital addresses were converted to coordinates. ArcGIS Network Analyst was used to estimate transport times based on street network speed limits. We then created a linear regression model to improve the accuracy of these street network estimates using weather, patient characteristics, use of lights and sirens, daylight, and rush-hour intervals. The model was derived from a 50% sample and validated on the remainder. Significance of the covariates was determined by p < 0.05 for a t-test of the model coefficients. Accuracy was quantified by the proportion of estimates that were within 5 minutes of the actual transport times recorded by computer-aided dispatch. We then built a Google Maps-based web application to demonstrate application in real-world EMS operations. Results There were 48,308 included transports. Street network estimates of transport time were accurate within 5 minutes of actual transport time less than 16% of the time. Actual transport times were longer during daylight and rush-hour intervals and shorter with use of lights and sirens. Age under 18 years, gender, wet weather, and trauma system entry were not significant predictors of transport time. Our model predicted arrival time within 5 minutes 73% of the time. For lights and sirens transports, accuracy was within 5 minutes 77% of the time. Accuracy was identical in the validation dataset. Lights and sirens saved an average of 3.1 minutes for transports under 8.8 minutes, and 5.3 minutes for longer transports. Conclusions An estimate of transport time based only on a street network significantly underestimated transport times. A simple model incorporating few variables can predict ambulance time of arrival to the emergency department with good accuracy. This model could be linked to global positioning system data and an automated Google Maps web application to optimize emergency department resource use. Use of lights and sirens had a significant effect on transport times. PMID:23865736
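A compact sketch of the derivation/validation split described above: the street-network estimate is corrected with a linear model using a few covariates, and accuracy is scored as the fraction of predictions within 5 minutes of the actual time. All variables and coefficients are synthetic; this is not the published model.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 2000
network_est = rng.uniform(4, 30, n)                 # minutes, from street-network speed limits
lights_sirens = rng.integers(0, 2, n)
rush_hour = rng.integers(0, 2, n)

# synthetic "actual" transport times: the network estimate systematically underestimates
actual = 1.3 * network_est + 2.0 * rush_hour - 3.0 * lights_sirens + rng.normal(0, 2.5, n)

X = np.column_stack([np.ones(n), network_est, lights_sirens, rush_hour])
derive = slice(0, n // 2)          # 50% derivation sample
validate = slice(n // 2, n)        # 50% validation sample

beta, *_ = np.linalg.lstsq(X[derive], actual[derive], rcond=None)
pred = X[validate] @ beta

within5 = np.mean(np.abs(pred - actual[validate]) <= 5.0)
raw_within5 = np.mean(np.abs(network_est[validate] - actual[validate]) <= 5.0)
print(f"raw network estimate within 5 min: {raw_within5:.2f}, corrected model: {within5:.2f}")
```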
Doubova, Svetlana V; Ramírez-Sánchez, Claudine; Figueroa-Lara, Alejandro; Pérez-Cuevas, Ricardo
2013-12-01
To estimate the requirements of human resources (HR) of two models of care for diabetes patients: conventional and specific, also called DiabetIMSS, which are provided in primary care clinics of the Mexican Institute of Social Security (IMSS). An evaluative study was conducted. An expert group identified the HR activities and time required to provide healthcare consistent with the best clinical practices for diabetic patients. HR were estimated by using the evidence-based adjusted service target approach for health workforce planning; comparisons between existing and estimated HR were then made. To provide healthcare in accordance with the patients' metabolic control, the conventional model required increasing the number of family doctors (1.2 times), nutritionists (4.2 times), and social workers (4.1 times). The DiabetIMSS model requires a greater increase than the conventional model. An increase in HR is required to provide evidence-based healthcare to diabetes patients.
O'Neill, William; Penn, Richard; Werner, Michael; Thomas, Justin
2015-06-01
Estimation of stochastic process models from data is a common application of time series analysis methods. Such system identification processes are often cast as hypothesis testing exercises whose intent is to estimate model parameters and test them for statistical significance. Ordinary least squares (OLS) regression and the Levenberg-Marquardt algorithm (LMA) have proven invaluable computational tools for models being described by non-homogeneous, linear, stationary, ordinary differential equations. In this paper we extend stochastic model identification to linear, stationary, partial differential equations in two independent variables (2D) and show that OLS and LMA apply equally well to these systems. The method employs an original nonparametric statistic as a test for the significance of estimated parameters. We show gray scale and color images are special cases of 2D systems satisfying a particular autoregressive partial difference equation which estimates an analogous partial differential equation. Several applications to medical image modeling and classification illustrate the method by correctly classifying demented and normal OLS models of axial magnetic resonance brain scans according to subject Mini Mental State Exam (MMSE) scores. Comparison with 13 image classifiers from the literature indicates our classifier is at least 14 times faster than any of them and has a classification accuracy better than all but one. Our modeling method applies to any linear, stationary, partial differential equation and the method is readily extended to 3D whole-organ systems. Further, in addition to being a robust image classifier, estimated image models offer insights into which parameters carry the most diagnostic image information and thereby suggest finer divisions could be made within a class. Image models can be estimated in milliseconds which translate to whole-organ models in seconds; such runtimes could make real-time medicine and surgery modeling possible.
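A toy version of the 2D autoregressive idea: each pixel is regressed on its left and upper neighbors by ordinary least squares, giving per-image coefficients that could serve as classification features. The first-order neighborhood and the synthetic image are assumptions; the paper's partial difference equation, the LMA fitting, and the nonparametric significance test are not reproduced here.

```python
import numpy as np

def ar2d_coeffs(img):
    """OLS fit of img[i, j] ~ b0 + b1*img[i, j-1] + b2*img[i-1, j]."""
    left = img[1:, :-1].ravel()
    up = img[:-1, 1:].ravel()
    center = img[1:, 1:].ravel()
    X = np.column_stack([np.ones_like(left), left, up])
    beta, *_ = np.linalg.lstsq(X, center, rcond=None)
    return beta

# synthetic "image": smooth correlated field plus noise
rng = np.random.default_rng(8)
field = rng.normal(size=(64, 64))
for _ in range(10):                                   # crude smoothing to induce spatial correlation
    field = 0.25 * (np.roll(field, 1, 0) + np.roll(field, -1, 0)
                    + np.roll(field, 1, 1) + np.roll(field, -1, 1))
img = field + rng.normal(0, 0.05, field.shape)

print("AR coefficients (intercept, left, up):", ar2d_coeffs(img).round(3))
```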
NASA Astrophysics Data System (ADS)
Husain, Hartina; Astuti Thamrin, Sri; Tahir, Sulaiha; Mukhlisin, Ahmad; Mirna Apriani, M.
2018-03-01
Breast cancer is one type of cancer that is a leading cause of death worldwide. This study aims to model the factors that affect the survival time and cure rate of breast cancer patients. The extended Cox model, which is a modification of the proportional hazards Cox model used when the proportional hazards assumptions are not met, is used in this study. The maximum likelihood estimation approach is used to estimate the parameters of the model. This method is then applied to medical record data of breast cancer patients from 2011-2016, taken from Hasanuddin University Education Hospital. The results obtained indicate that the factors that affect the survival time of breast cancer patients are malignancy and leukocyte levels.
NASA Technical Reports Server (NTRS)
Currit, P. A.
1983-01-01
The Cleanroom software development methodology is designed to take the gamble out of product releases for both suppliers and receivers of the software. The ingredients of this procedure are a life cycle of executable product increments, representative statistical testing, and a standard estimate of the MTTF (Mean Time To Failure) of the product at the time of its release. A statistical approach to software product testing using randomly selected samples of test cases is considered. A statistical model is defined for the certification process which uses the timing data recorded during test. A reasonableness argument for this model is provided that uses previously published data on software product execution. Also included is a derivation of the certification model estimators and a comparison of the proposed least squares technique with the more commonly used maximum likelihood estimators.
Background: Exposure measurement error in copollutant epidemiologic models has the potential to introduce bias in relative risk (RR) estimates. A simulation study was conducted using empirical data to quantify the impact of correlated measurement errors in time-series analyses of a...
Browne, William J; Steele, Fiona; Golalizadeh, Mousa; Green, Martin J
2009-06-01
We consider the application of Markov chain Monte Carlo (MCMC) estimation methods to random-effects models and in particular the family of discrete time survival models. Survival models can be used in many situations in the medical and social sciences and we illustrate their use through two examples that differ in terms of both substantive area and data structure. A multilevel discrete time survival analysis involves expanding the data set so that the model can be cast as a standard multilevel binary response model. For such models it has been shown that MCMC methods have advantages in terms of reducing estimate bias. However, the data expansion results in very large data sets for which MCMC estimation is often slow and can produce chains that exhibit poor mixing. Any way of improving the mixing will result in both speeding up the methods and more confidence in the estimates that are produced. The MCMC methodological literature is full of alternative algorithms designed to improve mixing of chains and we describe three reparameterization techniques that are easy to implement in available software. We consider two examples of multilevel survival analysis: incidence of mastitis in dairy cattle and contraceptive use dynamics in Indonesia. For each application we show where the reparameterization techniques can be used and assess their performance.
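The data-expansion step mentioned above can be sketched in a few lines: each subject's discrete survival time is expanded into one binary record per period at risk, after which any multilevel binary-response routine (likelihood-based or MCMC) can be applied. Column names and the toy data are illustrative.

```python
import pandas as pd

def expand_person_period(df, time_col="time", event_col="event"):
    """Expand discrete survival data into person-period binary-response format."""
    rows = []
    for _, r in df.iterrows():
        for t in range(1, int(r[time_col]) + 1):
            rows.append({
                "id": r["id"],
                "cluster": r["cluster"],          # e.g. cow within herd, woman within community
                "period": t,
                "y": int(t == r[time_col] and r[event_col] == 1),
            })
    return pd.DataFrame(rows)

surv = pd.DataFrame({
    "id": [1, 2, 3],
    "cluster": ["A", "A", "B"],
    "time": [3, 2, 4],        # discrete survival time (e.g. months until the event)
    "event": [1, 0, 1],       # 1 = event observed, 0 = censored
})
print(expand_person_period(surv))
```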
Statistical properties of Fourier-based time-lag estimates
NASA Astrophysics Data System (ADS)
Epitropakis, A.; Papadakis, I. E.
2016-06-01
Context. The study of X-ray time-lag spectra in active galactic nuclei (AGN) is currently an active research area, since it has the potential to illuminate the physics and geometry of the innermost region (i.e. close to the putative super-massive black hole) in these objects. To obtain reliable information from these studies, the statistical properties of time-lags estimated from data must be known as accurately as possible. Aims: We investigated the statistical properties of Fourier-based time-lag estimates (i.e. based on the cross-periodogram), using evenly sampled time series with no missing points. Our aim is to provide practical "guidelines" on estimating time-lags that are minimally biased (i.e. whose mean is close to their intrinsic value) and have known errors. Methods: Our investigation is based on both analytical work and extensive numerical simulations. The latter consisted of generating artificial time series with various signal-to-noise ratios and sampling patterns/durations similar to those offered by AGN observations with present and past X-ray satellites. We also considered a range of different model time-lag spectra commonly assumed in X-ray analyses of compact accreting systems. Results: Discrete sampling, binning and finite light curve duration cause the mean of the time-lag estimates to have a smaller magnitude than their intrinsic values. Smoothing (i.e. binning over consecutive frequencies) of the cross-periodogram can add extra bias at low frequencies. The use of light curves with low signal-to-noise ratio reduces the intrinsic coherence, and can introduce a bias to the sample coherence, time-lag estimates, and their predicted error. Conclusions: Our results have direct implications for X-ray time-lag studies in AGN, but can also be applied to similar studies in other research fields. We find that: a) time-lags should be estimated at frequencies lower than ≈ 1/2 the Nyquist frequency to minimise the effects of discrete binning of the observed time series; b) smoothing of the cross-periodogram should be avoided, as this may introduce significant bias to the time-lag estimates, which can be taken into account by assuming a model cross-spectrum (and not just a model time-lag spectrum); c) time-lags should be estimated by dividing observed time series into a number, say m, of shorter data segments and averaging the resulting cross-periodograms; d) if the data segments have a duration ≳ 20 ks, the time-lag bias is ≲15% of its intrinsic value for the model cross-spectra and power-spectra considered in this work. This bias should be estimated in practice (by considering possible intrinsic cross-spectra that may be applicable to the time-lag spectra at hand) to assess the reliability of any time-lag analysis; e) the effects of experimental noise can be minimised by only estimating time-lags in the frequency range where the sample coherence is larger than 1.2/(1 + 0.2m). In this range, the amplitude of noise variations caused by measurement errors is smaller than the amplitude of the signal's intrinsic variations. As long as m ≳ 20, time-lags estimated by averaging over individual data segments have analytical error estimates that are within 95% of the true scatter around their mean, and their distribution is similar, albeit not identical, to a Gaussian.
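A bare-bones version of recommendation (c) above, estimating time-lags by splitting the light curves into m segments and averaging their cross-periodograms, is sketched here. The simulated light curves and the constant 5 s intrinsic delay are assumptions for illustration, and no bias correction or coherence-based frequency selection is applied.

```python
import numpy as np

def time_lags(x, y, dt, m):
    """Average cross-periodograms over m equal segments; return (freq, lag in time units)."""
    n = len(x) // m
    cross = np.zeros(n // 2, dtype=complex)
    for k in range(m):
        xs = x[k * n:(k + 1) * n] - x[k * n:(k + 1) * n].mean()
        ys = y[k * n:(k + 1) * n] - y[k * n:(k + 1) * n].mean()
        Fx, Fy = np.fft.rfft(xs), np.fft.rfft(ys)
        cross += (Fx * np.conj(Fy))[1:n // 2 + 1]
    cross /= m
    freq = np.fft.rfftfreq(n, d=dt)[1:n // 2 + 1]
    return freq, np.angle(cross) / (2 * np.pi * freq)

# toy light curves: y lags x by 5 s, both with added measurement noise
rng = np.random.default_rng(9)
dt, npts, lag_true = 1.0, 2**14, 5
signal = np.convolve(rng.normal(size=npts + 100), np.ones(50) / 50, mode="same")
x = signal[:npts] + rng.normal(0, 0.05, npts)
y = np.roll(signal, lag_true)[:npts] + rng.normal(0, 0.05, npts)

freq, lags = time_lags(x, y, dt, m=32)
print("low-frequency lag estimates (s):", lags[:5].round(2))
```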
Quantifying and Mitigating the Effect of Preferential Sampling on Phylodynamic Inference
Karcher, Michael D.; Palacios, Julia A.; Bedford, Trevor; Suchard, Marc A.; Minin, Vladimir N.
2016-01-01
Phylodynamics seeks to estimate effective population size fluctuations from molecular sequences of individuals sampled from a population of interest. One way to accomplish this task formulates an observed sequence data likelihood exploiting a coalescent model for the sampled individuals’ genealogy and then integrating over all possible genealogies via Monte Carlo or, less efficiently, by conditioning on one genealogy estimated from the sequence data. However, when analyzing sequences sampled serially through time, current methods implicitly assume either that sampling times are fixed deterministically by the data collection protocol or that their distribution does not depend on the size of the population. Through simulation, we first show that, when sampling times do probabilistically depend on effective population size, estimation methods may be systematically biased. To correct for this deficiency, we propose a new model that explicitly accounts for preferential sampling by modeling the sampling times as an inhomogeneous Poisson process dependent on effective population size. We demonstrate that in the presence of preferential sampling our new model not only reduces bias, but also improves estimation precision. Finally, we compare the performance of the currently used phylodynamic methods with our proposed model through clinically-relevant, seasonal human influenza examples. PMID:26938243
Aralis, Hilary; Brookmeyer, Ron
2017-01-01
Multistate models provide an important method for analyzing a wide range of life history processes including disease progression and patient recovery following medical intervention. Panel data consisting of the states occupied by an individual at a series of discrete time points are often used to estimate transition intensities of the underlying continuous-time process. When transition intensities depend on the time elapsed in the current state and back transitions between states are possible, this intermittent observation process presents difficulties in estimation due to intractability of the likelihood function. In this manuscript, we present an iterative stochastic expectation-maximization algorithm that relies on a simulation-based approximation to the likelihood function and implement this algorithm using rejection sampling. In a simulation study, we demonstrate the feasibility and performance of the proposed procedure. We then demonstrate application of the algorithm to a study of dementia, the Nun Study, consisting of intermittently-observed elderly subjects in one of four possible states corresponding to intact cognition, impaired cognition, dementia, and death. We show that the proposed stochastic expectation-maximization algorithm substantially reduces bias in model parameter estimates compared to an alternative approach used in the literature, minimal path estimation. We conclude that in estimating intermittently observed semi-Markov models, the proposed approach is a computationally feasible and accurate estimation procedure that leads to substantial improvements in back transition estimates.
Parametric Model Based On Imputations Techniques for Partly Interval Censored Data
NASA Astrophysics Data System (ADS)
Zyoud, Abdallah; Elfaki, F. A. M.; Hrairi, Meftah
2017-12-01
The term ‘survival analysis’ has been used in a broad sense to describe a collection of statistical procedures for data analysis in which the outcome variable of interest is the time until an event occurs, and where the time to failure of a specific experimental unit may be censored: right, left, interval, or partly interval censored (PIC). In this paper, analysis of this model was conducted based on a parametric Cox model for PIC data. Moreover, several imputation techniques were used, namely: midpoint, left & right point, random, mean, and median. Maximum likelihood estimation was used to obtain the estimated survival function. These estimates were then compared with existing models, such as the Turnbull and Cox models, based on clinical trial data (breast cancer data), which showed the validity of the proposed model. Results from the data set indicated that the parametric Cox model was superior in terms of the estimation of survival functions, likelihood ratio tests, and their P-values. Moreover, among the imputation techniques, the midpoint, random, mean, and median imputations showed better results with respect to the estimation of the survival function.
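The imputation step can be illustrated with a short helper that converts interval-censored observations into single imputed times under the midpoint or random rules; exact intervals, left/right censoring conventions, the remaining rules, and the subsequent parametric Cox fit are simplified away. The toy intervals are invented.

```python
import numpy as np

rng = np.random.default_rng(10)

def impute_interval(left, right, method="midpoint"):
    """Impute a single event time for interval-censored observations [left, right]."""
    left, right = np.asarray(left, float), np.asarray(right, float)
    if method == "midpoint":
        return (left + right) / 2.0
    if method == "random":
        return rng.uniform(left, right)
    raise ValueError(method)

# toy partly interval-censored data: some times observed exactly (left == right)
left = np.array([2.0, 5.0, 5.0, 8.0, 1.0])
right = np.array([4.0, 5.0, 9.0, 12.0, 3.0])
for m in ("midpoint", "random"):
    print(m, impute_interval(left, right, m).round(2))
```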
A data mining framework for time series estimation.
Hu, Xiao; Xu, Peng; Wu, Shaozhi; Asgari, Shadnaz; Bergsneider, Marvin
2010-04-01
Time series estimation techniques are usually employed in biomedical research to derive variables less accessible from a set of related and more accessible variables. These techniques are traditionally built from systems modeling approaches including simulation, blind deconvolution, and state estimation. In this work, we define target time series (TTS) and its related time series (RTS) as the output and input of a time series estimation process, respectively. We then propose a novel data mining framework for time series estimation when TTS and RTS represent different sets of observed variables from the same dynamic system. This is made possible by mining a database of instances of TTS, its simultaneously recorded RTS, and the input/output dynamic models between them. The key mining strategy is to formulate a mapping function for each TTS-RTS pair in the database that translates a feature vector extracted from RTS to the dissimilarity between true TTS and its estimate from the dynamic model associated with the same TTS-RTS pair. At run time, a feature vector is extracted from an inquiry RTS and supplied to the mapping function associated with each TTS-RTS pair to calculate a dissimilarity measure. An optimal TTS-RTS pair is then selected by analyzing these dissimilarity measures. The associated input/output model of the selected TTS-RTS pair is then used to simulate the TTS given the inquiry RTS as an input. An exemplary implementation was built to address a biomedical problem of noninvasive intracranial pressure assessment. The performance of the proposed method was superior to that of a simple training-free approach of finding the optimal TTS-RTS pair by a conventional similarity-based search on RTS features. 2009 Elsevier Inc. All rights reserved.
Kalman filter approach for uncertainty quantification in time-resolved laser-induced incandescence.
Hadwin, Paul J; Sipkens, Timothy A; Thomson, Kevin A; Liu, Fengshan; Daun, Kyle J
2018-03-01
Time-resolved laser-induced incandescence (TiRe-LII) data can be used to infer spatially and temporally resolved volume fractions and primary particle size distributions of soot-laden aerosols, but these estimates are corrupted by measurement noise as well as uncertainties in the spectroscopic and heat transfer submodels used to interpret the data. Estimates of the temperature, concentration, and size distribution of soot primary particles within a sample aerosol are typically made by nonlinear regression of modeled spectral incandescence decay, or effective temperature decay, to experimental data. In this work, we employ nonstationary Bayesian estimation techniques to infer aerosol properties from simulated and experimental LII signals, specifically the extended Kalman filter and Schmidt-Kalman filter. These techniques exploit the time-varying nature of both the measurements and the models, and they reveal how uncertainty in the estimates computed from TiRe-LII data evolves over time. Both techniques perform better when compared with standard deterministic estimates; however, we demonstrate that the Schmidt-Kalman filter produces more realistic uncertainty estimates.
Stochastic parameter estimation in nonlinear time-delayed vibratory systems with distributed delay
NASA Astrophysics Data System (ADS)
Torkamani, Shahab; Butcher, Eric A.
2013-07-01
The stochastic estimation of parameters and states in linear and nonlinear time-delayed vibratory systems with distributed delay is explored. The approach consists of first employing a continuous time approximation to approximate the delayed integro-differential system with a large set of ordinary differential equations having stochastic excitations. Then the problem of state and parameter estimation in the resulting stochastic ordinary differential system is represented as an optimal filtering problem using a state augmentation technique. By adapting the extended Kalman-Bucy filter to the augmented filtering problem, the unknown parameters of the time-delayed system are estimated from noise-corrupted, possibly incomplete measurements of the states. Similarly, the upper bound of the distributed delay can also be estimated by the proposed technique. As an illustrative example to a practical problem in vibrations, the parameter, delay upper bound, and state estimation from noise-corrupted measurements in a distributed force model widely used for modeling machine tool vibrations in the turning operation is investigated.
Raj, Retheep; Sivanandan, K S
2017-01-01
Estimation of elbow dynamics has been the object of numerous investigations. In this work a solution is proposed for estimating elbow movement velocity and elbow joint angle from Surface Electromyography (SEMG) signals. Here the Surface Electromyography signals are acquired from the biceps brachii muscle of the human arm. Two time-domain parameters, Integrated EMG (IEMG) and Zero Crossing (ZC), are extracted from the Surface Electromyography signal. The relationship of the time-domain parameters, IEMG and ZC, with elbow angular displacement and elbow angular velocity during extension and flexion of the elbow is studied. A multiple input-multiple output model is derived for identifying the kinematics of the elbow. A Nonlinear Auto Regressive with eXogenous inputs (NARX) structure based multiple layer perceptron neural network (MLPNN) model is proposed for the estimation of elbow joint angle and elbow angular velocity. The proposed NARX MLPNN model is trained using a Levenberg-Marquardt based algorithm. The proposed model estimates the elbow joint angle and elbow movement angular velocity with appreciable accuracy. The model is validated using the regression coefficient value (R). The average regression coefficient value (R) obtained for elbow angular displacement prediction is 0.9641 and for elbow angular velocity prediction is 0.9347. The Nonlinear Auto Regressive with eXogenous inputs (NARX) structure based multiple layer perceptron neural network (MLPNN) model can be used for the estimation of angular displacement and movement angular velocity of the elbow with good accuracy.
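The two time-domain features named above are straightforward to compute from a windowed SEMG signal; a sketch with an invented window length, sampling rate, and noise threshold is shown below. The NARX network mapping these features to joint angle and velocity is not reproduced.

```python
import numpy as np

def iemg(window):
    """Integrated EMG: sum of absolute amplitudes over the window."""
    return np.sum(np.abs(window))

def zero_crossings(window, threshold=0.01):
    """Count sign changes whose amplitude step exceeds a small noise threshold."""
    s = window
    crossings = (s[:-1] * s[1:] < 0) & (np.abs(s[:-1] - s[1:]) > threshold)
    return int(np.sum(crossings))

# toy SEMG-like signal (1 kHz sampling), split into 200-sample windows
rng = np.random.default_rng(11)
t = np.arange(2000) / 1000.0
semg = rng.normal(0, 0.2, t.size) * (1 + np.sin(2 * np.pi * 0.5 * t))  # amplitude-modulated noise
windows = semg.reshape(-1, 200)

features = np.array([[iemg(w), zero_crossings(w)] for w in windows])
print(features[:3])   # rows of [IEMG, ZC], one per window
```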
Validation of drift and diffusion coefficients from experimental data
NASA Astrophysics Data System (ADS)
Riera, R.; Anteneodo, C.
2010-04-01
Many fluctuation phenomena, in physics and other fields, can be modeled by Fokker-Planck or stochastic differential equations whose coefficients, associated with drift and diffusion components, may be estimated directly from the observed time series. Its correct characterization is crucial to determine the system quantifiers. However, due to the finite sampling rates of real data, the empirical estimates may significantly differ from their true functional forms. In the literature, low-order corrections, or even no corrections, have been applied to the finite-time estimates. A frequent outcome consists of linear drift and quadratic diffusion coefficients. For this case, exact corrections have been recently found, from Itô-Taylor expansions. Nevertheless, model validation constitutes a necessary step before determining and applying the appropriate corrections. Here, we exploit the consequences of the exact theoretical results obtained for the linear-quadratic model. In particular, we discuss whether the observed finite-time estimates are actually a manifestation of that model. The relevance of this analysis is put into evidence by its application to two contrasting real data examples in which finite-time linear drift and quadratic diffusion coefficients are observed. In one case the linear-quadratic model is readily rejected while in the other, although the model constitutes a very good approximation, low-order corrections are inappropriate. These examples give warning signs about the proper interpretation of finite-time analysis even in more general diffusion processes.
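The finite-time estimates discussed above are usually computed from conditional moments of the increments; a minimal, uncorrected version for an Ornstein–Uhlenbeck-like process (linear drift, constant diffusion) is sketched below. The simulated process, bin edges, and simple binning are illustrative, and none of the finite-sampling corrections discussed in the paper are applied.

```python
import numpy as np

rng = np.random.default_rng(12)

# simulate an Ornstein-Uhlenbeck process: dx = -x dt + sqrt(2 D) dW
dt, n, D = 0.01, 100_000, 0.5
x = np.zeros(n)
for i in range(1, n):
    x[i] = x[i-1] - x[i-1] * dt + np.sqrt(2 * D * dt) * rng.normal()

# finite-time (uncorrected) drift and diffusion estimates from conditional moments
dx = np.diff(x)
bins = np.linspace(-2, 2, 21)
centers = 0.5 * (bins[:-1] + bins[1:])
idx = np.digitize(x[:-1], bins) - 1
drift = np.array([dx[idx == k].mean() / dt for k in range(len(centers))])
diff = np.array([(dx[idx == k] ** 2).mean() / (2 * dt) for k in range(len(centers))])

print("drift slope ~", np.polyfit(centers, drift, 1)[0].round(2), "(true -1)")
print("diffusion   ~", diff.mean().round(2), "(true", D, ")")
```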
Callahan, Melissa S; McPeek, Mark A
2016-01-01
Reconstructing evolutionary patterns of species and populations provides a framework for asking questions about the impacts of climate change. Here we use a multilocus dataset to estimate gene trees under maximum likelihood and Bayesian models to obtain a robust estimate of relationships for a genus of North American damselflies, Enallagma. Using a relaxed molecular clock, we estimate the divergence times for this group. Furthermore, to account for the fact that gene tree analyses can overestimate ages of population divergences, we use a multi-population coalescent model to gain a more accurate estimate of divergence times. We also infer diversification rates using a method that allows for variation in diversification rate through time and among lineages. Our results reveal a complex evolutionary history of Enallagma, in which divergence events both predate and occur during Pleistocene climate fluctuations. There is also evidence of diversification rate heterogeneity across the tree. These divergence time estimates provide a foundation for addressing the relative significance of historical climatic events in the diversification of this genus. Copyright © 2015 Elsevier Inc. All rights reserved.
Ouma, Paul O; Agutu, Nathan O; Snow, Robert W; Noor, Abdisalan M
2017-09-18
Precise quantification of health service utilisation is important for the estimation of disease burden and allocation of health resources. Current approaches to mapping health facility utilisation rely on spatial accessibility alone as the predictor. However, other spatially varying social, demographic and economic factors may affect the use of health services. The exclusion of these factors can lead to the inaccurate estimation of health facility utilisation. Here, we compare the accuracy of a univariate spatial model, developed only from estimated travel time, to a multivariate model that also includes relevant social, demographic and economic factors. A theoretical surface of travel time to the nearest public health facility was developed. These were assigned to each child reported to have had fever in the Kenya demographic and health survey of 2014 (KDHS 2014). The relationship of child treatment seeking for fever with travel time, household and individual factors from the KDHS 2014 was determined using multilevel mixed modelling. Bayesian information criterion (BIC) and likelihood ratio test (LRT) tests were carried out to measure how the selected factors improve parsimony and goodness of fit of the time model. Using the mixed model, a univariate spatial model of health facility utilisation was fitted using travel time as the predictor. The mixed model was also used to compute a multivariate spatial model of utilisation, using travel time and modelled surfaces of selected household and individual factors as predictors. The univariate and multivariate spatial models were then compared using the area under the receiver operating characteristic curve (AUC) and a percent correct prediction (PCP) test. The best fitting multivariate model had travel time, household wealth index and number of children in the household as the predictors. These factors reduced the BIC of the time model from 4008 to 2959, a change which was confirmed by the LRT test. Although there was a high correlation between the two modelled probability surfaces (adjusted R² = 88%), the multivariate model had a better AUC compared to the univariate model (0.83 versus 0.73) and a better PCP (0.61 versus 0.45). Our study shows that a model that uses travel time, as well as household and individual-level socio-demographic factors, results in a more accurate estimation of the use of health facilities for the treatment of childhood fever, compared to one that relies on travel time only.
Sun, Jianguo; Feng, Yanqin; Zhao, Hui
2015-01-01
Interval-censored failure time data occur in many fields including epidemiological and medical studies as well as financial and sociological studies, and many authors have investigated their analysis (Sun, The statistical analysis of interval-censored failure time data, 2006; Zhang, Stat Modeling 9:321-343, 2009). In particular, a number of procedures have been developed for regression analysis of interval-censored data arising from the proportional hazards model (Finkelstein, Biometrics 42:845-854, 1986; Huang, Ann Stat 24:540-568, 1996; Pan, Biometrics 56:199-203, 2000). For most of these procedures, however, one drawback is that they involve estimation of both regression parameters and baseline cumulative hazard function. In this paper, we propose two simple estimation approaches that do not need estimation of the baseline cumulative hazard function. The asymptotic properties of the resulting estimates are given, and an extensive simulation study is conducted and indicates that they work well for practical situations.
Sudell, Maria; Tudur Smith, Catrin; Gueyffier, François; Kolamunnage-Dona, Ruwanthi
2018-04-15
Joint modelling of longitudinal and time-to-event data is often preferred over separate longitudinal or time-to-event analyses as it can account for study dropout, error in longitudinally measured covariates, and correlation between longitudinal and time-to-event outcomes. The joint modelling literature focuses mainly on the analysis of single studies with no methods currently available for the meta-analysis of joint model estimates from multiple studies. We propose a 2-stage method for meta-analysis of joint model estimates. These methods are applied to the INDANA dataset to combine joint model estimates of systolic blood pressure with time to death, time to myocardial infarction, and time to stroke. Results are compared to meta-analyses of separate longitudinal or time-to-event models. A simulation study is conducted to contrast separate versus joint analyses over a range of scenarios. Using the real dataset, similar results were obtained by using the separate and joint analyses. However, the simulation study indicated a benefit of use of joint rather than separate methods in a meta-analytic setting where association exists between the longitudinal and time-to-event outcomes. Where evidence of association between longitudinal and time-to-event outcomes exists, results from joint models over standalone analyses should be pooled in 2-stage meta-analyses. © 2017 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Nüske, Feliks; Wu, Hao; Prinz, Jan-Hendrik; Wehmeyer, Christoph; Clementi, Cecilia; Noé, Frank
2017-03-01
Many state-of-the-art methods for the thermodynamic and kinetic characterization of large and complex biomolecular systems by simulation rely on ensemble approaches, where data from large numbers of relatively short trajectories are integrated. In this context, Markov state models (MSMs) are extremely popular because they can be used to compute stationary quantities and long-time kinetics from ensembles of short simulations, provided that these short simulations are in "local equilibrium" within the MSM states. However, over the last 15 years since the inception of MSMs, it has been controversially discussed and not yet been answered how deviations from local equilibrium can be detected, whether these deviations induce a practical bias in MSM estimation, and how to correct for them. In this paper, we address these issues: We systematically analyze the estimation of MSMs from short non-equilibrium simulations, and we provide an expression for the error between unbiased transition probabilities and the expected estimate from many short simulations. We show that the unbiased MSM estimate can be obtained even from relatively short non-equilibrium simulations in the limit of long lag times and good discretization. Further, we exploit observable operator model (OOM) theory to derive an unbiased estimator for the MSM transition matrix that corrects for the effect of starting out of equilibrium, even when short lag times are used. Finally, we show how the OOM framework can be used to estimate the exact eigenvalues or relaxation time scales of the system without estimating an MSM transition matrix, which allows us to practically assess the discretization quality of the MSM. Applications to model systems and molecular dynamics simulation data of alanine dipeptide are included for illustration. The improved MSM estimator is implemented in PyEMMA of version 2.3.
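A bare-bones MSM estimation step, for orientation: transition counts at a chosen lag time are accumulated from a discretized trajectory and row-normalized, and implied relaxation timescales are read off the eigenvalues. This is the naive (possibly biased, non-reversible) estimator whose shortcomings the OOM-based correction in the paper addresses; the three-state toy trajectory is invented and this is not PyEMMA code.

```python
import numpy as np

def estimate_msm(dtraj, n_states, lag):
    """Naive MSM estimate: count transitions at the given lag and row-normalize."""
    C = np.zeros((n_states, n_states))
    for i, j in zip(dtraj[:-lag], dtraj[lag:]):
        C[i, j] += 1
    return C / C.sum(axis=1, keepdims=True)

def implied_timescales(T, lag):
    """Relaxation timescales from the non-unit eigenvalues of the transition matrix."""
    evals = np.sort(np.abs(np.linalg.eigvals(T)))[::-1]
    return -lag / np.log(evals[1:])

# toy 3-state Markov chain (stands in for a discretized MD trajectory)
rng = np.random.default_rng(13)
P_true = np.array([[0.95, 0.04, 0.01],
                   [0.03, 0.90, 0.07],
                   [0.01, 0.09, 0.90]])
dtraj = [0]
for _ in range(50_000):
    dtraj.append(rng.choice(3, p=P_true[dtraj[-1]]))
dtraj = np.array(dtraj)

T_hat = estimate_msm(dtraj, 3, lag=5)
print("implied timescales (steps):", implied_timescales(T_hat, lag=5).round(1))
```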
2013-01-01
Background When mathematical modelling is applied to many different application areas, a common task is the estimation of states and parameters based on measurements. With this kind of inference making, uncertainties in the time when the measurements have been taken are often neglected, but especially in applications taken from the life sciences, this kind of errors can considerably influence the estimation results. As an example in the context of personalized medicine, the model-based assessment of the effectiveness of drugs is becoming to play an important role. Systems biology may help here by providing good pharmacokinetic and pharmacodynamic (PK/PD) models. Inference on these systems based on data gained from clinical studies with several patient groups becomes a major challenge. Particle filters are a promising approach to tackle these difficulties but are by itself not ready to handle uncertainties in measurement times. Results In this article, we describe a variant of the standard particle filter (PF) algorithm which allows state and parameter estimation with the inclusion of measurement time uncertainties (MTU). The modified particle filter, which we call MTU-PF, also allows the application of an adaptive stepsize choice in the time-continuous case to avoid degeneracy problems. The modification is based on the model assumption of uncertain measurement times. While the assumption of randomness in the measurements themselves is common, the corresponding measurement times are generally taken as deterministic and exactly known. Especially in cases where the data are gained from measurements on blood or tissue samples, a relatively high uncertainty in the true measurement time seems to be a natural assumption. Our method is appropriate in cases where relatively few data are used from a relatively large number of groups or individuals, which introduce mixed effects in the model. This is a typical setting of clinical studies. We demonstrate the method on a small artificial example and apply it to a mixed effects model of plasma-leucine kinetics with data from a clinical study which included 34 patients. Conclusions Comparisons of our MTU-PF with the standard PF and with an alternative Maximum Likelihood estimation method on the small artificial example clearly show that the MTU-PF obtains better estimations. Considering the application to the data from the clinical study, the MTU-PF shows a similar performance with respect to the quality of estimated parameters compared with the standard particle filter, but besides that, the MTU algorithm shows to be less prone to degeneration than the standard particle filter. PMID:23331521
Inferring invasive species abundance using removal data from management actions
Davis, Amy J.; Hooten, Mevin B.; Miller, Ryan S.; Farnsworth, Matthew L.; Lewis, Jesse S.; Moxcey, Michael; Pepin, Kim M.
2016-01-01
Evaluation of the progress of management programs for invasive species is crucial for demonstrating impacts to stakeholders and strategic planning of resource allocation. Estimates of abundance before and after management activities can serve as a useful metric of population management programs. However, many methods of estimating population size are too labor intensive and costly to implement, posing restrictive levels of burden on operational programs. Removal models are a reliable method for estimating abundance before and after management using data from the removal activities exclusively, thus requiring no work in addition to management. We developed a Bayesian hierarchical model to estimate abundance from removal data accounting for varying levels of effort, and used simulations to assess the conditions under which reliable population estimates are obtained. We applied this model to estimate site-specific abundance of an invasive species, feral swine (Sus scrofa), using removal data from aerial gunning in 59 site/time-frame combinations (480–19,600 acres) throughout Oklahoma and Texas, USA. Simulations showed that abundance estimates were generally accurate when effective removal rates (removal rate accounting for total effort) were above 0.40. However, when abundances were small (<50), the effective removal rate needed to accurately estimate abundances was considerably higher (0.70). Based on our post-validation method, 78% of our site/time-frame estimates were accurate. To use this modeling framework it is important to have multiple removals (more than three) within a time frame during which demographic changes are minimized (i.e., a closed population; ≤3 months for feral swine). Our results show that the probability of accurately estimating abundance from this model improves with increased sampling effort (8+ flight hours across the 3-month window is best) and increased removal rate. Based on the inverse relationship between inaccurate abundances and inaccurate removal rates, we suggest auxiliary information that could be collected and included in the model as covariates (e.g., habitat effects, differences between pilots) to improve accuracy of removal rates and hence abundance estimates.
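A minimal constant-effort removal estimator illustrates the core idea of inferring abundance from removal counts alone: the number removed on each occasion is treated as a binomial draw from the animals remaining, and the abundance N and removal rate are estimated by maximum likelihood. The hierarchical structure, varying effort, covariates, and priors of the actual model are omitted; the counts are invented.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

removals = np.array([62, 38, 21, 13])   # toy counts from four successive removal occasions

def neg_log_lik(params):
    """Constant-probability removal model: c_i ~ Binomial(N - removed so far, p)."""
    N = np.exp(params[0])                       # abundance, kept positive via the log scale
    p = 1.0 / (1.0 + np.exp(-params[1]))        # per-occasion removal probability
    remaining, ll = N, 0.0
    for c in removals:
        if remaining < c:                       # parameter values inconsistent with the data
            return 1e10
        ll += (gammaln(remaining + 1) - gammaln(c + 1) - gammaln(remaining - c + 1)
               + c * np.log(p) + (remaining - c) * np.log(1 - p))
        remaining -= c
    return -ll

fit = minimize(neg_log_lik, x0=[np.log(200.0), 0.0], method="Nelder-Mead")
N_hat, p_hat = np.exp(fit.x[0]), 1.0 / (1.0 + np.exp(-fit.x[1]))
print(f"estimated abundance: {N_hat:.0f}, per-occasion removal rate: {p_hat:.2f}")
```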
Craig, Benjamin M; Busschbach, Jan JV
2009-01-01
Background: To present an episodic random utility model that unifies time trade-off and discrete choice approaches in health state valuation. Methods: First, we introduce two alternative random utility models (RUMs) for health preferences: the episodic RUM and the more common instant RUM. For the interpretation of time trade-off (TTO) responses, we show that the episodic model implies a coefficient estimator, and the instant model implies a mean slope estimator. Secondly, we demonstrate these estimators and the differences between the estimates for 42 health states using TTO responses from the seminal Measurement and Valuation in Health (MVH) study conducted in the United Kingdom. Mean slopes are estimated with and without Dolan's transformation of worse-than-death (WTD) responses. Finally, we demonstrate an exploded probit estimator, an extension of the coefficient estimator for discrete choice data that accommodates both TTO and rank responses. Results: By construction, mean slopes are less than or equal to coefficients, because slopes are fractions and, therefore, magnify downward errors in WTD responses. The Dolan transformation of WTD responses causes mean slopes to increase in similarity to coefficient estimates, yet they are not equivalent (i.e., absolute mean difference = 0.179). Unlike mean slopes, coefficient estimates demonstrate strong concordance with rank-based predictions (Lin's rho = 0.91). Combining TTO and rank responses under the exploded probit model improves the identification of health state values, decreasing the average width of confidence intervals from 0.057 to 0.041 compared to TTO-only results. Conclusion: The episodic RUM expands upon the theoretical framework underlying health state valuation and contributes to health econometrics by motivating the selection of coefficient and exploded probit estimators for the analysis of TTO and rank responses. In future MVH surveys, sample size requirements may be reduced through the incorporation of multiple responses under a single estimator. PMID:19144115
Hagar, Yolanda C; Harvey, Danielle J; Beckett, Laurel A
2016-08-30
We develop a multivariate cure survival model to estimate lifetime patterns of colorectal cancer screening. Screening data cover long periods of time, with sparse observations for each person. Some events may occur before the study begins or after the study ends, so the data are both left-censored and right-censored, and some individuals are never screened (the 'cured' population). We propose a multivariate parametric cure model that can be used with left-censored and right-censored data. Our model allows for the estimation of the time to screening as well as the average number of times individuals will be screened. We calculate likelihood functions based on the observations for each subject using a distribution that accounts for within-subject correlation and estimate parameters using Markov chain Monte Carlo methods. We apply our methods to the estimation of lifetime colorectal cancer screening behavior in the SEER-Medicare data set. Copyright © 2016 John Wiley & Sons, Ltd.
Jang, Cheongjae; Ha, Junhyoung; Dupont, Pierre E.; Park, Frank Chongwoo
2017-01-01
Although existing mechanics-based models of concentric tube robots have been experimentally demonstrated to approximate the actual kinematics, determining accurate estimates of model parameters remains difficult due to the complex relationship between the parameters and available measurements. Further, because the mechanics-based models neglect some phenomena like friction, nonlinear elasticity, and cross section deformation, it is also not clear if model error is due to model simplification or to parameter estimation errors. The parameters of the superelastic materials used in these robots can be slowly time-varying, necessitating periodic re-estimation. This paper proposes a method for estimating the mechanics-based model parameters using an extended Kalman filter as a step toward on-line parameter estimation. Our methodology is validated through both simulation and experiments. PMID:28717554
Indirect rotor position sensing in real time for brushless permanent magnet motor drives
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ertugrul, N.; Acarnley, P.P.
1998-07-01
This paper describes a modern solution to real-time rotor position estimation of brushless permanent magnet (PM) motor drives. The position estimation scheme, based on flux linkage and line-current estimation, is implemented in real time by using the abc reference frame, and it is tested dynamically. The position estimation model of the test motor, development of hardware, and basic operation of the digital signal processor (DSP) are discussed. The overall position estimation strategy is accomplished with a fast DSP (TMS320C30). The method is a shaft position sensorless method that is applicable to a wide range of excitation types in brushless PM motors without any restriction on the motor model and the current excitation. Both rectangular and sinewave-excited brushless PM motor drives are examined, and the results are given to demonstrate the effectiveness of the method with dynamic loads in a closed estimated-position loop.
Almirall, Daniel; Griffin, Beth Ann; McCaffrey, Daniel F.; Ramchand, Rajeev; Yuen, Robert A.; Murphy, Susan A.
2014-01-01
This article considers the problem of examining time-varying causal effect moderation using observational, longitudinal data in which treatment, candidate moderators, and possible confounders are time varying. The structural nested mean model (SNMM) is used to specify the moderated time-varying causal effects of interest in a conditional mean model for a continuous response given time-varying treatments and moderators. We present an easy-to-use estimator of the SNMM that combines an existing regression-with-residuals (RR) approach with an inverse-probability-of-treatment weighting (IPTW) strategy. The RR approach has been shown to identify the moderated time-varying causal effects if the time-varying moderators are also the sole time-varying confounders. The proposed IPTW+RR approach provides estimators of the moderated time-varying causal effects in the SNMM in the presence of an additional, auxiliary set of known and measured time-varying confounders. We use a small simulation experiment to compare IPTW+RR versus the traditional regression approach and to compare small and large sample properties of asymptotic versus bootstrap estimators of the standard errors for the IPTW+RR approach. This article clarifies the distinction between time-varying moderators and time-varying confounders. We illustrate the methodology in a case study to assess if time-varying substance use moderates treatment effects on future substance use. PMID:23873437
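The IPTW ingredient of the IPTW+RR estimator can be sketched with a per-wave propensity model and stabilized weights. The logistic models, covariates, and two-wave structure below are invented for illustration, and the regression-with-residuals step of the combined estimator is not shown.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(14)
n = 5000

# wave-1 confounder and treatment, then a wave-2 confounder affected by earlier treatment
x1 = rng.normal(size=n)
a1 = rng.binomial(1, 1 / (1 + np.exp(-0.8 * x1)))
x2 = 0.5 * x1 + 0.5 * a1 + rng.normal(size=n)
a2 = rng.binomial(1, 1 / (1 + np.exp(-0.8 * x2 - 0.5 * a1)))

def stabilized_weights(a, X_denom, X_num):
    """Stabilized IPTW weights: P(A | past treatment) / P(A | past treatment, confounders)."""
    p_denom = LogisticRegression().fit(X_denom, a).predict_proba(X_denom)[:, 1]
    p_num = LogisticRegression().fit(X_num, a).predict_proba(X_num)[:, 1]
    return np.where(a == 1, p_num / p_denom, (1 - p_num) / (1 - p_denom))

w1 = stabilized_weights(a1, X_denom=x1.reshape(-1, 1), X_num=np.ones((n, 1)))
w2 = stabilized_weights(a2, np.column_stack([x2, a1]), a1.reshape(-1, 1))
weights = w1 * w2                      # cumulative weight over the two waves
print("mean weight (should be near 1):", weights.mean().round(3))
```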
Joint Estimation of Time-Frequency Signature and DOA Based on STFD for Multicomponent Chirp Signals
Zhao, Ziyue; Liu, Congfeng
2014-01-01
In the study of the joint estimation of time-frequency signature and direction of arrival (DOA) for multicomponent chirp signals, an estimation method based on spatial time-frequency distributions (STFDs) is proposed in this paper. Firstly, array signal model for multicomponent chirp signals is presented and then array processing is applied in time-frequency analysis to mitigate cross-terms. According to the results of the array processing, Hough transform is performed and the estimation of time-frequency signature is obtained. Subsequently, subspace method for DOA estimation based on STFD matrix is achieved. Simulation results demonstrate the validity of the proposed method. PMID:27382610
Joeng, Hee-Koung; Chen, Ming-Hui; Kang, Sangwook
2015-01-01
Discrete survival data are routinely encountered in many fields of study including behavior science, economics, epidemiology, medicine, and social science. In this paper, we develop a class of proportional exponentiated link transformed hazards (ELTH) models. We carry out a detailed examination of the role of links in fitting discrete survival data and estimating regression coefficients. Several interesting results are established regarding the choice of links and baseline hazards. We also characterize the conditions for improper survival functions and the conditions for existence of the maximum likelihood estimates under the proposed ELTH models. An extensive simulation study is conducted to examine the empirical performance of the parameter estimates under the Cox proportional hazards model by treating discrete survival times as continuous survival times, and the model comparison criteria, AIC and BIC, in determining links and baseline hazards. A SEER breast cancer dataset is analyzed in detail to further demonstrate the proposed methodology. PMID:25772374
NASA Technical Reports Server (NTRS)
Hartman, Brian Davis
1995-01-01
A key drawback to estimating geodetic and geodynamic parameters over time based on satellite laser ranging (SLR) observations is the inability to accurately model all the forces acting on the satellite. Errors associated with the observations and the measurement model can detract from the estimates as well. These 'model errors' corrupt the solutions obtained from the satellite orbit determination process. Dynamical models for satellite motion utilize known geophysical parameters to mathematically detail the forces acting on the satellite. However, these parameters, while estimated as constants, vary over time. These temporal variations must be accounted for in some fashion to maintain meaningful solutions. The primary goal of this study is to analyze the feasibility of using a sequential process noise filter for estimating geodynamic parameters over time from the Laser Geodynamics Satellite (LAGEOS) SLR data. This evaluation is achieved by first simulating a sequence of realistic LAGEOS laser ranging observations. These observations are generated using models with known temporal variations in several geodynamic parameters (along track drag and the J(sub 2), J(sub 3), J(sub 4), and J(sub 5) geopotential coefficients). A standard (non-stochastic) filter and a stochastic process noise filter are then utilized to estimate the model parameters from the simulated observations. The standard non-stochastic filter estimates these parameters as constants over consecutive fixed time intervals. Thus, the resulting solutions contain constant estimates of parameters that vary in time which limits the temporal resolution and accuracy of the solution. The stochastic process noise filter estimates these parameters as correlated process noise variables. As a result, the stochastic process noise filter has the potential to estimate the temporal variations more accurately since the constraint of estimating the parameters as constants is eliminated. A comparison of the temporal resolution of solutions obtained from standard sequential filtering methods and process noise sequential filtering methods shows that the accuracy is significantly improved using process noise. The results show that the positional accuracy of the orbit is improved as well. The temporal resolution of the resulting solutions is detailed, and conclusions are drawn about the results. Benefits and drawbacks of using process noise filtering in this type of scenario are also identified.
Estimation of Unsteady Aerodynamic Models from Dynamic Wind Tunnel Data
NASA Technical Reports Server (NTRS)
Murphy, Patrick; Klein, Vladislav
2011-01-01
Demanding aerodynamic modelling requirements for military and civilian aircraft have motivated researchers to improve computational and experimental techniques and to pursue closer collaboration in these areas. Model identification and validation techniques are key components for this research. This paper presents mathematical model structures and identification techniques that have been used successfully to model more general aerodynamic behaviours in single-degree-of-freedom dynamic testing. Model parameters, characterizing aerodynamic properties, are estimated using linear and nonlinear regression methods in both time and frequency domains. Steps in identification including model structure determination, parameter estimation, and model validation, are addressed in this paper with examples using data from one-degree-of-freedom dynamic wind tunnel and water tunnel experiments. These techniques offer a methodology for expanding the utility of computational methods in application to flight dynamics, stability, and control problems. Since flight test is not always an option for early model validation, time history comparisons are commonly made between computational and experimental results and model adequacy is inferred by corroborating results. An extension is offered to this conventional approach where more general model parameter estimates and their standard errors are compared.
Regression analysis of sparse asynchronous longitudinal data
Cao, Hongyuan; Zeng, Donglin; Fine, Jason P.
2015-01-01
Summary We consider estimation of regression models for sparse asynchronous longitudinal observations, where time-dependent responses and covariates are observed intermittently within subjects. Unlike with synchronous data, where the response and covariates are observed at the same time point, with asynchronous data, the observation times are mismatched. Simple kernel-weighted estimating equations are proposed for generalized linear models with either time invariant or time-dependent coefficients under smoothness assumptions for the covariate processes which are similar to those for synchronous data. For models with either time invariant or time-dependent coefficients, the estimators are consistent and asymptotically normal but converge at slower rates than those achieved with synchronous data. Simulation studies evidence that the methods perform well with realistic sample sizes and may be superior to a naive application of methods for synchronous data based on an ad hoc last value carried forward approach. The practical utility of the methods is illustrated on data from a study on human immunodeficiency virus. PMID:26568699
Quantile Regression Models for Current Status Data
Ou, Fang-Shu; Zeng, Donglin; Cai, Jianwen
2016-01-01
Current status data arise frequently in demography, epidemiology, and econometrics where the exact failure time cannot be determined but is only known to have occurred before or after a known observation time. We propose a quantile regression model to analyze current status data, because it does not require distributional assumptions and the coefficients can be interpreted as direct regression effects on the distribution of failure time in the original time scale. Our model assumes that the conditional quantile of failure time is a linear function of covariates. We assume conditional independence between the failure time and observation time. An M-estimator is developed for parameter estimation which is computed using the concave-convex procedure and its confidence intervals are constructed using a subsampling method. Asymptotic properties for the estimator are derived and proven using modern empirical process theory. The small sample performance of the proposed method is demonstrated via simulation studies. Finally, we apply the proposed method to analyze data from the Mayo Clinic Study of Aging. PMID:27994307
Linear estimation of coherent structures in wall-bounded turbulence at Reτ = 2000
NASA Astrophysics Data System (ADS)
Oehler, S.; Garcia–Gutiérrez, A.; Illingworth, S.
2018-04-01
The estimation problem for a fully-developed turbulent channel flow at Reτ = 2000 is considered. Specifically, a Kalman filter is designed using a Navier–Stokes-based linear model. The estimator uses time-resolved velocity measurements at a single wall-normal location (provided by DNS) to estimate the time-resolved velocity field at other wall-normal locations. The estimator is able to reproduce the largest scales with reasonable accuracy for a range of wavenumber pairs, measurement locations and estimation locations. Importantly, the linear model is also able to predict with reasonable accuracy the performance that will be achieved by the estimator when applied to the DNS. A more practical estimation scheme using the shear stress at the wall as measurement is also considered. The estimator is still able to estimate the largest scales with reasonable accuracy, although the estimator’s performance is reduced.
Halliday, David M; Senik, Mohd Harizal; Stevenson, Carl W; Mason, Rob
2016-08-01
The ability to infer network structure from multivariate neuronal signals is central to computational neuroscience. Directed network analyses typically use parametric approaches based on auto-regressive (AR) models, where networks are constructed from estimates of AR model parameters. However, the validity of using low order AR models for neurophysiological signals has been questioned. A recent article introduced a non-parametric approach to estimate directionality in bivariate data; non-parametric approaches are free from concerns over model validity. We extend the non-parametric framework to include measures of directed conditional independence, using scalar measures that decompose the overall partial correlation coefficient summatively by direction, and a set of functions that decompose the partial coherence summatively by direction. A time domain partial correlation function allows both time and frequency views of the data to be constructed. The conditional independence estimates are conditioned on a single predictor. The framework is applied to simulated cortical neuron networks and mixtures of Gaussian time series data with known interactions. It is applied to experimental data consisting of local field potential recordings from bilateral hippocampus in anaesthetised rats. The framework offers a novel non-parametric alternative for estimating directed interactions in multivariate neuronal recordings, with increased flexibility in dealing with both spike train and time series data. Copyright © 2016 Elsevier B.V. All rights reserved.
Innovative methods for calculation of freeway travel time using limited data : final report.
DOT National Transportation Integrated Search
2008-01-01
Description: Travel time estimations created by processing of simulated freeway loop detector data using proposed method have been compared with travel times reported from VISSIM model. An improved methodology was proposed to estimate freeway corrido...
Bernard, Olivier; Alata, Olivier; Francaux, Marc
2006-03-01
Modeling in the time domain, the non-steady-state O2 uptake on-kinetics of high-intensity exercises with empirical models is commonly performed with gradient-descent-based methods. However, these procedures may impair the confidence of the parameter estimation when the modeling functions are not continuously differentiable and when the estimation corresponds to an ill-posed problem. To cope with these problems, an implementation of simulated annealing (SA) methods was compared with the GRG2 algorithm (a gradient-descent method known for its robustness). Forty simulated Vo2 on-responses were generated to mimic the real time course for transitions from light- to high-intensity exercises, with a signal-to-noise ratio equal to 20 dB. They were modeled twice with a discontinuous double-exponential function using both estimation methods. GRG2 significantly biased two estimated kinetic parameters of the first exponential (the time delay td1 and the time constant tau1) and impaired the precision (i.e., standard deviation) of the baseline A0, td1, and tau1 compared with SA. SA significantly improved the precision of the three parameters of the second exponential (the asymptotic increment A2, the time delay td2, and the time constant tau2). Nevertheless, td2 was significantly biased by both procedures, and the large confidence intervals of the whole second component parameters limit their interpretation. To compare both algorithms on experimental data, 26 subjects each performed two transitions from 80 W to 80% maximal O2 uptake on a cycle ergometer and O2 uptake was measured breath by breath. More than 88% of the kinetic parameter estimations done with the SA algorithm produced the lowest residual sum of squares between the experimental data points and the model. Repeatability coefficients were better with GRG2 for A1 although better with SA for A2 and tau2. Our results demonstrate that the implementation of SA improves significantly the estimation of most of these kinetic parameters, but a large inaccuracy remains in estimating the parameter values of the second exponential.
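As a sketch of this kind of fit, the code below generates a synthetic on-response from a discontinuous double-exponential model and recovers the parameters with a global, simulated-annealing-style optimizer. scipy's dual_annealing is used as a stand-in for the SA implementation in the study, and all parameter values, bounds, and noise levels are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import dual_annealing

# Minimal sketch of fitting a discontinuous double-exponential on-response with
# a simulated-annealing-style global optimizer; synthetic data only.

def vo2_model(t, A0, A1, td1, tau1, A2, td2, tau2):
    y = np.full_like(t, A0, dtype=float)
    y += np.where(t >= td1, A1 * (1 - np.exp(-(t - td1) / tau1)), 0.0)
    y += np.where(t >= td2, A2 * (1 - np.exp(-(t - td2) / tau2)), 0.0)
    return y

rng = np.random.default_rng(1)
t = np.arange(0, 360, 5.0)                               # seconds
true = (0.9, 1.8, 15.0, 25.0, 0.4, 120.0, 90.0)          # A0 A1 td1 tau1 A2 td2 tau2
y_obs = vo2_model(t, *true) + rng.normal(0, 0.05, t.size)

def sse(p):
    # residual sum of squares between observed and modeled response
    return np.sum((y_obs - vo2_model(t, *p)) ** 2)

bounds = [(0.5, 1.5), (0.5, 3.0), (0.0, 60.0), (5.0, 60.0),
          (0.0, 1.5), (60.0, 240.0), (20.0, 300.0)]
result = dual_annealing(sse, bounds, seed=2)
print("estimated parameters:", np.round(result.x, 2))
```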
A Bayesian Approach for Summarizing and Modeling Time-Series Exposure Data with Left Censoring.
Houseman, E Andres; Virji, M Abbas
2017-08-01
Direct reading instruments are valuable tools for measuring exposure as they provide real-time measurements for rapid decision making. However, their use is limited to general survey applications in part due to issues related to their performance. Moreover, statistical analysis of real-time data is complicated by autocorrelation among successive measurements, non-stationary time series, and the presence of left-censoring due to limit-of-detection (LOD). A Bayesian framework is proposed that accounts for non-stationary autocorrelation and LOD issues in exposure time-series data in order to model workplace factors that affect exposure and estimate summary statistics for tasks or other covariates of interest. A spline-based approach is used to model non-stationary autocorrelation with relatively few assumptions about autocorrelation structure. Left-censoring is addressed by integrating over the left tail of the distribution. The model is fit using Markov-Chain Monte Carlo within a Bayesian paradigm. The method can flexibly account for hierarchical relationships, random effects and fixed effects of covariates. The method is implemented using the rjags package in R, and is illustrated by applying it to real-time exposure data. Estimates for task means and covariates from the Bayesian model are compared to those from conventional frequentist models including linear regression, mixed-effects, and time-series models with different autocorrelation structures. Simulation studies are also conducted to evaluate method performance. Simulation studies with percent of measurements below the LOD ranging from 0 to 50% showed the lowest root mean squared errors for task means and the least biased standard deviations from the Bayesian model compared to the frequentist models across all levels of LOD. In the application, task means from the Bayesian model were similar to means from the frequentist models, while the standard deviations were different. Parameter estimates for covariates were significant in some frequentist models, but in the Bayesian model their credible intervals contained zero; such discrepancies were observed in multiple datasets. Variance components from the Bayesian model reflected substantial autocorrelation, consistent with the frequentist models, except for the auto-regressive moving average model. Plots of means from the Bayesian model showed good fit to the observed data. The proposed Bayesian model provides an approach for modeling non-stationary autocorrelation in a hierarchical modeling framework to estimate task means, standard deviations, quantiles, and parameter estimates for covariates that are less biased and have better performance characteristics than some of the contemporary methods. Published by Oxford University Press on behalf of the British Occupational Hygiene Society 2017.
Thornton, P. K.; Bowen, W. T.; Ravelo, A.C.; Wilkens, P. W.; Farmer, G.; Brock, J.; Brink, J. E.
1997-01-01
Early warning of impending poor crop harvests in highly variable environments can allow policy makers the time they need to take appropriate action to ameliorate the effects of regional food shortages on vulnerable rural and urban populations. Crop production estimates for the current season can be obtained using crop simulation models and remotely sensed estimates of rainfall in real time, embedded in a geographic information system that allows simple analysis of simulation results. A prototype yield estimation system was developed for the thirty provinces of Burkina Faso. It is based on CERES-Millet, a crop simulation model of the growth and development of millet (Pennisetum spp.). The prototype was used to estimate millet production in contrasting seasons and to derive production anomaly estimates for the 1986 season. Provincial yields simulated halfway through the growing season were generally within 15% of their final (end-of-season) values. Although more work is required to produce an operational early warning system of reasonable credibility, the methodology has considerable potential for providing timely estimates of regional production of the major food crops in countries of sub-Saharan Africa.
Payn, Robert A.; Hall, Robert O Jr.; Kennedy, Theodore A.; Poole, Geoff C; Marshall, Lucy A.
2017-01-01
Conventional methods for estimating whole-stream metabolic rates from measured dissolved oxygen dynamics do not account for the variation in solute transport times created by dynamic flow conditions. Changes in flow at hourly time scales are common downstream of hydroelectric dams (i.e. hydropeaking), and hydrologic limitations of conventional metabolic models have resulted in a poor understanding of the controls on biological production in these highly managed river ecosystems. To overcome these limitations, we coupled a two-station metabolic model of dissolved oxygen dynamics with a hydrologic river routing model. We designed calibration and parameter estimation tools to infer values for hydrologic and metabolic parameters based on time series of water quality data, achieving the ultimate goal of estimating whole-river gross primary production and ecosystem respiration during dynamic flow conditions. Our case study data for model design and calibration were collected in the tailwater of Glen Canyon Dam (Arizona, USA), a large hydropower facility where the mean discharge was 325 m³ s⁻¹ and the average daily coefficient of variation of flow was 0.17 (i.e. the hydropeaking index averaged from 2006 to 2016). We demonstrate the coupled model’s conceptual consistency with conventional models during steady flow conditions, and illustrate the potential bias in metabolism estimates with conventional models during unsteady flow conditions. This effort contributes an approach to solute transport modeling and parameter estimation that allows study of whole-ecosystem metabolic regimes across a more diverse range of hydrologic conditions commonly encountered in streams and rivers.
Neustifter, Benjamin; Rathbun, Stephen L; Shiffman, Saul
2012-01-01
Ecological Momentary Assessment is an emerging method of data collection in behavioral research that may be used to capture the times of repeated behavioral events on electronic devices, and information on subjects' psychological states through the electronic administration of questionnaires at times selected from a probability-based design as well as the event times. A method for fitting a mixed Poisson point process model is proposed for the impact of partially-observed, time-varying covariates on the timing of repeated behavioral events. A random frailty is included in the point-process intensity to describe variation among subjects in baseline rates of event occurrence. Covariate coefficients are estimated using estimating equations constructed by replacing the integrated intensity in the Poisson score equations with a design-unbiased estimator. An estimator is also proposed for the variance of the random frailties. Our estimators are robust in the sense that no model assumptions are made regarding the distribution of the time-varying covariates or the distribution of the random effects. However, subject effects are estimated under gamma frailties using an approximate hierarchical likelihood. The proposed approach is illustrated using smoking data.
Methodology for estimation of time-dependent surface heat flux due to cryogen spray cooling.
Tunnell, James W; Torres, Jorge H; Anvari, Bahman
2002-01-01
Cryogen spray cooling (CSC) is an effective technique to protect the epidermis during cutaneous laser therapies. Spraying a cryogen onto the skin surface creates a time-varying heat flux, effectively cooling the skin during and following the cryogen spurt. In previous studies mathematical models were developed to predict the human skin temperature profiles during the cryogen spraying time. However, no studies have accounted for the additional cooling due to residual cryogen left on the skin surface following the spurt termination. We formulate and solve an inverse heat conduction (IHC) problem to predict the time-varying surface heat flux both during and following a cryogen spurt. The IHC formulation uses measured temperature profiles from within a medium to estimate the surface heat flux. We implement a one-dimensional sequential function specification method (SFSM) to estimate the surface heat flux from internal temperatures measured within an in vitro model in response to a cryogen spurt. Solution accuracy and experimental errors are examined using simulated temperature data. Heat flux following spurt termination appears substantial; however, it is less than that during the spraying time. The estimated time-varying heat flux can subsequently be used in forward heat conduction models to estimate temperature profiles in skin during and following a cryogen spurt and predict appropriate timing for onset of the laser pulse.
Ellersieck, Mark R., Amha Asfaw, Foster L. Mayer, Gary F. Krause, Kai Sun and Gunhee Lee. 2003. Acute-to-Chronic Estimation (ACE v2.0) with Time-Concentration-Effect Models: User Manual and Software. EPA/600/R-03/107. U.S. Environmental Protection Agency, National Health and Envi...
The use of models for estimating emissions from products beyond the timeframe of an emissions test is a means of managing the time and expenses associated with product emissions certification. This paper presents a discussion of (1) the impact of uncertainty in test chamber emiss...
Jamie S. Sanderlin; Peter M. Waser; James E. Hines; James D. Nichols
2012-01-01
Metapopulation ecology has historically been rich in theory, yet analytical approaches for inferring demographic relationships among local populations have been few. We show how reverse-time multi-state capture-recapture models can be used to estimate the importance of local recruitment and interpopulation dispersal to metapopulation growth. We use 'contribution...
Temporal validation for landsat-based volume estimation model
Renaldo J. Arroyo; Emily B. Schultz; Thomas G. Matney; David L. Evans; Zhaofei Fan
2015-01-01
Satellite imagery can potentially reduce the costs and time associated with ground-based forest inventories; however, for satellite imagery to provide reliable forest inventory data, it must produce consistent results from one time period to the next. The objective of this study was to temporally validate a Landsat-based volume estimation model in a four county study...
Estimation of steady-state leakage current in polycrystalline PZT thin films
NASA Astrophysics Data System (ADS)
Podgorny, Yury; Vorotilov, Konstantin; Sigov, Alexander
2016-09-01
Estimation of the steady state (or "true") leakage current Js in polycrystalline ferroelectric PZT films with the use of the voltage-step technique is discussed. Curie-von Schweidler (CvS) and sum-of-exponents (Σexp) models are studied for current-time J(t) data fitting. The Σexp model (a sum of three or two exponents) gives better fitting characteristics and provides good accuracy of Js estimation at reduced measurement time, thus making it possible to avoid film degradation, whereas the CvS model is very sensitive to both the start and finish time points and in many cases gives incorrect results. The results suggest the existence of low-frequency relaxation processes in PZT films with characteristic durations of tens and hundreds of seconds.
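A minimal sketch of the two fitting models named above, applied to synthetic current-time data with scipy's curve_fit; the functional forms follow the abstract (Js + A·t^-n for CvS, Js plus a sum of exponentials for Σexp), but the parameter values, noise level, and time window are assumptions, not PZT measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Minimal sketch comparing the two fitting models named in the abstract for
# extracting the steady-state leakage current Js from a current-time transient.
# Currents are in nA and all values are illustrative, not PZT measurements.

def cvs(t, Js, A, n):                    # Curie-von Schweidler: Js + A * t^-n
    return Js + A * t ** (-n)

def sum_exp(t, Js, A1, tau1, A2, tau2):  # sum of two exponentials
    return Js + A1 * np.exp(-t / tau1) + A2 * np.exp(-t / tau2)

rng = np.random.default_rng(3)
t = np.linspace(1.0, 600.0, 300)                       # seconds
j_obs = sum_exp(t, 1.0, 50.0, 20.0, 10.0, 150.0)       # "true" transient, nA
j_obs *= rng.normal(1.0, 0.02, t.size)                 # 2% multiplicative noise

p_cvs, _ = curve_fit(cvs, t, j_obs, p0=(1.0, 100.0, 0.5), maxfev=20000)
p_exp, _ = curve_fit(sum_exp, t, j_obs,
                     p0=(1.0, 10.0, 10.0, 10.0, 100.0), maxfev=20000)
print(f"Js (CvS fit):        {p_cvs[0]:.2f} nA")
print(f"Js (sum-of-exp fit): {p_exp[0]:.2f} nA")
```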
Nichols, J.D.; Pollock, K.H.
1983-01-01
Capture-recapture models can be used to estimate parameters of interest from paleobiological data when encounter probabilities are unknown and variable over time. These models also permit estimation of sampling variances, and goodness-of-fit tests are available for assessing the fit of data to most models. The authors describe capture-recapture models which should be useful in paleobiological analyses and discuss the assumptions which underlie them. They illustrate these models with examples and discuss aspects of study design.
ERIC Educational Resources Information Center
Henson, James M.; Reise, Steven P.; Kim, Kevin H.
2007-01-01
The accuracy of structural model parameter estimates in latent variable mixture modeling was explored with a 3 (sample size) × 3 (exogenous latent mean difference) × 3 (endogenous latent mean difference) × 3 (correlation between factors) × 3 (mixture proportions) factorial design. In addition, the efficacy of several…
Transient modeling in simulation of hospital operations for emergency response.
Paul, Jomon Aliyas; George, Santhosh K; Yi, Pengfei; Lin, Li
2006-01-01
Rapid estimates of hospital capacity after an event that may cause a disaster can assist disaster-relief efforts. Due to the dynamics of hospitals, following such an event, it is necessary to accurately model the behavior of the system. A transient modeling approach using simulation and exponential functions is presented, along with its applications in an earthquake situation. The parameters of the exponential model are regressed using outputs from designed simulation experiments. The developed model is capable of representing transient patient waiting times during a disaster. Most importantly, the modeling approach allows real-time capacity estimation of hospitals of various sizes and capabilities. Further, this research analyzes the effects of priority-based routing of patients within the hospital on patient waiting times, determined using various patient mixes. The model guides the patients based on the severity of injuries and queues the patients requiring critical care depending on their remaining survivability time. The model also accounts for the impact of prehospital transport time on patient waiting time.
Stedman, Margaret R; Feuer, Eric J; Mariotto, Angela B
2014-11-01
The probability of cure is a long-term prognostic measure of cancer survival. Estimates of the cure fraction, the proportion of patients "cured" of the disease, are based on extrapolating survival models beyond the range of data. The objective of this work is to evaluate the sensitivity of cure fraction estimates to model choice and study design. Data were obtained from the Surveillance, Epidemiology, and End Results (SEER)-9 registries to construct a cohort of breast and colorectal cancer patients diagnosed from 1975 to 1985. In a sensitivity analysis, cure fraction estimates are compared from different study designs with short- and long-term follow-up. Methods tested include: cause-specific and relative survival, parametric mixture, and flexible models. In a separate analysis, estimates are projected for 2008 diagnoses using study designs including the full cohort (1975-2008 diagnoses) and restricted to recent diagnoses (1998-2008) with follow-up to 2009. We show that flexible models often provide higher estimates of the cure fraction compared to parametric mixture models. Log normal models generate lower estimates than Weibull parametric models. In general, 12 years is enough follow-up time to estimate the cure fraction for regional and distant stage colorectal cancer but not for breast cancer. 2008 colorectal cure projections show a 15% increase in the cure fraction since 1985. Estimates of the cure fraction are model and study design dependent. It is best to compare results from multiple models and examine model fit to determine the reliability of the estimate. Early-stage cancers are sensitive to survival type and follow-up time because of their longer survival. More flexible models are susceptible to slight fluctuations in the shape of the survival curve which can influence the stability of the estimate; however, stability may be improved by lengthening follow-up and restricting the cohort to reduce heterogeneity in the data. Published by Oxford University Press 2014.
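For reference, one common parametric mixture-cure formulation of the kind compared in the abstract writes the overall survival function as a mixture of cured and uncured patients; with a Weibull latency distribution this reads as below (symbols are generic, not the paper's notation).

```latex
% Parametric mixture-cure model: \pi is the cure fraction and S_u the survival
% function of the uncured, shown here with a Weibull latency distribution.
S(t) = \pi + (1 - \pi)\, S_u(t),
\qquad
S_u(t) = \exp\!\left[ -\left( t / \lambda \right)^{k} \right],
\qquad
\lim_{t \to \infty} S(t) = \pi .
```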
NASA Astrophysics Data System (ADS)
Zapata, D.; Salazar, M.; Chaves, B.; Keller, M.; Hoogenboom, G.
2015-12-01
Thermal time models have been used to predict the development of many different species, including grapevine (Vitis vinifera L.). These models normally assume that there is a linear relationship between temperature and plant development. The goal of this study was to estimate the base temperature and duration in terms of thermal time for predicting veraison for four grapevine cultivars. Historical phenological data for four cultivars that were collected in the Pacific Northwest were used to develop the thermal time model. Base temperatures (Tb) of 0 and 10 °C and the best estimated Tb using three different methods were evaluated for predicting veraison in grapevine. Thermal time requirements for each individual cultivar were evaluated through analysis of variance, and means were compared using the Fisher's test. The methods that were applied to estimate Tb for the development of wine grapes included the least standard deviation in heat units, the regression coefficient, and the development rate method. The estimated Tb varied among methods and cultivars. The development rate method provided the lowest Tb values for all cultivars. For the three methods, Chardonnay had the lowest Tb ranging from 8.7 to 10.7 °C, while the highest Tb values were obtained for Riesling and Cabernet Sauvignon with 11.8 and 12.8 °C, respectively. Thermal time also differed among cultivars, when either the fixed or estimated Tb was used. Predictions of the beginning of ripening with the estimated temperature resulted in the lowest variation in real days when compared with predictions using Tb = 0 or 10 °C, regardless of the method that was used to estimate the Tb.
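The sketch below illustrates the "least standard deviation in heat units" idea for selecting Tb: accumulate degree-days above each candidate base temperature up to the observed veraison date in each season and choose the Tb giving the smallest spread across seasons. The temperature series and veraison dates are synthetic assumptions, not the study's data.

```python
import numpy as np

# Minimal sketch of base-temperature selection by the "least standard deviation
# in heat units" method, using synthetic daily mean temperatures and synthetic
# veraison dates (not the Pacific Northwest data).

def thermal_time(tmean, tb):
    return np.sum(np.maximum(tmean - tb, 0.0))   # degree-days above Tb

rng = np.random.default_rng(4)
seasons = []
for year in range(10):
    tmean = 18 + 6 * np.sin(np.linspace(0, np.pi, 220)) + rng.normal(0, 1.5, 220)
    veraison_day = 150 + rng.integers(-8, 9)     # observed day index, varies by year
    seasons.append((tmean, veraison_day))

candidates = np.arange(0.0, 14.5, 0.5)
sd_by_tb = []
for tb in candidates:
    totals = [thermal_time(tmean[:day], tb) for tmean, day in seasons]
    sd_by_tb.append(np.std(totals))

best_tb = candidates[int(np.argmin(sd_by_tb))]
print(f"Tb with least SD of heat units: {best_tb:.1f} C")
```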
Phadnis, Milind A.; Shireman, Theresa I.; Wetmore, James B.; Rigler, Sally K.; Zhou, Xinhua; Spertus, John A.; Ellerbeck, Edward F.; Mahnken, Jonathan D.
2014-01-01
In a population of chronic dialysis patients with an extensive burden of cardiovascular disease, estimation of the effectiveness of cardioprotective medication in literature is based on calculation of a hazard ratio comparing hazard of mortality for two groups (with or without drug exposure) measured at a single point in time or through the cumulative metric of proportion of days covered (PDC) on medication. Though both approaches can be modeled in a time-dependent manner using a Cox regression model, we propose a more complete time-dependent metric for evaluating cardioprotective medication efficacy. We consider that drug effectiveness is potentially the result of interactions between three time-dependent covariate measures, current drug usage status (ON versus OFF), proportion of cumulative exposure to drug at a given point in time, and the patient’s switching behavior between taking and not taking the medication. We show that modeling of all three of these time-dependent measures illustrates more clearly how varying patterns of drug exposure affect drug effectiveness, which could remain obscured when modeled by the more standard single time-dependent covariate approaches. We propose that understanding the nature and directionality of these interactions will help the biopharmaceutical industry in better estimating drug efficacy. PMID:25343005
NASA Technical Reports Server (NTRS)
Stutzman, W. L.; Dishman, W. K.
1982-01-01
A simple attenuation model (SAM) is presented for estimating rain-induced attenuation along an earth-space path. The rain model uses an effective spatial rain distribution which is uniform for low rain rates and which has an exponentially shaped horizontal rain profile for high rain rates. When compared to other models, the SAM performed well in the important region of low percentages of time, and had the lowest percent standard deviation of all percent time values tested.
Using the entire history in the analysis of nested case cohort samples.
Rivera, C L; Lumley, T
2016-08-15
Countermatching designs can provide more efficient estimates than simple matching or case-cohort designs in certain situations such as when good surrogate variables for an exposure of interest are available. We extend pseudolikelihood estimation for the Cox model under countermatching designs to models where time-varying covariates are considered. We also implement pseudolikelihood with calibrated weights to improve efficiency in nested case-control designs in the presence of time-varying variables. A simulation study is carried out, which considers four different scenarios including a binary time-dependent variable, a continuous time-dependent variable, and the case including interactions in each. Simulation results show that pseudolikelihood with calibrated weights under countermatching offers large gains in efficiency if compared to case-cohort. Pseudolikelihood with calibrated weights yielded more efficient estimators than pseudolikelihood estimators. Additionally, estimators were more efficient under countermatching than under case-cohort for the situations considered. The methods are illustrated using the Colorado Plateau uranium miners cohort. Furthermore, we present a general method to generate survival times with time-varying covariates. Copyright © 2016 John Wiley & Sons, Ltd.
Aerodynamic Parameters of High Performance Aircraft Estimated from Wind Tunnel and Flight Test Data
NASA Technical Reports Server (NTRS)
Klein, Vladislav; Murphy, Patrick C.
1998-01-01
A concept of system identification applied to high performance aircraft is introduced followed by a discussion on the identification methodology. Special emphasis is given to model postulation using time invariant and time dependent aerodynamic parameters, model structure determination and parameter estimation using ordinary least squares and mixed estimation methods. At the same time, problems of data collinearity detection and its assessment are discussed. These parts of the methodology are demonstrated in examples using flight data of the X-29A and X-31A aircraft. In the third example wind tunnel oscillatory data of the F-16XL model are used. A strong dependence of these data on frequency led to the development of models with unsteady aerodynamic terms in the form of indicial functions. The paper is completed by concluding remarks.
Jewett, Ethan M; Steinrücken, Matthias; Song, Yun S
2016-11-01
Many approaches have been developed for inferring selection coefficients from time series data while accounting for genetic drift. These approaches have been motivated by the intuition that properly accounting for the population size history can significantly improve estimates of selective strengths. However, the improvement in inference accuracy that can be attained by modeling drift has not been characterized. Here, by comparing maximum likelihood estimates of selection coefficients that account for the true population size history with estimates that ignore drift by assuming allele frequencies evolve deterministically in a population of infinite size, we address the following questions: how much can modeling the population size history improve estimates of selection coefficients? How much can mis-inferred population sizes hurt inferences of selection coefficients? We conduct our analysis under the discrete Wright-Fisher model by deriving the exact probability of an allele frequency trajectory in a population of time-varying size and we replicate our results under the diffusion model. For both models, we find that ignoring drift leads to estimates of selection coefficients that are nearly as accurate as estimates that account for the true population history, even when population sizes are small and drift is high. This result is of interest because inference methods that ignore drift are widely used in evolutionary studies and can be many orders of magnitude faster than methods that account for population sizes. © The Author 2016. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
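For reference, the deterministic (drift-free) trajectory that such "ignore drift" estimators fit is the infinite-population allele-frequency recursion; under simple genic selection with relative fitness 1+s for the focal allele it takes the form below. The exact fitness parameterization used in the paper may differ.

```latex
% Deterministic allele-frequency recursion in an infinite population under
% genic selection (relative fitness 1+s for the focal allele); this is the
% drift-free trajectory fitted when genetic drift is ignored.
p_{t+1} \;=\; \frac{(1+s)\, p_t}{(1+s)\, p_t + (1 - p_t)}
        \;=\; \frac{(1+s)\, p_t}{1 + s\, p_t}.
```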
Lee, Peter N; Fry, John S; Thornton, Alison J
2014-02-01
We attempted to quantify the decline in stroke risk following quitting using the negative exponential model, with methodology previously employed for IHD. We identified 22 blocks of RRs (from 13 studies) comparing current smokers, former smokers (by time quit) and never smokers. Corresponding pseudo-numbers of cases and controls/at risk formed the data for model-fitting. We tried to estimate the half-life (H, time since quit when the excess risk becomes half that for a continuing smoker) for each block. The method failed to converge or produced very variable estimates of H in nine blocks with a current smoker RR <1.40. Rejecting these, and combining blocks by amount smoked in one study where problems arose in model-fitting, the final analyses used 11 blocks. Goodness-of-fit was adequate for each block, the combined estimate of H being 4.78(95%CI 2.17-10.50) years. However, considerable heterogeneity existed, unexplained by any factor studied, with the random-effects estimate 3.08(1.32-7.16). Sensitivity analyses allowing for reverse causation or differing assumed times for the final quitting period gave similar results. The estimates of H are similar for stroke and IHD, and the individual estimates similarly heterogeneous. Fitting the model is harder for stroke, due to its weaker association with smoking. Copyright © 2013 The Authors. Published by Elsevier Inc. All rights reserved.
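The negative exponential model referenced above can be written, consistently with the abstract's definition of the half-life H, as follows (RR_s denotes the continuing-smoker relative risk; the notation is assumed rather than quoted from the paper).

```latex
% Negative exponential decline of the excess risk after quitting; H is the time
% since quitting at which the excess risk is half that of a continuing smoker.
\mathrm{RR}(t) \;=\; 1 + \left(\mathrm{RR}_s - 1\right) e^{-\lambda t},
\qquad
H \;=\; \frac{\ln 2}{\lambda}.
```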
NASA Astrophysics Data System (ADS)
Baker, Kirk R.; Hawkins, Andy; Kelly, James T.
2014-12-01
Near source modeling is needed to assess primary and secondary pollutant impacts from single sources and single source complexes. Source-receptor relationships need to be resolved from tens of meters to tens of kilometers. Dispersion models are typically applied for near-source primary pollutant impacts but lack complex photochemistry. Photochemical models provide a realistic chemical environment but are typically applied using grid cell sizes that may be larger than the distance between sources and receptors. It is important to understand the impacts of grid resolution and sub-grid plume treatments on photochemical modeling of near-source primary pollution gradients. Here, the CAMx photochemical grid model is applied using multiple grid resolutions and sub-grid plume treatment for SO2 and compared with a receptor mesonet largely impacted by nearby sources approximately 3-17 km away in a complex terrain environment. Measurements are compared with model estimates of SO2 at 4- and 1-km resolution, both with and without sub-grid plume treatment and inclusion of finer two-way grid nests. Annual average estimated SO2 mixing ratios are highest nearest the sources and decrease as distance from the sources increase. In general, CAMx estimates of SO2 do not compare well with the near-source observations when paired in space and time. Given the proximity of these sources and receptors, accuracy in wind vector estimation is critical for applications that pair pollutant predictions and observations in time and space. In typical permit applications, predictions and observations are not paired in time and space and the entire distributions of each are directly compared. Using this approach, model estimates using 1-km grid resolution best match the distribution of observations and are most comparable to similar studies that used dispersion and Lagrangian modeling systems. Model-estimated SO2 increases as grid cell size decreases from 4 km to 250 m. However, it is notable that the 1-km model estimates using 1-km meteorological model input are higher than the 1-km model simulation that used interpolated 4-km meteorology. The inclusion of sub-grid plume treatment did not improve model skill in predicting SO2 in time and space and generally acts to keep emitted mass aloft.
Pecha, Petr; Šmídl, Václav
2016-11-01
A stepwise sequential assimilation algorithm is proposed based on an optimisation approach for recursive parameter estimation and tracking of radioactive plume propagation in the early stage of a radiation accident. Predictions of the radiological situation in each time step of the plume propagation are driven by an existing short-term meteorological forecast and the assimilation procedure manipulates the model parameters to match the observations incoming concurrently from the terrain. Mathematically, the task is a typical ill-posed inverse problem of estimating the parameters of the release. The proposed method is designated as a stepwise re-estimation of the source term release dynamics and an improvement of several input model parameters. It results in a more precise determination of the adversely affected areas in the terrain. The nonlinear least-squares regression methodology is applied for estimation of the unknowns. The fast and adequately accurate segmented Gaussian plume model (SGPM) is used in the first stage of direct (forward) modelling. The subsequent inverse procedure infers (re-estimates) the values of important model parameters from the actual observations. Accuracy and sensitivity of the proposed method for real-time forecasting of the accident propagation is studied. First, a twin experiment generating noiseless simulated "artificial" observations is studied to verify the minimisation algorithm. Second, the impact of the measurement noise on the re-estimated source release rate is examined. In addition, the presented method can be used as a proposal for more advanced statistical techniques using, e.g., importance sampling. Copyright © 2016 Elsevier Ltd. All rights reserved.
Time-Dependent Moment Tensors of the First Four Source Physics Experiments (SPE) Explosions
NASA Astrophysics Data System (ADS)
Yang, X.
2015-12-01
We use mainly vertical-component geophone data within 2 km from the epicenter to invert for time-dependent moment tensors of the first four SPE explosions: SPE-1, SPE-2, SPE-3 and SPE-4Prime. We employ a one-dimensional (1D) velocity model developed from P- and Rg-wave travel times for Green's function calculations. The attenuation structure of the model is developed from P- and Rg-wave amplitudes. We select data for the inversion based on the criterion that they show consistent travel times and amplitude behavior as those predicted by the 1D model. Due to limited azimuthal coverage of the sources and the mostly vertical-component-only nature of the dataset, only long-period, diagonal components of the moment tensors are well constrained. Nevertheless, the moment tensors, particularly their isotropic components, provide reasonable estimates of the long-period source amplitudes as well as estimates of corner frequencies, albeit with larger uncertainties. The estimated corner frequencies, however, are consistent with estimates from ratios of seismogram spectra from different explosions. These long-period source amplitudes and corner frequencies cannot be fit by classical P-wave explosion source models. The results motivate the development of new P-wave source models suitable for these chemical explosions. To that end, we fit inverted moment-tensor spectra by modifying the classical explosion model using regressions of estimated source parameters. Although the number of data points used in the regression is small, the approach suggests a way for the new-model development when more data are collected.
Sampling through time and phylodynamic inference with coalescent and birth–death models
Volz, Erik M.; Frost, Simon D. W.
2014-01-01
Many population genetic models have been developed for the purpose of inferring population size and growth rates from random samples of genetic data. We examine two popular approaches to this problem, the coalescent and the birth–death-sampling model (BDM), in the context of estimating population size and birth rates in a population growing exponentially according to the birth–death branching process. For sequences sampled at a single time, we found the coalescent and the BDM gave virtually indistinguishable results in terms of the growth rates and fraction of the population sampled, even when sampling from a small population. For sequences sampled at multiple time points, we find that the birth–death model estimators are subject to large bias if the sampling process is misspecified. Since BDMs incorporate a model of the sampling process, we show how much of the statistical power of BDMs arises from the sequence of sample times and not from the genealogical tree. This motivates the development of a new coalescent estimator, which is augmented with a model of the known sampling process and is potentially more precise than the coalescent that does not use sample time information. PMID:25401173
A new estimator method for GARCH models
NASA Astrophysics Data System (ADS)
Onody, R. N.; Favaro, G. M.; Cazaroto, E. R.
2007-06-01
The GARCH(p, q) model is a very interesting stochastic process with widespread applications and a central role in empirical finance. The Markovian GARCH(1, 1) model has only 3 control parameters and a much discussed question is how to estimate them when a series of some financial asset is given. Besides the maximum likelihood estimator technique, there is another method which uses the variance, the kurtosis and the autocorrelation time to determine them. We propose here to use the standardized 6th moment. The set of parameters obtained in this way produces a very good probability density function and a much better time autocorrelation function. This is true for both studied indexes: NYSE Composite and FTSE 100. The probability of return to the origin is investigated at different time horizons for both Gaussian and Laplacian GARCH models. In spite of the fact that these models show almost identical performances with respect to the final probability density function and to the time autocorrelation function, their scaling properties are, however, very different. The Laplacian GARCH model gives a better scaling exponent for the NYSE time series, whereas the Gaussian dynamics fits better the FTSE scaling exponent.
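As a sketch of the ingredients of such moment-based estimation, the code below simulates a Gaussian GARCH(1,1) series and compares the sample variance and kurtosis with their theoretical values; the parameter values are illustrative, and the abstract's estimator additionally uses the autocorrelation time and the standardized 6th moment to pin down all three parameters.

```python
import numpy as np

# Minimal sketch of the moment inputs to GARCH(1,1) estimation: simulate
# returns with Gaussian innovations and compare sample variance and kurtosis
# with their theoretical values. Parameter values are illustrative only.
rng = np.random.default_rng(5)
omega, alpha, beta = 1e-5, 0.08, 0.90
T = 100_000
r = np.empty(T)
sigma2 = omega / (1 - alpha - beta)          # start at the unconditional variance
for t in range(T):
    r[t] = np.sqrt(sigma2) * rng.standard_normal()
    sigma2 = omega + alpha * r[t] ** 2 + beta * sigma2

var_theory = omega / (1 - alpha - beta)
ab = alpha + beta
kurt_theory = 3 * (1 - ab**2) / (1 - ab**2 - 2 * alpha**2)  # Gaussian innovations

print(f"sample variance  {r.var():.3e}   theory {var_theory:.3e}")
print(f"sample kurtosis  {np.mean(r**4) / r.var()**2:.2f}   theory {kurt_theory:.2f}")
```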
Real-time hydraulic interval state estimation for water transport networks: a case study
NASA Astrophysics Data System (ADS)
Vrachimis, Stelios G.; Eliades, Demetrios G.; Polycarpou, Marios M.
2018-03-01
Hydraulic state estimation in water distribution networks is the task of estimating water flows and pressures in the pipes and nodes of the network based on some sensor measurements. This requires a model of the network as well as knowledge of demand outflow and tank water levels. Due to modeling and measurement uncertainty, standard state estimation may result in inaccurate hydraulic estimates without any measure of the estimation error. This paper describes a methodology for generating hydraulic state bounding estimates based on interval bounds on the parametric and measurement uncertainties. The estimation error bounds provided by this method can be applied to determine the existence of unaccounted-for water in water distribution networks. As a case study, the method is applied to a modified transport network in Cyprus, using actual data in real time.
RB Particle Filter Time Synchronization Algorithm Based on the DPM Model.
Guo, Chunsheng; Shen, Jia; Sun, Yao; Ying, Na
2015-09-03
Time synchronization is essential for node localization, target tracking, data fusion, and various other Wireless Sensor Network (WSN) applications. To improve the estimation accuracy of continuous clock offset and skew of mobile nodes in WSNs, we propose a novel time synchronization algorithm, the Rao-Blackwellised (RB) particle filter time synchronization algorithm based on the Dirichlet process mixture (DPM) model. In a state-space equation with a linear substructure, state variables are divided into linear and non-linear variables by the RB particle filter algorithm. These two variables can be estimated using Kalman filter and particle filter, respectively, which improves the computational efficiency more so than if only the particle filter was used. In addition, the DPM model is used to describe the distribution of non-deterministic delays and to automatically adjust the number of Gaussian mixture model components based on the observational data. This improves the estimation accuracy of clock offset and skew, which allows achieving the time synchronization. The time synchronization performance of this algorithm is also validated by computer simulations and experimental measurements. The results show that the proposed algorithm has a higher time synchronization precision than traditional time synchronization algorithms.
Contour-based object orientation estimation
NASA Astrophysics Data System (ADS)
Alpatov, Boris; Babayan, Pavel
2016-04-01
Real-time object orientation estimation is a current problem in computer vision. In this paper we propose an approach to estimate the orientation of objects lacking axial symmetry. The proposed algorithm is intended to estimate the orientation of a specific known 3D object, so a 3D model is required for learning. The proposed orientation estimation algorithm consists of two stages: learning and estimation. The learning stage is devoted to exploring the studied object. Using the 3D model, we gather a set of training images by capturing the model from viewpoints evenly distributed on a sphere. The sphere point distribution follows the geosphere principle, which minimizes the training image set. The gathered training image set is used to calculate descriptors, which are then used in the estimation stage of the algorithm. The estimation stage focuses on matching an observed image descriptor against the training image descriptors. The experimental research was performed using a set of images of an Airbus A380. The proposed orientation estimation algorithm showed good accuracy (mean error less than 6°) in all case studies. The real-time performance of the algorithm was also demonstrated.
Miller, Justin B; Axelrod, Bradley N; Schutte, Christian
2012-01-01
The recent release of the Wechsler Memory Scale Fourth Edition contains many improvements from a theoretical and administration perspective, including demographic corrections using the Advanced Clinical Solutions. Although the administration time has been reduced from previous versions, a shortened version may be desirable in certain situations given practical time limitations in clinical practice. The current study evaluated two- and three-subtest estimations of demographically corrected Immediate and Delayed Memory index scores using both simple arithmetic prorating and regression models. All estimated values were significantly associated with observed index scores. Use of Lin's Concordance Correlation Coefficient as a measure of agreement showed a high degree of precision and virtually zero bias in the models, although the regression models showed a stronger association than prorated models. Regression-based models proved to be more accurate than prorated estimates with less dispersion around observed values, particularly when using three subtest regression models. Overall, the present research shows strong support for estimating demographically corrected index scores on the WMS-IV in clinical practice with an adequate performance using arithmetically prorated models and a stronger performance using regression models to predict index scores.
Parameter Estimation for a Model of Space-Time Rainfall
NASA Astrophysics Data System (ADS)
Smith, James A.; Karr, Alan F.
1985-08-01
In this paper, parameter estimation procedures, based on data from a network of rainfall gages, are developed for a class of space-time rainfall models. The models, which are designed to represent the spatial distribution of daily rainfall, have three components, one that governs the temporal occurrence of storms, a second that distributes rain cells spatially for a given storm, and a third that determines the rainfall pattern within a rain cell. Maximum likelihood and method of moments procedures are developed. We illustrate that limitations on model structure are imposed by restricting data sources to rain gage networks. The estimation procedures are applied to a 240-mi² (621 km²) catchment in the Potomac River basin.
Mauro, Francisco; Monleon, Vicente J; Temesgen, Hailemariam; Ford, Kevin R
2017-01-01
Forest inventories require estimates and measures of uncertainty for subpopulations such as management units. These units often times hold a small sample size, so they should be regarded as small areas. When auxiliary information is available, different small area estimation methods have been proposed to obtain reliable estimates for small areas. Unit level empirical best linear unbiased predictors (EBLUP) based on plot or grid unit level models have been studied more thoroughly than area level EBLUPs, where the modelling occurs at the management unit scale. Area level EBLUPs do not require a precise plot positioning and allow the use of variable radius plots, thus reducing fieldwork costs. However, their performance has not been examined thoroughly. We compared unit level and area level EBLUPs, using LiDAR auxiliary information collected for inventorying 98,104 ha coastal coniferous forest. Unit level models were consistently more accurate than area level EBLUPs, and area level EBLUPs were consistently more accurate than field estimates except for large management units that held a large sample. For stand density, volume, basal area, quadratic mean diameter, mean height and Lorey's height, root mean squared errors (rmses) of estimates obtained using area level EBLUPs were, on average, 1.43, 2.83, 2.09, 1.40, 1.32 and 1.64 times larger than those based on unit level estimates, respectively. Similarly, direct field estimates had rmses that were, on average, 1.37, 1.45, 1.17, 1.17, 1.26, and 1.38 times larger than rmses of area level EBLUPs. Therefore, area level models can lead to substantial gains in accuracy compared to direct estimates, and unit level models lead to very important gains in accuracy compared to area level models, potentially justifying the additional costs of obtaining accurate field plot coordinates.
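For context, an area-level EBLUP of the Fay-Herriot type (assumed here as representative of the area-level models discussed, not quoted from the paper) shrinks the direct estimate for each management unit toward a regression synthetic estimate, with a weight driven by the unit's sampling variance:

```latex
% Area-level EBLUP (Fay-Herriot form): shrinkage combination of the direct
% estimate and the regression synthetic estimate for area i, with weight
% gamma_i determined by the sampling variance psi_i and model variance sigma_v^2.
\hat{\theta}_i^{\mathrm{EBLUP}}
  \;=\; \hat{\gamma}_i\, \hat{\theta}_i^{\mathrm{dir}}
   \;+\; \bigl(1 - \hat{\gamma}_i\bigr)\, \mathbf{x}_i^{\top} \hat{\boldsymbol{\beta}},
\qquad
\hat{\gamma}_i \;=\; \frac{\hat{\sigma}_v^{2}}{\hat{\sigma}_v^{2} + \psi_i}.
```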
Structural nested mean models for assessing time-varying effect moderation.
Almirall, Daniel; Ten Have, Thomas; Murphy, Susan A
2010-03-01
This article considers the problem of assessing causal effect moderation in longitudinal settings in which treatment (or exposure) is time varying and so are the covariates said to moderate its effect. Intermediate causal effects that describe time-varying causal effects of treatment conditional on past covariate history are introduced and considered as part of Robins' structural nested mean model. Two estimators of the intermediate causal effects, and their standard errors, are presented and discussed: The first is a proposed two-stage regression estimator. The second is Robins' G-estimator. The results of a small simulation study that begins to shed light on the small versus large sample performance of the estimators, and on the bias-variance trade-off between the two estimators are presented. The methodology is illustrated using longitudinal data from a depression study.
Kusev, Petko; van Schaik, Paul; Tsaneva-Atanasova, Krasimira; Juliusson, Asgeir; Chater, Nick
2018-01-01
When attempting to predict future events, people commonly rely on historical data. One psychological characteristic of judgmental forecasting of time series, established by research, is that when people make forecasts from series, they tend to underestimate future values for upward trends and overestimate them for downward ones, so-called trend-damping (modeled by anchoring on, and insufficient adjustment from, the average of recent time series values). Events in a time series can be experienced sequentially (dynamic mode), or they can also be retrospectively viewed simultaneously (static mode), not experienced individually in real time. In one experiment, we studied the influence of presentation mode (dynamic and static) on two sorts of judgment: (a) predictions of the next event (forecast) and (b) estimation of the average value of all the events in the presented series (average estimation). Participants' responses in dynamic mode were anchored on more recent events than in static mode for all types of judgment but with different consequences; hence, dynamic presentation improved prediction accuracy, but not estimation. These results are not anticipated by existing theoretical accounts; we develop and present an agent-based model-the adaptive anchoring model (ADAM)-to account for the difference between processing sequences of dynamically and statically presented stimuli (visually presented data). ADAM captures how variation in presentation mode produces variation in responses (and the accuracy of these responses) in both forecasting and judgment tasks. ADAM's model predictions for the forecasting and judgment tasks fit better with the response data than a linear-regression time series model. Moreover, ADAM outperformed autoregressive-integrated-moving-average (ARIMA) and exponential-smoothing models, while neither of these models accounts for people's responses on the average estimation task. Copyright © 2017 The Authors. Cognitive Science published by Wiley Periodicals, Inc. on behalf of Cognitive Science Society.
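A minimal sketch of the anchoring-and-insufficient-adjustment mechanism described above (not the ADAM agent-based model itself): the forecast anchors on the mean of the most recent observations and adjusts only partially toward the latest value, which damps both upward and downward trends. The window size and adjustment weight are illustrative assumptions.

    import numpy as np

    def damped_forecast(series, window=5, adjustment=0.6):
        # Anchor on the mean of the last `window` values, then adjust only
        # part of the way (adjustment < 1) toward the most recent observation.
        recent = np.asarray(series[-window:], dtype=float)
        anchor = recent.mean()
        return anchor + adjustment * (recent[-1] - anchor)

    # An upward trend is under-forecast (trend damping): 9.4, below the trend-continuation value 13.
    print(damped_forecast([3, 5, 7, 9, 11]))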
DEKF system for crowding estimation by a multiple-model approach
NASA Astrophysics Data System (ADS)
Cravino, F.; Dellucca, M.; Tesei, A.
1994-03-01
A distributed extended Kalman filter (DEKF) network devoted to real-time crowding estimation for surveillance in complex scenes is presented. Estimation is carried out by extracting a set of significant features from sequences of images. Feature values are associated by virtual sensors with the estimated number of people using nonlinear models obtained in an off-line training phase. Different models are used, depending on the positions and dimensions of the crowded subareas detected in each image.
Stochastic Residual-Error Analysis For Estimating Hydrologic Model Predictive Uncertainty
A hybrid time series-nonparametric sampling approach, referred to herein as semiparametric, is presented for the estimation of model predictive uncertainty. The methodology is a two-step procedure whereby a distributed hydrologic model is first calibrated, then followed by brute ...
Time Series Model Identification by Estimating Information, Memory, and Quantiles.
1983-07-01
Modeling in Real Time During the Ebola Response.
Meltzer, Martin I; Santibanez, Scott; Fischer, Leah S; Merlin, Toby L; Adhikari, Bishwa B; Atkins, Charisma Y; Campbell, Caresse; Fung, Isaac Chun-Hai; Gambhir, Manoj; Gift, Thomas; Greening, Bradford; Gu, Weidong; Jacobson, Evin U; Kahn, Emily B; Carias, Cristina; Nerlander, Lina; Rainisch, Gabriel; Shankar, Manjunath; Wong, Karen; Washington, Michael L
2016-07-08
To aid decision-making during CDC's response to the 2014-2016 Ebola virus disease (Ebola) epidemic in West Africa, CDC activated a Modeling Task Force to generate estimates on various topics related to the response in West Africa and the risk for importation of cases into the United States. Analysis of eight Ebola response modeling projects conducted during August 2014-July 2015 provided insight into the types of questions addressed by modeling, the impact of the estimates generated, and the difficulties encountered during the modeling. This time frame was selected to cover the three phases of the West African epidemic curve. Questions posed to the Modeling Task Force changed as the epidemic progressed. Initially, the task force was asked to estimate the number of cases that might occur if no interventions were implemented compared with cases that might occur if interventions were implemented; however, at the peak of the epidemic, the focus shifted to estimating resource needs for Ebola treatment units. Then, as the epidemic decelerated, requests for modeling changed to generating estimates of the potential number of sexually transmitted Ebola cases. Modeling to provide information for decision-making during the CDC Ebola response involved limited data, a short turnaround time, and difficulty communicating the modeling process, including assumptions and interpretation of results. Despite these challenges, modeling yielded estimates and projections that public health officials used to make key decisions regarding response strategy and resources required. The impact of modeling during the Ebola response demonstrates the usefulness of modeling in future responses, particularly in the early stages and when data are scarce. Future modeling can be enhanced by planning ahead for data needs and data sharing, and by open communication among modelers, scientists, and others to ensure that modeling and its limitations are more clearly understood. The activities summarized in this report would not have been possible without collaboration with many U.S. and international partners (http://www.cdc.gov/vhf/ebola/outbreaks/2014-west-africa/partners.html).
Weissman-Miller, Deborah
2013-11-02
Point estimation is particularly important in predicting weight loss in individuals or small groups. In this analysis, a new health response function is based on a model of human response over time to estimate long-term health outcomes from a change point in short-term linear regression. This important estimation capability is addressed for small groups and single-subject designs in pilot studies for clinical trials, medical and therapeutic clinical practice. These estimations are based on a change point given by parameters derived from short-term participant data in ordinary least squares (OLS) regression. The development of the change point in initial OLS data and the point estimations are given in a new semiparametric ratio estimator (SPRE) model. The new response function is taken as a ratio of two-parameter Weibull distributions times a prior outcome value that steps estimated outcomes forward in time, where the shape and scale parameters are estimated at the change point. The Weibull distributions used in this ratio are derived from a Kelvin model in mechanics taken here to represent human beings. A distinct feature of the SPRE model in this article is that initial treatment response for a small group or a single subject is reflected in long-term response to treatment. This model is applied to weight loss in obesity in a secondary analysis of data from a classic weight loss study, which has been selected due to the dramatic increase in obesity in the United States over the past 20 years. A very small relative error of estimated to test data is shown for obesity treatment with the weight loss medication phentermine or placebo for the test dataset. An application of SPRE in clinical medicine or occupational therapy is to estimate long-term weight loss for a single subject or a small group near the beginning of treatment.
Cognitive diagnosis modelling incorporating item response times.
Zhan, Peida; Jiao, Hong; Liao, Dandan
2018-05-01
To provide more refined diagnostic feedback with collateral information in item response times (RTs), this study proposed joint modelling of attributes and response speed using item responses and RTs simultaneously for cognitive diagnosis. For illustration, an extended deterministic input, noisy 'and' gate (DINA) model was proposed for joint modelling of responses and RTs. Model parameter estimation was explored using the Bayesian Markov chain Monte Carlo (MCMC) method. The PISA 2012 computer-based mathematics data were analysed first. These real data estimates were treated as true values in a subsequent simulation study. A follow-up simulation study with ideal testing conditions was conducted as well to further evaluate model parameter recovery. The results indicated that model parameters could be well recovered using the MCMC approach. Further, incorporating RTs into the DINA model would improve attribute and profile correct classification rates and result in more accurate and precise estimation of the model parameters. © 2017 The British Psychological Society.
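For readers less familiar with the DINA model that the joint model above extends, here is a minimal sketch of its correct-response probability; it covers only the response side (guessing and slip parameters), not the response-time component or the MCMC estimation, and the array layout is an assumption.

    import numpy as np

    def dina_correct_prob(alpha, q, guess, slip):
        # eta = 1 if the examinee masters every attribute the item requires
        # (per the item's Q-matrix row q), else 0.
        eta = np.all(alpha >= q, axis=-1).astype(float)
        return guess ** (1 - eta) * (1 - slip) ** eta

    # Examinee mastering attributes 1 and 3, item requiring attributes 1 and 3:
    p = dina_correct_prob(np.array([1, 0, 1]), np.array([1, 0, 1]), guess=0.2, slip=0.1)  # -> 0.9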
Fujii, Keisuke; Shinya, Masahiro; Yamashita, Daichi; Kouzaki, Motoki; Oda, Shingo
2014-01-01
We previously estimated the timing when ball game defenders detect relevant information through visual input for reacting to an attacker's running direction after a cutting manoeuvre, called cue timing. The purpose of this study was to investigate what specific information is relevant for defenders, and how defenders process this information to decide on their opponents' running direction. In this study, we hypothesised that defenders extract information regarding the position and velocity of the attackers' centre of mass (CoM) and the contact foot. We used a model which simulates the future trajectory of the opponent's CoM based upon an inverted pendulum movement. The hypothesis was tested by comparing observed defender's cue timing, model-estimated cue timing using the inverted pendulum model (IPM cue timing) and cue timing using only the current CoM position (CoM cue timing). The IPM cue timing was defined as the time when the simulated pendulum falls leftward or rightward given the initial values for position and velocity of the CoM and the contact foot at the time. The model-estimated IPM cue timing and the empirically observed defender's cue timing were comparable in median value and were significantly correlated, whereas the CoM cue timing was significantly more delayed than the IPM and the defender's cue timings. Based on these results, we discuss the possibility that defenders may be able to anticipate the future direction of an attacker by forwardly simulating inverted pendulum movement.
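A crude stand-in for the inverted-pendulum reasoning above, assuming a linearised pendulum in which the horizontal CoM offset x from the contact foot obeys x'' = (g/L) x; the leg length, fall threshold and time step are illustrative assumptions, not values from the study.

    def pendulum_fall_side(com_offset, com_velocity, leg_length=1.0, g=9.81,
                           dt=0.001, horizon=1.0, threshold=0.3):
        # Integrate the linearised inverted pendulum and report which side the
        # CoM falls toward (and when) within the time horizon.
        x, v = float(com_offset), float(com_velocity)
        steps = int(horizon / dt)
        for step in range(steps):
            v += (g / leg_length) * x * dt
            x += v * dt
            if abs(x) > threshold:
                return ('right' if x > 0 else 'left'), step * dt
        return 'undecided', horizon

    side, t_fall = pendulum_fall_side(com_offset=0.05, com_velocity=0.2)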
Postmortem time estimation using body temperature and a finite-element computer model.
den Hartog, Emiel A; Lotens, Wouter A
2004-09-01
In the Netherlands most murder victims are found 2-24 h after the crime. During this period, body temperature decrease is the most reliable method to estimate the postmortem time (PMT). Recently, two murder cases were analysed in which currently available methods did not provide a sufficiently reliable estimate of the PMT. In both cases a study was performed to verify the statements of suspects. For this purpose a finite-element computer model was developed that simulates a human torso and its clothing. With this model, changes to the body and the environment can also be modelled; this was very relevant in one of the cases, as the body had been in the presence of a small fire. In both cases it was possible to falsify the statements of the suspects by improving the accuracy of the PMT estimate. The estimated PMT in both cases was within the range of Henssge's model. The standard deviation of the PMT estimate was 35 min in the first case and 45 min in the second case, compared to 168 min (2.8 h) in Henssge's model. In conclusion, the model as presented here can have additional value for improving the accuracy of the PMT estimate. In contrast to the simple model of Henssge, the current model allows for increased accuracy when more detailed information is available. Moreover, the sensitivity of the predicted PMT for uncertainty in the circumstances can be studied, which is crucial to the confidence of the judge in the results.
Liang, Hua; Miao, Hongyu; Wu, Hulin
2010-03-01
Modeling viral dynamics in HIV/AIDS studies has resulted in deep understanding of pathogenesis of HIV infection from which novel antiviral treatment guidance and strategies have been derived. Viral dynamics models based on nonlinear differential equations have been proposed and well developed over the past few decades. However, it is quite challenging to use experimental or clinical data to estimate the unknown parameters (both constant and time-varying parameters) in complex nonlinear differential equation models. Therefore, investigators usually fix some parameter values, from the literature or by experience, to obtain only parameter estimates of interest from clinical or experimental data. However, when such prior information is not available, it is desirable to determine all the parameter estimates from data. In this paper, we intend to combine the newly developed approaches, a multi-stage smoothing-based (MSSB) method and the spline-enhanced nonlinear least squares (SNLS) approach, to estimate all HIV viral dynamic parameters in a nonlinear differential equation model. In particular, to the best of our knowledge, this is the first attempt to propose a comparatively thorough procedure, accounting for both efficiency and accuracy, to rigorously estimate all key kinetic parameters in a nonlinear differential equation model of HIV dynamics from clinical data. These parameters include the proliferation rate and death rate of uninfected HIV-targeted cells, the average number of virions produced by an infected cell, and the infection rate which is related to the antiviral treatment effect and is time-varying. To validate the estimation methods, we verified the identifiability of the HIV viral dynamic model and performed simulation studies. We applied the proposed techniques to estimate the key HIV viral dynamic parameters for two individual AIDS patients treated with antiretroviral therapies. We demonstrate that HIV viral dynamics can be well characterized and quantified for individual patients. As a result, personalized treatment decision based on viral dynamic models is possible.
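A minimal sketch of the kind of target-cell-limited viral dynamics model referred to above, integrated with SciPy. The parameter values are textbook-style placeholders, and, unlike the model in the paper, the infection rate k is held constant here rather than treated as time-varying.

    import numpy as np
    from scipy.integrate import odeint

    def hiv_dynamics(y, t, lam, rho, k, delta, N, c):
        T, I, V = y
        dT = lam - rho * T - k * T * V   # uninfected target cells
        dI = k * T * V - delta * I       # productively infected cells
        dV = N * delta * I - c * V       # free virions
        return [dT, dI, dV]

    t = np.linspace(0, 100, 1000)                     # days
    y0 = [1e6, 0.0, 1e-3]                             # initial T, I, V
    sol = odeint(hiv_dynamics, y0, t, args=(1e4, 0.01, 2.4e-8, 0.7, 1000.0, 3.0))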
Bastardie, Francois
2014-01-01
Trawl survey data with high spatial and seasonal coverage were analysed using a variant of the Log Gaussian Cox Process (LGCP) statistical model to estimate unbiased relative fish densities. The model estimates correlations between observations according to time, space, and fish size and includes zero observations and over-dispersion. The model utilises the fact that the correlation between numbers of fish caught increases when the distance in space and time between the fish decreases, and the correlation between size groups in a haul increases when the difference in size decreases. Here the model is extended in two ways. Instead of assuming a natural scale size correlation, the model is further developed to allow for a transformed length scale. Furthermore, in the present application, the spatial- and size-dependent correlation between species was included. For cod (Gadus morhua) and whiting (Merlangius merlangus), a common structured size correlation was fitted, and a separable structure between the time and space-size correlation was found for each species, whereas more complex structures were required to describe the correlation between species (and space-size). The within-species time correlation is strong, whereas the correlations between the species are weaker over time but strong within the year. PMID:24911631
Application of Consider Covariance to the Extended Kalman Filter
NASA Technical Reports Server (NTRS)
Lundberg, John B.
1996-01-01
The extended Kalman filter (EKF) is the basis for many applications of filtering theory to real-time problems where estimates of the state of a dynamical system are to be computed based upon some set of observations. The form of the EKF may vary somewhat from one application to another, but the fundamental principles are typically unchanged among these various applications. As is the case in many filtering applications, models of the dynamical system (differential equations describing the state variables) and models of the relationship between the observations and the state variables are created. These models typically employ a set of constants whose values are established by means of theory or experimental procedure. Since the estimates of the state are formed assuming that the models are perfect, any modeling errors will affect the accuracy of the computed estimates. Note that the modeling errors may be errors of commission (errors in terms included in the model) or omission (errors in terms excluded from the model). Consequently, it becomes imperative when evaluating the performance of real-time filters to evaluate the effect of modeling errors on the estimates of the state.
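For reference, a generic extended Kalman filter predict/update cycle is sketched below. It shows only the standard EKF machinery that a consider-covariance analysis builds on (user-supplied state and measurement functions with their Jacobians); the consider-parameter bookkeeping itself is not implemented here.

    import numpy as np

    def ekf_step(x, P, z, f, F, h, H, Q, R):
        # Predict with the nonlinear state function f and its Jacobian F.
        x_pred = f(x)
        P_pred = F(x) @ P @ F(x).T + Q
        # Update with the measurement z, measurement function h and Jacobian H.
        Hk = H(x_pred)
        y = z - h(x_pred)                          # innovation
        S = Hk @ P_pred @ Hk.T + R                 # innovation covariance
        K = P_pred @ Hk.T @ np.linalg.inv(S)       # Kalman gain
        x_new = x_pred + K @ y
        P_new = (np.eye(len(x)) - K @ Hk) @ P_pred
        return x_new, P_new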
Smoothing spline ANOVA frailty model for recurrent event data.
Du, Pang; Jiang, Yihua; Wang, Yuedong
2011-12-01
Gap time hazard estimation is of particular interest in recurrent event data. This article proposes a fully nonparametric approach for estimating the gap time hazard. Smoothing spline analysis of variance (ANOVA) decompositions are used to model the log gap time hazard as a joint function of gap time and covariates, and general frailty is introduced to account for between-subject heterogeneity and within-subject correlation. We estimate the nonparametric gap time hazard function and parameters in the frailty distribution using a combination of the Newton-Raphson procedure, the stochastic approximation algorithm (SAA), and the Markov chain Monte Carlo (MCMC) method. The convergence of the algorithm is guaranteed by decreasing the step size of parameter update and/or increasing the MCMC sample size along iterations. Model selection procedure is also developed to identify negligible components in a functional ANOVA decomposition of the log gap time hazard. We evaluate the proposed methods with simulation studies and illustrate its use through the analysis of bladder tumor data. © 2011, The International Biometric Society.
A framework for scalable parameter estimation of gene circuit models using structural information.
Kuwahara, Hiroyuki; Fan, Ming; Wang, Suojin; Gao, Xin
2013-07-01
Systematic and scalable parameter estimation is a key to construct complex gene regulatory models and to ultimately facilitate an integrative systems biology approach to quantitatively understand the molecular mechanisms underpinning gene regulation. Here, we report a novel framework for efficient and scalable parameter estimation that focuses specifically on modeling of gene circuits. Exploiting the structure commonly found in gene circuit models, this framework decomposes a system of coupled rate equations into individual ones and efficiently integrates them separately to reconstruct the mean time evolution of the gene products. The accuracy of the parameter estimates is refined by iteratively increasing the accuracy of numerical integration using the model structure. As a case study, we applied our framework to four gene circuit models with complex dynamics based on three synthetic datasets and one time series microarray data set. We compared our framework to three state-of-the-art parameter estimation methods and found that our approach consistently generated higher quality parameter solutions efficiently. Although many general-purpose parameter estimation methods have been applied for modeling of gene circuits, our results suggest that the use of more tailored approaches that exploit domain-specific information may be a key to reverse engineering of complex biological systems. http://sfb.kaust.edu.sa/Pages/Software.aspx. Supplementary data are available at Bioinformatics online.
NASA Astrophysics Data System (ADS)
Chen, Liang; Zhao, Qile; Hu, Zhigang; Jiang, Xinyuan; Geng, Changjiang; Ge, Maorong; Shi, Chuang
2018-01-01
The large number of ambiguities in the un-differenced (UD) model lowers computational efficiency, which is not suitable for high-frequency real-time GNSS clock estimation, such as 1 Hz. A mixed differenced model fusing UD pseudo-range and epoch-differenced (ED) phase observations has been introduced into real-time clock estimation. In this contribution, we extend the mixed differenced model to realize high-frequency multi-GNSS real-time clock updating, and a rigorous comparison and analysis under the same conditions is performed to achieve the best real-time clock estimation performance, taking efficiency, accuracy, consistency and reliability into consideration. Based on the multi-GNSS real-time data streams provided by the Multi-GNSS Experiment (MGEX) and Wuhan University, a GPS + BeiDou + Galileo global real-time augmentation positioning prototype system is designed and constructed, including real-time precise orbit determination, real-time precise clock estimation, real-time Precise Point Positioning (RT-PPP) and real-time Standard Point Positioning (RT-SPP). Statistical analysis of the 6 h-predicted real-time orbits shows that the root mean square (RMS) error in the radial direction is about 1-5 cm for GPS, BeiDou MEO and Galileo satellites and about 10 cm for BeiDou GEO and IGSO satellites. Using the mixed differenced estimation model, the prototype system can realize highly efficient real-time satellite absolute clock estimation with no constant clock bias and can be used for high-frequency augmentation message updating (such as 1 Hz). The real-time augmentation message signal-in-space ranging error (SISRE), a comprehensive measure of orbit and clock accuracy that affects the users' actual positioning performance, is introduced to evaluate and analyze the performance of the GPS + BeiDou + Galileo global real-time augmentation positioning system. The real-time augmentation message SISRE is about 4-7 cm for GPS, about 10 cm for BeiDou IGSO/MEO and Galileo, and about 30 cm for BeiDou GEO satellites. The real-time positioning results show that GPS + BeiDou + Galileo RT-PPP, compared to GPS-only, can shorten convergence time by about 60%, improve positioning accuracy by about 30%, and obtain an average RMS of 4 cm in the horizontal and 6 cm in the vertical; additionally, RT-SPP in the prototype system can achieve positioning accuracy of about 1 m average RMS in the horizontal and 1.5-2 m in the vertical, improvements of 60% and 70%, respectively, over SPP based on the broadcast ephemeris.
Rüdt, Matthias; Gillet, Florian; Heege, Stefanie; Hitzler, Julian; Kalbfuss, Bernd; Guélat, Bertrand
2015-09-25
Application of model-based design is appealing to support the development of protein chromatography in the biopharmaceutical industry. However, the required efforts for parameter estimation are frequently perceived as time-consuming and expensive. In order to speed up this work, a new parameter estimation approach for modelling ion-exchange chromatography in linear conditions was developed. It aims at reducing the time and protein demand for the model calibration. The method combines the estimation of kinetic and thermodynamic parameters based on the simultaneous variation of the gradient slope and the residence time in a set of five linear gradient elutions. The parameters are estimated from a Yamamoto plot and a gradient-adjusted Van Deemter plot. The combined approach increases the information extracted per experiment compared to the individual methods. As a proof of concept, the combined approach was successfully applied to a monoclonal antibody on a cation-exchanger and to an Fc-fusion protein on an anion-exchange resin. The individual parameter estimations for the mAb confirmed that the new approach maintained the accuracy of the usual Yamamoto and Van Deemter plots. In the second case, offline size-exclusion chromatography was performed in order to estimate the thermodynamic parameters of an impurity (high molecular weight species) simultaneously with the main product. Finally, the parameters obtained from the combined approach were used in a lumped kinetic model to simulate the chromatography runs. The simulated chromatograms obtained for a wide range of gradient lengths and residence times showed only small deviations compared to the experimental data. Copyright © 2015 Elsevier B.V. All rights reserved.
Landy, Rebecca; Cheung, Li C; Schiffman, Mark; Gage, Julia C; Hyun, Noorie; Wentzensen, Nicolas; Kinney, Walter K; Castle, Philip E; Fetterman, Barbara; Poitras, Nancy E; Lorey, Thomas; Sasieni, Peter D; Katki, Hormuzd A
2018-06-01
Electronic health records (EHR) are increasingly used by epidemiologists studying disease following surveillance testing to provide evidence for screening intervals and referral guidelines. Although cost-effective, undiagnosed prevalent disease and interval censoring (in which asymptomatic disease is only observed at the time of testing) raise substantial analytic issues when estimating risk that cannot be addressed using Kaplan-Meier methods. Based on our experience analysing EHR from cervical cancer screening, we previously proposed the logistic-Weibull model to address these issues. Here we demonstrate how the choice of statistical method can impact risk estimates. We use observed data on 41,067 women in the cervical cancer screening program at Kaiser Permanente Northern California, 2003-2013, as well as simulations to evaluate the ability of different methods (Kaplan-Meier, Turnbull, Weibull and logistic-Weibull) to accurately estimate risk within a screening program. Cumulative risk estimates from the statistical methods varied considerably, with the largest differences occurring for prevalent disease risk when baseline disease ascertainment was random but incomplete. Kaplan-Meier underestimated risk at earlier times and overestimated risk at later times in the presence of interval censoring or undiagnosed prevalent disease. Turnbull performed well, though it was inefficient and not smooth. The logistic-Weibull model performed well, except when event times did not follow a Weibull distribution. We have demonstrated that methods for right-censored data, such as Kaplan-Meier, result in biased estimates of disease risks when applied to interval-censored data, such as screening programs using EHR data. The logistic-Weibull model is attractive, but the model fit must be checked against Turnbull non-parametric risk estimates. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
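A sketch of the logistic-Weibull family discussed above, under the assumption that cumulative risk is a mixture of a logistic component for undiagnosed prevalent disease at baseline and a Weibull component for incident disease. The parameterisation is illustrative; the actual model is fitted to interval-censored data rather than evaluated directly like this.

    import numpy as np

    def logistic_weibull_risk(t, logit_p, shape, scale):
        # Prevalent-disease probability from the logistic part plus
        # Weibull cumulative incidence for the disease-free fraction.
        p = 1.0 / (1.0 + np.exp(-logit_p))
        incident = 1.0 - np.exp(-(np.asarray(t, dtype=float) / scale) ** shape)
        return p + (1.0 - p) * incident

    risk_5y = logistic_weibull_risk(5.0, logit_p=-4.0, shape=1.2, scale=40.0)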
Monitoring and Modeling Performance of Communications in Computational Grids
NASA Technical Reports Server (NTRS)
Frumkin, Michael A.; Le, Thuy T.
2003-01-01
Computational grids may include many machines located at a number of sites. For efficient use of the grid, we need to be able to estimate the time it takes to communicate data between the machines. For dynamic distributed grids it is unrealistic to know the exact parameters of the communication hardware and the current communication traffic, so we should rely on a model of network performance to estimate the message delivery time. Our approach to constructing such a model is based on observing message delivery times over various message sizes and time scales. We record these observations in a database and use them to build a model of the message delivery time. Our experiments show the presence of multiple bands in the logarithm of the message delivery times. These multiple bands represent the multiple paths messages travel between the grid machines and are incorporated in our multiband model.
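A minimal single-band version of the delivery-time model described above: observed message times are fitted to latency + size/bandwidth by least squares. The probe sizes and times below are hypothetical, and the paper's multiband model would first separate the observations into bands before fitting each one.

    import numpy as np

    sizes = np.array([1e3, 1e4, 1e5, 1e6, 1e7])               # bytes (hypothetical probes)
    times = np.array([0.8, 1.1, 3.9, 31.0, 305.0]) * 1e-3     # seconds (hypothetical)

    # delivery_time = latency + size / bandwidth, solved by linear least squares
    A = np.column_stack([np.ones_like(sizes), sizes])
    latency, inv_bandwidth = np.linalg.lstsq(A, times, rcond=None)[0]
    print(f"latency ~ {latency * 1e3:.2f} ms, bandwidth ~ {1.0 / inv_bandwidth / 1e6:.1f} MB/s")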
Estimating Surgical Procedure Times Using Anesthesia Billing Data and Operating Room Records.
Burgette, Lane F; Mulcahy, Andrew W; Mehrotra, Ateev; Ruder, Teague; Wynn, Barbara O
2017-02-01
The median time required to perform a surgical procedure is important in determining payment under Medicare's physician fee schedule. Prior studies have demonstrated that the current methodology of using physician surveys to determine surgical times results in overstated times. To measure surgical times more accurately, we developed and validated a methodology using available data from anesthesia billing data and operating room (OR) records. We estimated surgical times using Medicare 2011 anesthesia claims and New York Statewide Planning and Research Cooperative System 2011 OR times. Estimated times were validated using data from the National Surgical Quality Improvement Program. We compared our time estimates to those used by Medicare in the fee schedule. We estimate surgical times via piecewise linear median regression models. Using 3.0 million observations of anesthesia and OR times, we estimated surgical time for 921 procedures. Correlation between these time estimates and directly measured surgical time from the validation database was 0.98. Our estimates of surgical time were shorter than the Medicare fee schedule estimates for 78 percent of procedures. Anesthesia and OR times can be used to measure surgical time and thereby improve the payment for surgical procedures in the Medicare fee schedule. © Health Research and Educational Trust.
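As an illustration of median regression of the kind used above (though not the piecewise-linear specification from the study), the following fits a 0.5-quantile regression of operating-room time on anesthesia time with statsmodels; the tiny data frame is hypothetical.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical cases: anesthesia minutes and corresponding OR minutes
    df = pd.DataFrame({"anesthesia_min": [90, 120, 75, 150, 60, 135],
                       "or_min":         [70, 100, 55, 125, 45, 110]})

    # Median (q = 0.5) regression of OR time on anesthesia time
    fit = smf.quantreg("or_min ~ anesthesia_min", df).fit(q=0.5)
    print(fit.params)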
The effects of resonances on time delay estimation for water leak detection in plastic pipes
NASA Astrophysics Data System (ADS)
Almeida, Fabrício C. L.; Brennan, Michael J.; Joseph, Phillip F.; Gao, Yan; Paschoalini, Amarildo T.
2018-04-01
In the use of acoustic correlation methods for water leak detection, sensors are placed at pipe access points either side of a suspected leak, and the peak in the cross-correlation function of the measured signals gives the time difference (delay) between the arrival times of the leak noise at the sensors. Combining this information with the speed at which the leak noise propagates along the pipe, gives an estimate for the location of the leak with respect to one of the measurement positions. It is possible for the structural dynamics of the pipe system to corrupt the time delay estimate, which results in the leak being incorrectly located. In this paper, data from test-rigs in the United Kingdom and Canada are used to demonstrate this phenomenon, and analytical models of resonators are coupled with a pipe model to replicate the experimental results. The model is then used to investigate which of the two commonly used correlation algorithms, the Basic Cross-Correlation (BCC) function or the Phase Transform (PHAT), is more robust to the undesirable structural dynamics of the pipe system. It is found that time delay estimation is highly sensitive to the frequency bandwidth over which the analysis is conducted. Moreover, it is found that the PHAT is particularly sensitive to the presence of resonances and can give an incorrect time delay estimate, whereas the BCC function is found to be much more robust, giving a consistently accurate time delay estimate for a range of dynamic conditions.
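A compact sketch of the two correlators compared above: the generalised cross-correlation computed via the FFT, where phat=False gives the basic cross-correlation (BCC) and phat=True applies the phase transform (PHAT) weighting. The zero-padding and lag bookkeeping follow a common convention and are not taken from the paper.

    import numpy as np

    def gcc_delay(sig, ref, fs, phat=False):
        # Returns the delay (seconds) of `sig` relative to `ref`:
        # positive if `sig` arrives later than `ref`.
        n = len(sig) + len(ref)
        SIG = np.fft.rfft(sig, n=n)
        REF = np.fft.rfft(ref, n=n)
        R = SIG * np.conj(REF)
        if phat:
            R /= np.abs(R) + 1e-12          # whiten: keep only phase information
        cc = np.fft.irfft(R, n=n)
        max_shift = n // 2
        cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
        shift = np.argmax(np.abs(cc)) - max_shift
        return shift / float(fs)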
NASA Astrophysics Data System (ADS)
Wei, Jingwen; Dong, Guangzhong; Chen, Zonghai
2017-10-01
With the rapid development of battery-powered electric vehicles, the lithium-ion battery plays a critical role in the reliability of the vehicle system. In order to provide timely management and protection for battery systems, it is necessary to develop a reliable battery model and accurate battery parameter estimation to describe battery dynamic behaviors. Therefore, this paper focuses on an on-board adaptive model for state-of-charge (SOC) estimation of lithium-ion batteries. Firstly, a first-order equivalent circuit battery model is employed to describe battery dynamic characteristics. Then, the recursive least squares algorithm and an off-line identification method are used to provide good initial values of the model parameters to ensure filter stability and reduce the convergence time. Thirdly, an extended Kalman filter (EKF) is applied to estimate battery SOC and model parameters on-line. Because the EKF is essentially a first-order Taylor approximation of the battery model and thus contains inevitable model errors, a proportional-integral-based error adjustment technique is employed to improve the performance of the EKF and correct the model parameters. Finally, experimental results on lithium-ion batteries indicate that the proposed EKF with proportional-integral-based error adjustment can provide a robust and accurate battery model and on-line parameter estimation.
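A minimal discrete-time sketch of the first-order equivalent-circuit (Thevenin) model underlying the estimator described above: coulomb counting for SOC, one RC branch for polarisation, and a terminal voltage built from an assumed open-circuit-voltage curve ocv_fn. The EKF wrapper and the proportional-integral error adjustment are not shown, and all names and values are illustrative.

    import numpy as np

    def battery_step(soc, v_rc, current, dt, capacity_As, r0, r1, c1, ocv_fn):
        # One model step with discharge current taken as positive.
        tau = r1 * c1
        soc_next = soc - current * dt / capacity_As
        v_rc_next = np.exp(-dt / tau) * v_rc + r1 * (1.0 - np.exp(-dt / tau)) * current
        v_terminal = ocv_fn(soc_next) - v_rc_next - r0 * current
        return soc_next, v_rc_next, v_terminal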
NASA Astrophysics Data System (ADS)
Yang, Shuangming; Deng, Bin; Wang, Jiang; Li, Huiyan; Liu, Chen; Fietkiewicz, Chris; Loparo, Kenneth A.
2017-01-01
Real-time estimation of the dynamical characteristics of thalamocortical cells, such as the dynamics of ion channels and membrane potentials, is useful and essential in the study of the thalamus in the Parkinsonian state. However, measuring the dynamical properties of ion channels is extremely challenging experimentally and even impossible in clinical applications. This paper presents and evaluates a real-time estimation system for thalamocortical hidden properties. For the sake of efficiency, we use a field programmable gate array (FPGA) for strictly hardware-based computation and algorithm optimization. In the proposed system, an FPGA-based unscented Kalman filter is implemented on a conductance-based TC neuron model. Since the complexity of the TC neuron model restrains its hardware implementation in a parallel structure, a cost-efficient model is proposed to reduce the resource cost while retaining the relevant ionic dynamics. Experimental results demonstrate the real-time capability to estimate thalamocortical hidden properties with high precision under both normal and Parkinsonian states. Beyond estimating the hidden properties of the thalamus and exploring the mechanism of the Parkinsonian state, the proposed method can be useful in the dynamic clamp technique of electrophysiological experiments, neural control engineering and brain-machine interface studies.
Ren, Yan; Yang, Min; Li, Qian; Pan, Jay; Chen, Fei; Li, Xiaosong; Meng, Qun
2017-01-01
Objectives: To introduce multilevel repeated measures (RM) models and compare them with multilevel difference-in-differences (DID) models in assessing the linear relationship between the length of the policy intervention period and healthcare outcomes (dose–response effect) for data from a stepped-wedge design with a hierarchical structure. Design: The implementation of national essential medicine policy (NEMP) in China was a stepped-wedge-like design of five time points with a hierarchical structure. Using one key healthcare outcome from the national NEMP surveillance data as an example, we illustrate how a series of multilevel DID models and one multilevel RM model can be fitted to answer some research questions on policy effects. Setting: Routinely and annually collected national data on China from 2008 to 2012. Participants: 34 506 primary healthcare facilities in 2675 counties of 31 provinces. Outcome measures: Agreement and differences in estimates of dose–response effect and variation in such effect between the two methods on the logarithm-transformed total number of outpatient visits per facility per year (LG-OPV). Results: The estimated dose–response effect was approximately 0.015 according to four multilevel DID models and precisely 0.012 from one multilevel RM model. Both types of model estimated an increase in LG-OPV by 2.55 times from 2009 to 2012, but 2–4.3 times larger SEs of those estimates were found by the multilevel DID models. Similar estimates of mean effects of covariates and random effects of the average LG-OPV among all levels in the example dataset were obtained by both types of model. Significant variances in the dose–response among provinces, counties and facilities were estimated, and the 'lowest' or 'highest' units by their dose–response effects were pinpointed only by the multilevel RM model. Conclusions: For examining dose–response effect based on data from multiple time points with hierarchical structure and the stepped wedge-like designs, multilevel RM models are more efficient, convenient and informative than the multilevel DID models. PMID:28399510
Numerical modeling of space-time wave extremes using WAVEWATCH III
NASA Astrophysics Data System (ADS)
Barbariol, Francesco; Alves, Jose-Henrique G. M.; Benetazzo, Alvise; Bergamasco, Filippo; Bertotti, Luciana; Carniel, Sandro; Cavaleri, Luigi; Chao, Yung Y.; Chawla, Arun; Ricchi, Antonio; Sclavo, Mauro; Tolman, Hendrik
2017-04-01
A novel implementation of parameters estimating the space-time wave extremes within the spectral wave model WAVEWATCH III (WW3) is presented. The new output parameters, available in WW3 version 5.16, rely on the theoretical model of Fedele (J Phys Oceanogr 42(9):1601-1615, 2012) extended by Benetazzo et al. (J Phys Oceanogr 45(9):2261-2275, 2015) to estimate the maximum second-order nonlinear crest height over a given space-time region. In order to assess the wave height associated to the maximum crest height and the maximum wave height (generally different in a broad-band stormy sea state), the linear quasi-determinism theory of Boccotti (2000) is considered. The new WW3 implementation is tested by simulating sea states and space-time extremes over the Mediterranean Sea (forced by the wind fields produced by the COSMO-ME atmospheric model). Model simulations are compared to space-time wave maxima observed on March 10th, 2014, in the northern Adriatic Sea (Italy), by a stereo camera system installed on-board the "Acqua Alta" oceanographic tower. Results show that modeled space-time extremes are in general agreement with observations. Differences are mostly ascribed to the accuracy of the wind forcing and, to a lesser extent, to the approximations introduced in the space-time extremes parameterizations. Model estimates are expected to be even more accurate over areas larger than the mean wavelength (for instance, the model grid size).
Validation of travel times to hospital estimated by GIS.
Haynes, Robin; Jones, Andrew P; Sauerzapf, Violet; Zhao, Hongxin
2006-09-19
An increasing number of studies use GIS estimates of car travel times to health services, without presenting any evidence that the estimates are representative of real travel times. This investigation compared GIS estimates of travel times with the actual times reported by a sample of 475 cancer patients who had travelled by car to attend clinics at eight hospitals in the North of England. Car travel times were estimated by GIS using the shortest road route between home address and hospital and average speed assumptions. These estimates were compared with reported journey times and straight line distances using graphical, correlation and regression techniques. There was a moderately strong association between reported times and estimated travel times (r = 0.856). Reported travel times were similarly related to straight line distances. Altogether, 50% of travel time estimates were within five minutes of the time reported by respondents, 77% were within ten minutes and 90% were within fifteen minutes. The distribution of over- and under-estimates was symmetrical, but estimated times tended to be longer than reported times with increasing distance from hospital. Almost all respondents rounded their travel time to the nearest five or ten minutes. The reason for many cases of reported journey times exceeding the estimated times was confirmed by respondents' comments as traffic congestion. GIS estimates of car travel times were moderately close approximations to reported times. GIS travel time estimates may be superior to reported travel times for modelling purposes because reported times contain errors and can reflect unusual circumstances. Comparison with reported times did not suggest that estimated times were a more sensitive measure than straight line distance.
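The agreement statistics reported above reduce to simple arithmetic; a toy sketch with hypothetical journey times (not the study data) computes the correlation and the share of GIS estimates within five and ten minutes of the reported times.

    import numpy as np

    reported = np.array([20, 35, 15, 50, 40, 25], dtype=float)   # minutes (hypothetical)
    estimated = np.array([22, 30, 18, 55, 38, 28], dtype=float)  # minutes (hypothetical)

    r = np.corrcoef(reported, estimated)[0, 1]
    within5 = np.mean(np.abs(estimated - reported) <= 5) * 100
    within10 = np.mean(np.abs(estimated - reported) <= 10) * 100
    print(f"r = {r:.3f}, within 5 min: {within5:.0f}%, within 10 min: {within10:.0f}%")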
NASA Astrophysics Data System (ADS)
Leroux, Romain; Chatellier, Ludovic; David, Laurent
2018-01-01
This article is devoted to the estimation of time-resolved particle image velocimetry (TR-PIV) flow fields using time-resolved point measurements of a voltage signal obtained by hot-film anemometry. A multiple linear regression model is first defined to map the TR-PIV flow fields onto the voltage signal. Due to the high temporal resolution of the signal acquired by the hot-film sensor, the estimates of the TR-PIV flow fields are obtained with a multiple linear regression method called orthonormalized partial least squares regression (OPLSR). Subsequently, this model is incorporated as the observation equation in an ensemble Kalman filter (EnKF) applied on a proper orthogonal decomposition reduced-order model to stabilize it while reducing the effects of the hot-film sensor noise. This method is assessed for the reconstruction of the flow around a NACA0012 airfoil at a Reynolds number of 1000 and an angle of attack of 20°. Comparisons with multi-time-delay modified linear stochastic estimation show that both the OPLSR and the EnKF combined with OPLSR are more accurate, as they produce a much lower relative estimation error and provide a faithful reconstruction of the time evolution of the velocity flow fields.
Optimal time points sampling in pathway modelling.
Hu, Shiyan
2004-01-01
Modelling cellular dynamics based on experimental data is at the heart of systems biology. Considerable progress has been made in dynamic pathway modelling as well as the related parameter estimation. However, few studies consider the issue of optimal sampling time selection for parameter estimation. Time course experiments in molecular biology rarely produce large and accurate data sets, and the experiments involved are usually time consuming and expensive. Therefore, approximating parameters for models with only a few available sampling points is of significant practical value. For signal transduction, the sampling intervals are usually not evenly distributed and are based on heuristics. In this paper, we investigate an approach to guide the process of selecting time points in an optimal way to minimize the variance of parameter estimates. In the method, we first formulate the problem as a nonlinear constrained optimization problem by maximum likelihood estimation. We then modify and apply a quantum-inspired evolutionary algorithm, which combines the advantages of both quantum computing and evolutionary computing, to solve the optimization problem. The new algorithm does not suffer from the difficulty of selecting good initial values or from getting stuck in local optima, problems that usually accompany conventional numerical optimization techniques. The simulation results indicate the soundness of the new method.
SiGN-SSM: open source parallel software for estimating gene networks with state space models.
Tamada, Yoshinori; Yamaguchi, Rui; Imoto, Seiya; Hirose, Osamu; Yoshida, Ryo; Nagasaki, Masao; Miyano, Satoru
2011-04-15
SiGN-SSM is open-source gene network estimation software able to run in parallel on PCs and massively parallel supercomputers. The software estimates a state space model (SSM), that is, a statistical dynamic model suitable for analyzing short and/or replicated time series gene expression profiles. SiGN-SSM implements a novel parameter constraint effective in stabilizing the estimated models. Also, by using a supercomputer, it is able to determine the gene network structure by a statistical permutation test in a practical time. SiGN-SSM is applicable not only to analyzing temporal regulatory dependencies between genes, but also to extracting differentially regulated genes from time series expression profiles. SiGN-SSM is distributed under the GNU Affero General Public Licence (GNU AGPL) version 3 and can be downloaded at http://sign.hgc.jp/signssm/. Pre-compiled binaries for some architectures are available in addition to the source code. Pre-installed binaries are also available on the Human Genome Center supercomputer system. The online manual and the supplementary information for SiGN-SSM are available on our web site. tamada@ims.u-tokyo.ac.jp.
Modelling survival: exposure pattern, species sensitivity and uncertainty.
Ashauer, Roman; Albert, Carlo; Augustine, Starrlight; Cedergreen, Nina; Charles, Sandrine; Ducrot, Virginie; Focks, Andreas; Gabsi, Faten; Gergs, André; Goussen, Benoit; Jager, Tjalling; Kramer, Nynke I; Nyman, Anna-Maija; Poulsen, Veronique; Reichenberger, Stefan; Schäfer, Ralf B; Van den Brink, Paul J; Veltman, Karin; Vogel, Sören; Zimmer, Elke I; Preuss, Thomas G
2016-07-06
The General Unified Threshold model for Survival (GUTS) integrates previously published toxicokinetic-toxicodynamic models and estimates survival with explicitly defined assumptions. Importantly, GUTS accounts for time-variable exposure to the stressor. We performed three studies to test the ability of GUTS to predict survival of aquatic organisms across different pesticide exposure patterns, time scales and species. Firstly, using synthetic data, we identified experimental data requirements which allow for the estimation of all parameters of the GUTS proper model. Secondly, we assessed how well GUTS, calibrated with short-term survival data of Gammarus pulex exposed to four pesticides, can forecast effects of longer-term pulsed exposures. Thirdly, we tested the ability of GUTS to estimate 14-day median effect concentrations of malathion for a range of species and use these estimates to build species sensitivity distributions for different exposure patterns. We find that GUTS adequately predicts survival across exposure patterns that vary over time. When toxicity is assessed for time-variable concentrations species may differ in their responses depending on the exposure profile. This can result in different species sensitivity rankings and safe levels. The interplay of exposure pattern and species sensitivity deserves systematic investigation in order to better understand how organisms respond to stress, including humans.
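A simplified sketch of the stochastic-death (SD) flavour of GUTS under a time-variable exposure profile: scaled damage follows first-order kinetics toward the exposure concentration, and the hazard rate is proportional to damage above a threshold plus a background hazard. This is a didactic reduction with an explicit Euler step and illustrative parameter names, not the GUTS proper model calibrated in the study.

    import numpy as np

    def guts_sd_survival(conc, dt, k_d, k_k, z, h_b):
        # conc: exposure concentration at each time step (can be pulsed).
        damage, cum_hazard, surv = 0.0, 0.0, []
        for c in conc:
            damage += k_d * (c - damage) * dt          # dD/dt = k_d * (C(t) - D)
            hazard = k_k * max(damage - z, 0.0) + h_b  # threshold plus background hazard
            cum_hazard += hazard * dt
            surv.append(np.exp(-cum_hazard))
        return np.array(surv)

    # Two exposure pulses separated by a recovery interval (all units illustrative)
    pulses = np.concatenate([np.full(100, 2.0), np.zeros(200), np.full(100, 2.0)])
    s = guts_sd_survival(pulses, dt=0.1, k_d=0.3, k_k=0.5, z=0.5, h_b=0.001)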
Application of troposphere model from NWP and GNSS data into real-time precise positioning
NASA Astrophysics Data System (ADS)
Wilgan, Karina; Hadas, Tomasz; Kazmierski, Kamil; Rohm, Witold; Bosy, Jaroslaw
2016-04-01
The tropospheric delay empirical models are usually functions of meteorological parameters (temperature, pressure and humidity). The application of standard atmosphere parameters or global models, such as GPT (global pressure/temperature) model or UNB3 (University of New Brunswick, version 3) model, may not be sufficient, especially for positioning in non-standard weather conditions. The possible solution is to use regional troposphere models based on real-time or near-real time measurements. We implement a regional troposphere model into the PPP (Precise Point Positioning) software GNSS-WARP (Wroclaw Algorithms for Real-time Positioning) developed at Wroclaw University of Environmental and Life Sciences. The software is capable of processing static and kinematic multi-GNSS data in real-time and post-processing mode and takes advantage of final IGS (International GNSS Service) products as well as IGS RTS (Real-Time Service) products. A shortcoming of PPP technique is the time required for the solution to converge. One of the reasons is the high correlation among the estimated parameters: troposphere delay, receiver clock offset and receiver height. To efficiently decorrelate these parameters, a significant change in satellite geometry is required. Alternative solution is to introduce the external high-quality regional troposphere delay model to constrain troposphere estimates. The proposed model consists of zenith total delays (ZTD) and mapping functions calculated from meteorological parameters from Numerical Weather Prediction model WRF (Weather Research and Forecasting) and ZTDs from ground-based GNSS stations using the least-squares collocation software COMEDIE (Collocation of Meteorological Data for Interpretation and Estimation of Tropospheric Pathdelays) developed at ETH Zurich.
Green, Christopher T.; Jurgens, Bryant; Zhang, Yong; Starn, Jeffrey; Singleton, Michael J.; Esser, Bradley K.
2016-01-01
Rates of oxygen and nitrate reduction are key factors in determining the chemical evolution of groundwater. Little is known about how these rates vary and covary in regional groundwater settings, as few studies have focused on regional datasets with multiple tracers and methods of analysis that account for effects of mixed residence times on apparent reaction rates. This study provides insight into the characteristics of residence times and rates of O2 reduction and denitrification (NO3− reduction) by comparing reaction rates using multi-model analytical residence time distributions (RTDs) applied to a data set of atmospheric tracers of groundwater age and geochemical data from 141 well samples in the Central Eastern San Joaquin Valley, CA. The RTD approach accounts for mixtures of residence times in a single sample to provide estimates of in-situ rates. Tracers included SF6, CFCs, 3H, He from 3H (tritiogenic He),14C, and terrigenic He. Parameter estimation and multi-model averaging were used to establish RTDs with lower error variances than those produced by individual RTD models. The set of multi-model RTDs was used in combination with NO3− and dissolved gas data to estimate zero order and first order rates of O2 reduction and denitrification. Results indicated that O2 reduction and denitrification rates followed approximately log-normal distributions. Rates of O2 and NO3− reduction were correlated and, on an electron milliequivalent basis, denitrification rates tended to exceed O2 reduction rates. Estimated historical NO3− trends were similar to historical measurements. Results show that the multi-model approach can improve estimation of age distributions, and that relatively easily measured O2 rates can provide information about trends in denitrification rates, which are more difficult to estimate.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Green, Christopher T.; Jurgens, Bryant C.; Zhang, Yong
2016-05-14
ERIC Educational Resources Information Center
Gustafson, S. C.; Costello, C. S.; Like, E. C.; Pierce, S. J.; Shenoy, K. N.
2009-01-01
Bayesian estimation of a threshold time (hereafter simply threshold) for the receipt of impulse signals is accomplished given the following: 1) data, consisting of the number of impulses received in a time interval from zero to one and the time of the largest time impulse; 2) a model, consisting of a uniform probability density of impulse time…
Estimating Dynamical Systems: Derivative Estimation Hints from Sir Ronald A. Fisher
ERIC Educational Resources Information Center
Deboeck, Pascal R.
2010-01-01
The fitting of dynamical systems to psychological data offers the promise of addressing new and innovative questions about how people change over time. One method of fitting dynamical systems is to estimate the derivatives of a time series and then examine the relationships between derivatives using a differential equation model. One common…
Continuous-time interval model identification of blood glucose dynamics for type 1 diabetes
NASA Astrophysics Data System (ADS)
Kirchsteiger, Harald; Johansson, Rolf; Renard, Eric; del Re, Luigi
2014-07-01
While good physiological models of glucose metabolism in type 1 diabetic patients are well known, their parameterisation is difficult. The high intra-patient variability observed is a further major obstacle. This holds for data-based models too, so that no good patient-specific models are available. Against this background, this paper proposes the use of interval models to cover the different metabolic conditions. The control-oriented models contain a carbohydrate sensitivity factor and an insulin sensitivity factor that can be used directly in insulin bolus calculators. The available clinical measurements were sampled on an irregular schedule, which prompts the use of continuous-time identification, also for the direct estimation of the clinically interpretable factors mentioned above. An identification method is derived and applied to real data from 28 diabetic patients. Model estimation was done on a clinical data set, whereas the validation results shown were obtained on an out-of-clinic, everyday-life data set. The results show that the interval model approach allows a much more regular estimation of the parameters and avoids physiologically incompatible parameter estimates.
Statistical tools for transgene copy number estimation based on real-time PCR.
Yuan, Joshua S; Burris, Jason; Stewart, Nathan R; Mentewab, Ayalew; Stewart, C Neal
2007-11-01
As compared with traditional transgene copy number detection technologies such as Southern blot analysis, real-time PCR provides a fast, inexpensive and high-throughput alternative. However, real-time PCR-based transgene copy number estimation tends to be ambiguous and subjective, stemming from the lack of proper statistical analysis and data quality control needed to render a reliable copy number estimate with a prediction value. Despite recent progress in the statistical analysis of real-time PCR, few publications have integrated these advancements into real-time PCR-based transgene copy number determination. Three experimental designs and four statistical models with integrated data quality control are presented. In the first design, external calibration curves are established for the transgene based on serially diluted templates, and the Ct numbers from a control transgenic event and a putative transgenic event are compared to derive the transgene copy number or zygosity estimate. Simple linear regression and two-group t-test procedures were combined to model the data from this design. In the second design, standard curves are generated for both an internal reference gene and the transgene, and the copy number of the transgene is compared with that of the internal reference gene; multiple regression and ANOVA models can be employed to analyze the data and perform quality control for this approach. In the third design, the transgene copy number is compared with the reference gene without a standard curve, based directly on the fluorescence data; two different multiple regression models were proposed, corresponding to two different ways of integrating amplification efficiency. Our results highlight the importance of proper statistical treatment and integrated quality control in real-time PCR-based transgene copy number determination. These methods make real-time PCR-based transgene copy number estimates more reliable and precise, and proper confidence intervals are necessary for unambiguous prediction of transgene copy number. The four statistical methods are compared for their advantages and disadvantages. Moreover, they can also be applied to other real-time PCR-based quantification assays, including transfection efficiency analysis and pathogen quantification.
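As an illustration of the first design described above, the following sketch fits an external standard curve by simple linear regression and compares Ct values between a single-copy control event and a putative event with a two-group t-test. All numbers are hypothetical illustration values, the sketch assumes SciPy, and it is not the authors' published code.

```python
# Minimal sketch of the standard-curve / t-test design (hypothetical data).
import numpy as np
from scipy import stats

# Serial dilution standard curve: log10(template amount) vs. Ct for the transgene assay
log10_template = np.array([1, 2, 3, 4, 5], dtype=float)
ct_standard = np.array([33.1, 29.8, 26.4, 23.1, 19.7])

slope, intercept, r, p, se = stats.linregress(log10_template, ct_standard)
efficiency = 10 ** (-1.0 / slope) - 1.0   # amplification efficiency implied by the slope

# Replicate Ct values for a known single-copy control event and a putative event
ct_control = np.array([27.9, 28.0, 28.1])
ct_putative = np.array([26.8, 26.9, 27.0])

# Two-group t-test on the Ct values, as in the simple regression / t-test design
t_stat, p_value = stats.ttest_ind(ct_control, ct_putative)

# Relative copy number of the putative event versus the single-copy control
delta_ct = ct_control.mean() - ct_putative.mean()
copy_number = (1.0 + efficiency) ** delta_ct

print(f"efficiency={efficiency:.2f}, t={t_stat:.2f}, p={p_value:.3f}, "
      f"estimated copies={copy_number:.2f}")
```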
Alteration of Box-Jenkins methodology by implementing genetic algorithm method
NASA Astrophysics Data System (ADS)
Ismail, Zuhaimy; Maarof, Mohd Zulariffin Md; Fadzli, Mohammad
2015-02-01
A time series is a set of values sequentially observed through time. The Box-Jenkins methodology is a systematic method of identifying, fitting, checking and using integrated autoregressive moving average time series models for forecasting. The Box-Jenkins method is appropriate for medium to long time series (at least 50 observations). When modeling such series, the difficulty lies in choosing the correct model order at the identification stage and in obtaining the right parameter estimates. This paper presents the development of a Genetic Algorithm heuristic to solve the identification and estimation problems in the Box-Jenkins methodology. Data on international tourist arrivals to Malaysia were used to illustrate the effectiveness of the proposed method. The forecasts generated by the proposed model outperformed the single traditional Box-Jenkins model.
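The sketch below shows, under stated assumptions, how a small genetic algorithm could search ARIMA(p, d, q) orders by AIC in the spirit of the hybrid approach above. The synthetic series, population size, and mutation scheme are illustrative assumptions; the tourist-arrival data and the authors' GA settings are not reproduced.

```python
# Hypothetical GA-based ARIMA order selection using statsmodels.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
# Synthetic AR(2) series standing in for the arrivals data
y = np.zeros(120)
for t in range(2, 120):
    y[t] = 0.6 * y[t - 1] - 0.3 * y[t - 2] + rng.normal()

def fitness(order):
    """AIC of an ARIMA fit; infinite (worst) score if the fit fails."""
    try:
        return ARIMA(y, order=order).fit().aic
    except Exception:
        return np.inf

def random_order():
    return (int(rng.integers(0, 5)), int(rng.integers(0, 2)), int(rng.integers(0, 5)))

def mutate(order):
    new = list(order)
    i = int(rng.integers(3))
    new[i] = int(np.clip(new[i] + rng.choice([-1, 1]), 0, 4))
    return tuple(new)

# Simple elitist GA: keep the best half each generation, refill by mutation
population = [random_order() for _ in range(8)]
for generation in range(10):
    survivors = sorted(population, key=fitness)[:4]
    children = [mutate(survivors[int(rng.integers(4))]) for _ in range(4)]
    population = survivors + children

print("selected (p, d, q):", min(population, key=fitness))
```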
NASA Astrophysics Data System (ADS)
Abdelazeem, Mohamed; Çelik, Rahmi N.; El-Rabbany, Ahmed
2016-04-01
The international global navigation satellite system (GNSS) real-time service (IGS-RTS) products have been used extensively for real-time precise point positioning and ionosphere modeling applications. In this study, we develop a regional model for real-time vertical total electron content (RT-VTEC) and differential code bias (RT-DCB) estimation over Europe using the IGS-RTS satellite orbit and clock products. The developed model has a spatial resolution of 1°×1° and a temporal resolution of 15 minutes. GPS observations from a regional network consisting of 60 IGS and EUREF reference stations are processed in zero-difference mode using the Bernese 5.2 software package in order to extract the geometry-free linear combination of the smoothed code observations. A spherical harmonic expansion is used to model the VTEC and the receiver and satellite DCBs. To validate the proposed model, the RT-VTEC values are computed and compared with the final IGS global ionospheric map (IGS-GIM) counterparts on three successive days under high solar activity, including one day of extreme geomagnetic activity. The real-time satellite DCBs are also estimated and compared with the IGS-GIM counterparts. Moreover, the real-time receiver DCBs for six IGS stations, located at different latitudes and equipped with different receiver types, are obtained and compared with the IGS-GIM counterparts. The findings reveal that the estimated RT-VTEC values agree with the IGS-GIM counterparts, with root-mean-square error (RMSE) values of less than 2 TEC units. In addition, the RMSEs of the satellite and receiver DCBs are less than 0.85 ns and 0.65 ns, respectively, in comparison with the IGS-GIM.
Estimating the waiting time of multi-priority emergency patients with downstream blocking.
Lin, Di; Patrick, Jonathan; Labeau, Fabrice
2014-03-01
To characterize the coupling effect between patient flow to access the emergency department (ED) and that to access the inpatient unit (IU), we develop a model with two connected queues: one upstream queue for the patient flow to access the ED and one downstream queue for the patient flow to access the IU. Building on this patient flow model, we employ queueing theory to estimate the average waiting time across patients. Using priority-specific wait time targets, we further estimate the necessary number of ED and IU resources. Finally, we investigate how an alternative way of accessing the ED (Fast Track) impacts the average waiting time of patients as well as the necessary number of ED/IU resources. This model, as well as the analysis of patient flow, can help the designer or manager of a hospital make decisions on the allocation of ED/IU resources.
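The following is a minimal sketch of the basic queueing ingredient behind such a model: the mean waiting time in an M/M/c queue (Erlang C), which could stand in for the upstream ED queue. The arrival rate, service rate, and number of servers are hypothetical, and the paper's coupled ED-IU model with priorities and downstream blocking is not reproduced.

```python
# Erlang C mean wait for an M/M/c queue (illustrative values only).
import math

def erlang_c_wait(arrival_rate, service_rate, servers):
    """Mean waiting time in queue for an M/M/c system (requires utilisation < 1)."""
    a = arrival_rate / service_rate            # offered load
    rho = a / servers
    if rho >= 1:
        raise ValueError("unstable queue: utilisation must be below 1")
    summation = sum(a ** k / math.factorial(k) for k in range(servers))
    p_wait = (a ** servers / math.factorial(servers)) / (
        (1 - rho) * summation + a ** servers / math.factorial(servers))
    return p_wait / (servers * service_rate - arrival_rate)

# Example: 6 patients/hour, 20-minute mean treatment time, 3 ED bays
print(f"mean ED wait: {erlang_c_wait(6.0, 3.0, 3) * 60:.1f} minutes")
```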
NASA Astrophysics Data System (ADS)
Rachmawati, Vimala; Khusnul Arif, Didik; Adzkiya, Dieky
2018-03-01
The systems contained in the universe often have a large order; thus, the mathematical model has many state variables, which increases the computation time. In addition, generally not all variables are known, so estimation is needed to measure quantities of the system that cannot be measured directly. In this paper, we discuss model reduction and estimation of state variables in a river system to measure the water level. Model reduction approximates a system with a lower-order model that introduces no significant errors and has dynamic behaviour similar to the original system. The Singular Perturbation Approximation method is one such model reduction method, in which all state variables of the system at equilibrium are partitioned into fast and slow modes. The Kalman filter algorithm is then used to estimate the state variables of stochastic dynamic systems, with estimates computed by predicting the state variables from the system dynamics and the measurement data. Kalman filters are used to estimate state variables in both the original system and the reduced system. We then compare the state estimation results and the computation time between the original and reduced systems.
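A minimal sketch of the Kalman filter predict/update recursion referred to above is given below. The system matrices describe a generic two-state linear system, not the actual river model or its singular-perturbation reduction, which are not given in the abstract.

```python
# Generic Kalman filter sketch on a synthetic two-state linear system.
import numpy as np

def kalman_filter(A, C, Q, R, x0, P0, measurements):
    """Standard predict/update recursion for x[k+1] = A x[k] + w, y[k] = C x[k] + v."""
    x, P = x0, P0
    estimates = []
    for y in measurements:
        # Predict
        x = A @ x
        P = A @ P @ A.T + Q
        # Update
        S = C @ P @ C.T + R
        K = P @ C.T @ np.linalg.inv(S)
        x = x + K @ (y - C @ x)
        P = (np.eye(len(x)) - K @ C) @ P
        estimates.append(x.copy())
    return np.array(estimates)

# Illustrative 2-state system observed through its first state only
A = np.array([[0.95, 0.10], [0.00, 0.90]])
C = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2)
R = np.array([[0.1]])
rng = np.random.default_rng(1)
truth = np.zeros((50, 2))
for k in range(1, 50):
    truth[k] = A @ truth[k - 1] + rng.multivariate_normal([0, 0], Q)
obs = truth[:, :1] + rng.normal(scale=np.sqrt(R[0, 0]), size=(50, 1))
est = kalman_filter(A, C, Q, R, np.zeros(2), np.eye(2), obs)
print("final state estimate:", est[-1])
```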
Novel applications of the temporal kernel method: Historical and future radiative forcing
NASA Astrophysics Data System (ADS)
Portmann, R. W.; Larson, E.; Solomon, S.; Murphy, D. M.
2017-12-01
We present a new estimate of the historical radiative forcing derived from the observed global mean surface temperature and a model-derived kernel function. Current estimates of historical radiative forcing are usually derived from climate models. Despite large variability among these models, the multi-model mean tends to do a reasonable job of representing the Earth system and climate. One method of diagnosing the transient radiative forcing in these models requires model output of the top-of-atmosphere (TOA) radiative imbalance and the global mean temperature anomaly. It is difficult to apply this method to historical observations due to the lack of TOA radiative measurements before CERES. We apply the temporal kernel method (TKM) of calculating radiative forcing to the historical global mean temperature anomaly. This novel approach is compared against current regression-based methods using model outputs and is shown to produce consistent forcing estimates, giving confidence in the forcing derived from the historical temperature record. The derived TKM radiative forcing provides an estimate of the forcing time series that the average climate model needs to produce the observed temperature record. This forcing time series is found to be in good overall agreement with previous estimates but includes significant differences that will be discussed. The historical anthropogenic aerosol forcing is estimated as a residual from the TKM and found to be consistent with earlier moderate forcing estimates. In addition, the method is applied to future temperature projections to estimate the radiative forcing required to achieve given temperature goals, such as those set in the Paris Agreement.
NASA Technical Reports Server (NTRS)
Mcbeath, Giorgio; Ghorashi, Bahman; Chun, Kue
1993-01-01
A thermal NO(x) prediction model is developed to interface with a CFD, k-epsilon based code. A converged solution from the CFD code is the input to the postprocessing model for prediction of thermal NO(x). The model uses a decoupled analysis to estimate the equilibrium level of (NO(x))e which is the constant rate limit. This value is used to estimate the flame (NO(x)) and in turn predict the rate of formation at each node using a two-step Zeldovich mechanism. The rate is fixed on the NO(x) production rate plot by estimating the time to reach equilibrium by a differential analysis based on the reaction: O + N2 = NO + N. The rate is integrated in the nonequilibrium time space based on the residence time at each node in the computational domain. The sum of all nodal predictions yields the total NO(x) level.
Getting It Right Matters: Climate Spectra and Their Estimation
NASA Astrophysics Data System (ADS)
Privalsky, Victor; Yushkov, Vladislav
2018-06-01
In many recent publications, climate spectra estimated with different methods from observed, GCM-simulated, and reconstructed time series contain many peaks at time scales from a few years to many decades and even centuries. However, respective spectral estimates obtained with the autoregressive (AR) and multitapering (MTM) methods showed that spectra of climate time series are smooth and contain no evidence of periodic or quasi-periodic behavior. Four order selection criteria for the autoregressive models were studied and proven sufficiently reliable for 25 time series of climate observations at individual locations or spatially averaged at local-to-global scales. As time series of climate observations are short, an alternative reliable nonparametric approach is Thomson's MTM. These results agree with both the earlier climate spectral analyses and the Markovian stochastic model of climate.
Estimates of Zenith Total Delay trends from GPS reprocessing with autoregressive process
NASA Astrophysics Data System (ADS)
Klos, Anna; Hunegnaw, Addisu; Teferle, Felix Norman; Ebuy Abraha, Kibrom; Ahmed, Furqan; Bogusz, Janusz
2017-04-01
Nowadays, near real-time Zenith Total Delay (ZTD) estimates from Global Positioning System (GPS) observations are routinely assimilated into numerical weather prediction (NWP) models to improve the reliability of forecasts. On the other hand, ZTD time series derived from homogeneously reprocessed GPS observations over long periods have the potential to improve our understanding of climate change on various temporal and spatial scales. With such time series only recently reaching somewhat adequate time spans, the application of GPS-derived ZTD estimates to climate monitoring is still to be developed further. In this research, we examine the character of noise in ZTD time series for 1995-2015 in order to estimate more realistic magnitudes of trends and their uncertainties than would be obtained if the stochastic properties were not taken into account. Furthermore, the hourly sampled, homogeneously reprocessed and carefully homogenized ZTD time series from over 700 globally distributed stations were classified into five major climate zones. We found that the amplitudes of annual signals reach values of 10-150 mm, with minimum values for the polar and Alpine zones. The amplitudes of daily signals were estimated to be 0-12 mm, with maximum values found for the dry zone. We examined seven different noise models for the residual ZTD time series after modelling all known periodicities. This identified a combination of white noise plus a fourth-order autoregressive process as optimal for matching the changes in power of the ZTD data. When the stochastic properties are neglected, i.e. a pure white noise model is employed, only 11 of the 120 trends were insignificant. Using the optimal noise model, more than half of the 120 examined trends became insignificant. We show that the uncertainty of ZTD trends is underestimated by a factor of 3-12 when the stochastic properties of the ZTD time series are ignored, and we conclude that it is essential to properly model the noise characteristics of such time series when interpretations in terms of climate change are to be performed.
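The sketch below illustrates, on synthetic data, why neglecting autocorrelated noise understates trend uncertainty: the same series is fit by OLS (white-noise assumption) and by GLSAR with AR(4) errors using statsmodels. The series and parameter values are illustrative assumptions, not the ZTD data or the paper's full white-plus-autoregressive noise model.

```python
# Trend uncertainty with and without an AR(4) error model (synthetic data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 2000
t = np.arange(n) / 365.25                 # time in years
noise = np.zeros(n)
for k in range(4, n):                     # AR(4) noise
    noise[k] = (0.5 * noise[k - 1] + 0.2 * noise[k - 2]
                + 0.1 * noise[k - 3] + 0.05 * noise[k - 4] + rng.normal())
y = 0.3 * t + noise                       # trend of 0.3 units per year plus AR noise

X = sm.add_constant(t)
ols = sm.OLS(y, X).fit()                              # white-noise assumption
glsar = sm.GLSAR(y, X, rho=4).iterative_fit(maxiter=10)  # AR(4) errors

print(f"OLS trend:   {ols.params[1]:.3f} +/- {ols.bse[1]:.3f}")
print(f"GLSAR trend: {glsar.params[1]:.3f} +/- {glsar.bse[1]:.3f}")
```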
Generalized semiparametric varying-coefficient models for longitudinal data
NASA Astrophysics Data System (ADS)
Qi, Li
In this dissertation, we investigate generalized semiparametric varying-coefficient models for longitudinal data that can flexibly model three types of covariate effects: time-constant effects, time-varying effects, and covariate-varying effects, i.e., covariate effects that depend on other, possibly time-dependent, exposure variables. First, we consider the model that assumes the time-varying effects are unspecified functions of time while the covariate-varying effects are parametric functions of an exposure variable specified up to a finite number of unknown parameters. The estimation procedures are developed using multivariate local linear smoothing and generalized weighted least squares estimation techniques. The asymptotic properties of the proposed estimators are established. Simulation studies show that the proposed methods have satisfactory finite-sample performance. Data from the ACTG 244 clinical trial of HIV-infected patients are analysed to examine the effects of switching antiretroviral treatment before and after HIV develops the 215 mutation. Our analysis shows a benefit of treatment switching before development of the 215 mutation. The proposed methods are also applied to the STEP study with MITT cases, showing that they have broad applications in medical research.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zeng, L., E-mail: zeng@fusion.gat.com; Doyle, E. J.; Rhodes, T. L.
2016-11-15
A new model-based technique for fast estimation of the pedestal electron density gradient has been developed. The technique uses ordinary mode polarization profile reflectometer time delay data and does not require direct profile inversion. Because of its simple data processing, the technique can be readily implemented via a Field-Programmable Gate Array, so as to provide a real-time density gradient estimate, suitable for use in plasma control systems such as envisioned for ITER, and possibly for DIII-D and Experimental Advanced Superconducting Tokamak. The method is based on a simple edge plasma model with a linear pedestal density gradient and low scrape-off-layer density. By measuring reflectometer time delays for three adjacent frequencies, the pedestal density gradient can be estimated analytically via the new approach. Using existing DIII-D profile reflectometer data, the estimated density gradients obtained from the new technique are found to be in good agreement with the actual density gradients for a number of dynamic DIII-D plasma conditions.
Avian seasonal productivity is often modeled as a time-limited stochastic process. Many mathematical formulations have been proposed, including individual based models, continuous-time differential equations, and discrete Markov models. All such models typically include paramete...
van Tuinen, Marcel; Torres, Christopher R.
2015-01-01
Uncertainty in divergence time estimation is frequently studied from many angles but rarely from the perspective of phylogenetic node age. If appropriate molecular models and fossil priors are used, a multi-locus, partitioned analysis is expected to equally minimize error in accuracy and precision across all nodes of a given phylogeny. In contrast, if available models fail to completely account for rate heterogeneity, substitution saturation and incompleteness of the fossil record, uncertainty in divergence time estimation may increase with node age. While many studies have stressed this concern with regard to deep nodes in the Tree of Life, the inference that molecular divergence time estimation of shallow nodes is less sensitive to erroneous model choice has not been tested explicitly in a Bayesian framework. Because available divergence time estimation methods permit fossil priors across any phylogenetic node, and because species-level genomic data can now be collected efficiently and cheaply, insight is needed into the performance of divergence time estimation for shallow (<10 MY) nodes. Here, we performed multiple sensitivity analyses in a multi-locus data set of aquatic birds with six fossil constraints. Comparison across divergence time analyses that varied taxon and locus sampling, the number and position of fossil constraints, and the shape of the prior distribution yielded several insights. Deviation from node ages obtained from a reference analysis was generally highest for the shallowest nodes but was determined more by the temporal placement than by the number of fossil constraints. Calibration with only the shallowest nodes significantly underestimated the aquatic bird fossil record, indicating the presence of saturation. Although joint calibration with all six priors yielded ages most consistent with the fossil record, ages of shallow nodes were overestimated. This bias was found in both mtDNA and nDNA regions. Thus, divergence time estimation of shallow nodes may suffer from bias and low precision, even when appropriate fossil priors and the best available substitution models are chosen. Much care must be taken to address the possible ramifications of substitution saturation across the entire Tree of Life. PMID:26106406
Wind and wave extremes over the world oceans from very large ensembles
NASA Astrophysics Data System (ADS)
Breivik, Øyvind; Aarnes, Ole Johan; Abdalla, Saleh; Bidlot, Jean-Raymond; Janssen, Peter A. E. M.
2014-07-01
Global return values of marine wind speed and significant wave height are estimated from very large aggregates of archived ensemble forecasts at +240 h lead time. Long lead time ensures that the forecasts represent independent draws from the model climate. Compared with ERA-Interim, a reanalysis, the ensemble yields higher return estimates for both wind speed and significant wave height. Confidence intervals are much tighter due to the large size of the data set. The period (9 years) is short enough to be considered stationary even with climate change. Furthermore, the ensemble is large enough for nonparametric 100 year return estimates to be made from order statistics. These direct return estimates compare well with extreme value estimates outside areas with tropical cyclones. Like any method employing modeled fields, it is sensitive to tail biases in the numerical model, but we find that the biases are moderate outside areas with tropical cyclones.
Genetic Parameter Estimates for Metabolizing Two Common Pharmaceuticals in Swine.
Howard, Jeremy T; Ashwell, Melissa S; Baynes, Ronald E; Brooks, James D; Yeatts, James L; Maltecca, Christian
2018-01-01
In livestock, the regulation of drugs used to treat livestock has received increased attention, and it is currently unknown how much of the phenotypic variation in drug metabolism is due to the genetics of an animal. Therefore, the objective of the study was to determine the amount of phenotypic variation in fenbendazole and flunixin meglumine drug metabolism due to genetics. The population consisted of crossbred female and castrated male nursery pigs (n = 198) that were sired by boars representing four breeds. The animals were spread across nine batches. Drugs were administered intravenously and blood was collected a minimum of 10 times over a 48 h period. Genetic parameters for the parent drug and metabolite concentration within each drug were estimated based on pharmacokinetic (PK) parameters or on concentrations across time using a random regression model. The PK parameters were estimated using a non-compartmental analysis. The PK model included fixed effects of sex and breed of sire along with random sire and batch effects. The random regression model utilized Legendre polynomials and included a fixed population concentration curve, sex, and breed of sire effects, along with a random sire deviation from the population curve and a batch effect. The sire effect included the intercept for all models except for the fenbendazole metabolite (i.e., intercept and slope). The mean heritability across PK parameters for the fenbendazole and flunixin meglumine parent drug (metabolite) was 0.15 (0.18) and 0.31 (0.40), respectively. For the parent drug (metabolite), the mean heritability across time was 0.27 (0.60) and 0.14 (0.44) for fenbendazole and flunixin meglumine, respectively. The errors surrounding the heritability estimates for the random regression model were smaller compared to estimates obtained from PK parameters. Across both the PK-parameter and concentration-across-time models, a moderate heritability was estimated, and the model that utilized the plasma drug concentration across time produced estimates with smaller standard errors than models that utilized PK parameters. Overall, the study found that a low to moderate proportion of the phenotypic variation in metabolizing fenbendazole and flunixin meglumine was explained by genetics.
NASA Astrophysics Data System (ADS)
Itter, M.; Finley, A. O.; Hooten, M.; Higuera, P. E.; Marlon, J. R.; McLachlan, J. S.; Kelly, R.
2016-12-01
Sediment charcoal records are used in paleoecological analyses to identify individual local fire events and to estimate fire frequency and regional biomass burned at centennial to millennial time scales. Methods to identify local fire events from sediment charcoal records have been well developed over the past 30 years; however, an integrated statistical framework for fire identification is still lacking. We build upon existing paleoecological methods to develop a hierarchical Bayesian point process model for local fire identification and estimation of fire return intervals. The model is unique in that it combines sediment charcoal records from multiple lakes across a region in a spatially explicit fashion, leading to estimation of a joint, regional fire return interval in addition to lake-specific local fire frequencies. Further, the model estimates a joint regional charcoal deposition rate free from the effects of local fires that can be used as a measure of regional biomass burned over time. Finally, the hierarchical Bayesian approach allows for tractable error propagation such that estimates of fire return intervals reflect the full range of uncertainty in sediment charcoal records. Specific sources of uncertainty addressed include sediment age models, the separation of local versus regional charcoal sources, and the generation of a composite charcoal record. The model is applied to sediment charcoal records from a dense network of lakes in the Yukon Flats region of Alaska. The multivariate joint modeling approach results in improved estimates of regional charcoal deposition, with reduced uncertainty in the identification of individual fire events and local fire return intervals compared to individual-lake approaches. Modeled individual-lake fire return intervals range from 100 to 500 years, with a regional interval of roughly 200 years. Regional charcoal deposition to the network of lakes is correlated up to 50 kilometers. Finally, the joint regional charcoal deposition rate exhibits changes over time coincident with major climatic and vegetation shifts over the past 10,000 years. Ongoing work will use the regional charcoal deposition rate to estimate changes in biomass burned as a function of climate variability and regional vegetation pattern.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blais, AR; Dekaban, M; Lee, T-Y
2014-08-15
Quantitative analysis of dynamic positron emission tomography (PET) data usually involves minimizing a cost function with nonlinear regression, wherein the choice of starting parameter values and the presence of local minima affect the bias and variability of the estimated kinetic parameters. These nonlinear methods can also require lengthy computation time, making them unsuitable for use in clinical settings. Kinetic modeling of PET aims to estimate the rate parameter k3, which is the binding affinity of the tracer to a biological process of interest and is highly susceptible to noise inherent in PET image acquisition. We have developed linearized kinetic models for kinetic analysis of dynamic contrast enhanced computed tomography (DCE-CT)/PET imaging, including a 2-compartment model for DCE-CT and a 3-compartment model for PET. Use of kinetic parameters estimated from DCE-CT can stabilize the kinetic analysis of dynamic PET data, allowing for more robust estimation of k3. Furthermore, these linearized models are solved with a non-negative least squares algorithm and together they provide other advantages including: 1) only one possible solution and they do not require a choice of starting parameter values, 2) parameter estimates are comparable in accuracy to those from nonlinear models, 3) significantly reduced computational time. Our simulated data show that when blood volume and permeability are estimated with DCE-CT, the bias of k3 estimation with our linearized model is 1.97 ± 38.5% for 1,000 runs with a signal-to-noise ratio of 10. In summary, we have developed a computationally efficient technique for accurate estimation of k3 from noisy dynamic PET data.
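A minimal sketch of solving a linearized kinetic model by non-negative least squares is shown below. The basis functions and plasma input are generic illustrations of the linear re-parameterisation idea, not the authors' specific DCE-CT/PET formulation, which is not given in the abstract.

```python
# Non-negative least squares fit of a linearized tracer-kinetic model (illustrative).
import numpy as np
from scipy.optimize import nnls

t = np.linspace(0, 60, 121)                       # minutes
cp = np.exp(-t / 10.0) * t                        # hypothetical plasma input function

# Basis functions: the input itself plus its convolution with a grid of exponentials,
# a common linear re-parameterisation of compartment models
thetas = np.array([0.01, 0.05, 0.1, 0.3])
dt = t[1] - t[0]
basis = [cp]
for th in thetas:
    basis.append(np.convolve(cp, np.exp(-th * t))[: len(t)] * dt)
A = np.stack(basis, axis=1)

# Simulated noisy tissue curve built from two of the basis functions
rng = np.random.default_rng(3)
true_coeffs = np.array([0.05, 0.0, 0.4, 0.0, 0.2])
tissue = A @ true_coeffs + rng.normal(scale=0.05, size=len(t))

coeffs, residual = nnls(A, tissue)                # single global solution, no start values
print("estimated non-negative coefficients:", np.round(coeffs, 3))
```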
Penn, Richard; Werner, Michael; Thomas, Justin
2015-01-01
Background: Estimation of stochastic process models from data is a common application of time series analysis methods. Such system identification processes are often cast as hypothesis testing exercises whose intent is to estimate model parameters and test them for statistical significance. Ordinary least squares (OLS) regression and the Levenberg-Marquardt algorithm (LMA) have proven invaluable computational tools for models described by non-homogeneous, linear, stationary, ordinary differential equations. Methods: In this paper we extend stochastic model identification to linear, stationary, partial differential equations in two independent variables (2D) and show that OLS and LMA apply equally well to these systems. The method employs an original nonparametric statistic as a test for the significance of estimated parameters. Results: We show gray scale and color images are special cases of 2D systems satisfying a particular autoregressive partial difference equation which estimates an analogous partial differential equation. Several applications to medical image modeling and classification illustrate the method by correctly classifying demented and normal OLS models of axial magnetic resonance brain scans according to subject Mini Mental State Exam (MMSE) scores. Comparison with 13 image classifiers from the literature indicates our classifier is at least 14 times faster than any of them and has a classification accuracy better than all but one. Conclusions: Our modeling method applies to any linear, stationary, partial differential equation and the method is readily extended to 3D whole-organ systems. Further, in addition to being a robust image classifier, estimated image models offer insights into which parameters carry the most diagnostic image information and thereby suggest finer divisions could be made within a class. Image models can be estimated in milliseconds, which translates to whole-organ models in seconds; such runtimes could make real-time medicine and surgery modeling possible. PMID:26029638
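The sketch below estimates a first-order autoregressive partial difference equation for an image by OLS, illustrating the 2D modelling idea above. The image is synthetic; the MRI data and the paper's nonparametric significance test are not reproduced.

```python
# OLS fit of a 2D autoregressive (partial difference equation) image model (illustrative).
import numpy as np

rng = np.random.default_rng(4)
img = rng.normal(size=(64, 64)).cumsum(axis=0).cumsum(axis=1)  # smooth synthetic field

# Model: I[i, j] ~ b1*I[i-1, j] + b2*I[i, j-1] + b3*I[i-1, j-1] + c
y = img[1:, 1:].ravel()
X = np.column_stack([
    img[:-1, 1:].ravel(),     # up neighbour
    img[1:, :-1].ravel(),     # left neighbour
    img[:-1, :-1].ravel(),    # diagonal neighbour
    np.ones(y.size),          # intercept
])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("estimated 2D AR coefficients:", np.round(beta, 3))
```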
Conrads, Paul; Roehl, Edwin A.
2007-01-01
The Everglades Depth Estimation Network (EDEN) is an integrated network of real-time water-level gaging stations, ground-elevation models, and water-surface models designed to provide scientists, engineers, and water-resource managers with current (2000-present) water-depth information for the entire freshwater portion of the greater Everglades. The U.S. Geological Survey Greater Everglades Priority Ecosystem Science provides support for EDEN and the goal of providing quality assured monitoring data for the U.S. Army Corps of Engineers Comprehensive Everglades Restoration Plan. To increase the accuracy of the water-surface models, 25 real-time water-level gaging stations were added to the network of 253 established water-level gaging stations. To incorporate the data from the newly added stations to the 7-year EDEN database in the greater Everglades, the short-term water-level records (generally less than 1 year) needed to be simulated back in time (hindcasted) to be concurrent with data from the established gaging stations in the database. A three-step modeling approach using artificial neural network models was used to estimate the water levels at the new stations. The artificial neural network models used static variables that represent the gaging station location and percent vegetation in addition to dynamic variables that represent water-level data from the established EDEN gaging stations. The final step of the modeling approach was to simulate the computed error of the initial estimate to increase the accuracy of the final water-level estimate. The three-step modeling approach for estimating water levels at the new EDEN gaging stations produced satisfactory results. The coefficients of determination (R2) for 21 of the 25 estimates were greater than 0.95, and all of the estimates (25 of 25) were greater than 0.82. The model estimates showed good agreement with the measured data. For some new EDEN stations with limited measured data, the record extension (hindcasts) included periods beyond the range of the data used to train the artificial neural network models. The comparison of the hindcasts with long-term water-level data proximal to the new EDEN gaging stations indicated that the water-level estimates were reasonable. The percent model error (root mean square error divided by the range of the measured data) was less than 6 percent, and for the majority of stations (20 of 25), the percent model error was less than 1 percent.
Dajani, Hilmi R; Hosokawa, Kazuya; Ando, Shin-Ichi
2016-11-01
Lung-to-finger circulation time of oxygenated blood during nocturnal periodic breathing in heart failure patients measured using polysomnography correlates negatively with cardiac function but possesses limited accuracy for cardiac output (CO) estimation. CO was recalculated from lung-to-finger circulation time using a multivariable linear model with information on age and average overnight heart rate in 25 patients who underwent evaluation of heart failure. The multivariable model decreased the percentage error to 22.3% relative to invasive CO measured during cardiac catheterization. This improved automated noninvasive CO estimation using multiple variables meets a recently proposed performance criterion for clinical acceptability of noninvasive CO estimation, and compares very favorably with other available methods.
Deviney, Frank A.; Rice, Karen; Brown, Donald E.
2012-01-01
Natural resource managers require information concerning the frequency, duration, and long-term probability of occurrence of water-quality indicator (WQI) violations of defined thresholds. The timing of these threshold crossings often is hidden from the observer, who is restricted to relatively infrequent observations. Here, a model for the hidden process is linked with a model for the observations, and the parameters describing duration, return period, and long-term probability of occurrence are estimated using Bayesian methods. A simulation experiment is performed to evaluate the approach under scenarios based on the equivalent of a total monitoring period of 5-30 years and an observation frequency of 1-50 observations per year. Given constant threshold crossing rate, accuracy and precision of parameter estimates increased with longer total monitoring period and more-frequent observations. Given fixed monitoring period and observation frequency, accuracy and precision of parameter estimates increased with longer times between threshold crossings. For most cases where the long-term probability of being in violation is greater than 0.10, it was determined that at least 600 observations are needed to achieve precise estimates. An application of the approach is presented using 22 years of quasi-weekly observations of acid-neutralizing capacity from Deep Run, a stream in Shenandoah National Park, Virginia. The time series also was sub-sampled to simulate monthly and semi-monthly sampling protocols. Estimates of the long-term probability of violation were unbiased despite sampling frequency; however, the expected duration and return period were over-estimated using the sub-sampled time series with respect to the full quasi-weekly time series.
Optimal firing rate estimation
NASA Technical Reports Server (NTRS)
Paulin, M. G.; Hoffman, L. F.
2001-01-01
We define a measure for evaluating the quality of a predictive model of the behavior of a spiking neuron. This measure, information gain per spike (Is), indicates how much more information is provided by the model than if the prediction were made by specifying the neuron's average firing rate over the same time period. We apply a maximum Is criterion to optimize the performance of Gaussian smoothing filters for estimating neural firing rates. With data from bullfrog vestibular semicircular canal neurons and data from simulated integrate-and-fire neurons, the optimal bandwidth for firing rate estimation is typically similar to the average firing rate. Precise timing and average rate models are limiting cases that perform poorly. We estimate that bullfrog semicircular canal sensory neurons transmit in the order of 1 bit of stimulus-related information per spike.
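A minimal sketch of Gaussian-kernel firing-rate estimation is shown below, with the bandwidth simply set near the mean inter-spike interval rather than chosen by the information-gain criterion described above. The spike times are simulated, not vestibular recordings.

```python
# Gaussian-kernel firing-rate estimate from simulated spike times (illustrative).
import numpy as np

rng = np.random.default_rng(5)
duration = 10.0                                   # seconds
rate = 20.0                                       # average firing rate (Hz)
spike_times = np.sort(rng.uniform(0, duration, size=rng.poisson(rate * duration)))

grid = np.linspace(0, duration, 1000)
sigma = 1.0 / rate                                # bandwidth near the mean inter-spike interval
# Sum of normalized Gaussian bumps centred on each spike gives an instantaneous rate estimate
estimate = np.sum(
    np.exp(-0.5 * ((grid[:, None] - spike_times[None, :]) / sigma) ** 2), axis=1
) / (sigma * np.sqrt(2 * np.pi))

print(f"mean estimated rate: {estimate.mean():.1f} Hz (true {rate} Hz)")
```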
Nonlinear Estimation of Discrete-Time Signals Under Random Observation Delay
DOE Office of Scientific and Technical Information (OSTI.GOV)
Caballero-Aguila, R.; Jimenez-Lopez, J. D.; Hermoso-Carazo, A.
2008-11-06
This paper presents an approximation to the nonlinear least-squares estimation problem of discrete-time stochastic signals using nonlinear observations with additive white noise which can be randomly delayed by one sampling time. The observation delay is modelled by a sequence of independent Bernoulli random variables whose values, zero or one, indicate that the real observation arrives on time or it is delayed and, hence, the available measurement to estimate the signal is not up-to-date. Assuming that the state-space model generating the signal is unknown and only the covariance functions of the processes involved in the observation equation are ready for use, a filtering algorithm based on linear approximations of the real observations is proposed.
Leander, Jacob; Lundh, Torbjörn; Jirstrand, Mats
2014-05-01
In this paper we consider the problem of estimating parameters in ordinary differential equations given discrete-time experimental data. The impact of going from an ordinary to a stochastic differential equation setting is investigated as a tool to overcome the problem of local minima in the objective function. Using two different models, it is demonstrated that by allowing noise in the underlying model itself, the objective functions to be minimized in the parameter estimation procedures are regularized in the sense that the number of local minima is reduced and better convergence is achieved. The advantage of using stochastic differential equations is that the actual states in the model are predicted from data, which allows the prediction to stay close to the data even when the parameters in the model are incorrect. The extended Kalman filter is used as a state estimator, and sensitivity equations are provided to give an accurate calculation of the gradient of the objective function. The method is illustrated using in silico data from the FitzHugh-Nagumo model for excitable media and the Lotka-Volterra predator-prey system. The proposed method performs well on the models considered and is able to regularize the objective function in both models. This leads to parameter estimation problems with fewer local minima, which can be solved by efficient gradient-based methods.
Conditional Poisson models: a flexible alternative to conditional logistic case cross-over analysis.
Armstrong, Ben G; Gasparrini, Antonio; Tobias, Aurelio
2014-11-24
The time-stratified case cross-over approach is a popular alternative to conventional time series regression for analysing associations between time series of environmental exposures (air pollution, weather) and counts of health outcomes. These are almost always analyzed using conditional logistic regression on data expanded to case-control (case crossover) format, but this has some limitations; in particular, adjusting for overdispersion and auto-correlation in the counts is not possible. It has been established that a Poisson model for counts with stratum indicators gives identical estimates to those from conditional logistic regression and does not have these limitations, but it is little used, probably because of the overheads in estimating many stratum parameters. The conditional Poisson model avoids estimating stratum parameters by conditioning on the total event count in each stratum, thus simplifying the computing and increasing the number of strata for which fitting is feasible compared with the standard unconditional Poisson model. Unlike the conditional logistic model, the conditional Poisson model does not require expanding the data, and can adjust for overdispersion and auto-correlation. It is available in Stata, R, and other packages. By applying the models to real data and using simulations, we demonstrate that conditional Poisson models were simpler to code and shorter to run than conditional logistic analyses and can be fitted to larger data sets than is possible with standard Poisson models. Allowing for overdispersion or autocorrelation was possible with the conditional Poisson model, but when not required this model gave identical estimates to those from conditional logistic regression. Conditional Poisson regression models provide an alternative to case crossover analysis of stratified time series data with some advantages. The conditional Poisson model can also be used in other contexts in which primary control for confounding is by fine stratification.
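The sketch below shows the fixed-effects Poisson formulation mentioned in the abstract (an ordinary Poisson GLM with stratum indicators), which the authors note gives estimates identical to conditional logistic case-crossover analysis. The data are simulated, and the conditional Poisson fitting itself (as in Stata or R's gnm) is not reproduced here.

```python
# Poisson GLM with stratum indicators on simulated stratified time series data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
n_strata, days = 60, 4                             # year-month-weekday style strata
df = pd.DataFrame({
    "stratum": np.repeat(np.arange(n_strata), days),
    "pollution": rng.normal(size=n_strata * days),
})
stratum_effect = rng.normal(scale=0.3, size=n_strata)[df["stratum"]]
df["count"] = rng.poisson(np.exp(2.0 + 0.05 * df["pollution"] + stratum_effect))

# Unconditional Poisson with stratum indicators (the fixed-effects formulation)
fit = smf.glm("count ~ pollution + C(stratum)", data=df,
              family=sm.families.Poisson()).fit()
print(f"pollution rate ratio: {np.exp(fit.params['pollution']):.3f}")
```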
Baudrot, Virgile; Preux, Sara; Ducrot, Virginie; Pave, Alain; Charles, Sandrine
2018-02-06
Toxicokinetic-toxicodynamic (TKTD) models, such as the General Unified Threshold model of Survival (GUTS), provide a consistent process-based framework, compared to classical dose-response models, for analyzing data sets that are both time- and concentration-dependent. However, the extent to which GUTS models (Stochastic Death (SD) and Individual Tolerance (IT)) lead to a better fit than a classical dose-response model at a given target time (TT) has been poorly investigated. Our paper highlights that GUTS estimates are generally more conservative and have reduced uncertainty, through smaller credible intervals for the studied data sets, than classical TT approaches. Also, GUTS models enable estimating any x% lethal concentration at any time (LCx,t), and provide biological information on the internal processes occurring during the experiments. While both GUTS-SD and GUTS-IT models outcompete classical TT approaches, choosing one preferentially to the other is still challenging. Indeed, the estimates of survival rate over time and LCx,t are very close between both models, but our study also points out that the joint posterior distributions of SD model parameters are sometimes bimodal, while two parameters of the IT model seem strongly correlated. Therefore, the selection between these two models has to be supported by the experimental design and the biological objectives, and this paper provides some insights to drive this choice.
Omi, Takahiro; Hirata, Yoshito; Aihara, Kazuyuki
2017-07-01
A Hawkes process model with a time-varying background rate is developed for analyzing the high-frequency financial data. In our model, the logarithm of the background rate is modeled by a linear model with a relatively large number of variable-width basis functions, and the parameters are estimated by a Bayesian method. Our model can capture not only the slow time variation, such as in the intraday seasonality, but also the rapid one, which follows a macroeconomic news announcement. By analyzing the tick data of the Nikkei 225 mini, we find that (i) our model is better fitted to the data than the Hawkes models with a constant background rate or a slowly varying background rate, which have been commonly used in the field of quantitative finance; (ii) the improvement in the goodness-of-fit to the data by our model is significant especially for sessions where considerable fluctuation of the background rate is present; and (iii) our model is statistically consistent with the data. The branching ratio, which quantifies the level of the endogeneity of markets, estimated by our model is 0.41, suggesting the relative importance of exogenous factors in the market dynamics. We also demonstrate that it is critically important to appropriately model the time-dependent background rate for the branching ratio estimation.
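A minimal sketch of simulating a Hawkes process with an exponential excitation kernel and a time-varying background rate by Ogata thinning is given below. The sinusoidal background and parameter values are illustrative assumptions; the paper's Bayesian basis-function estimation on Nikkei 225 mini tick data is not reproduced.

```python
# Ogata thinning simulation of a Hawkes process with a time-varying background rate.
import numpy as np

rng = np.random.default_rng(7)
alpha, beta = 0.4, 2.0          # excitation jump and decay; branching ratio = alpha/beta = 0.2

def background(t):
    return 1.0 + 0.5 * np.sin(2 * np.pi * t / 50.0)   # slowly varying exogenous rate

def excitation(t, events):
    past = np.asarray(events)
    if past.size == 0:
        return 0.0
    return alpha * np.exp(-beta * (t - past[past < t])).sum()

events, t, horizon = [], 0.0, 200.0
bg_max = 1.5                                          # upper bound of the background rate
while t < horizon:
    lam_bar = bg_max + excitation(t, events)          # dominates the intensity after time t
    t += rng.exponential(1.0 / lam_bar)
    lam_t = background(t) + excitation(t, events)
    if t < horizon and rng.uniform() < lam_t / lam_bar:
        events.append(t)

print(f"simulated {len(events)} events; theoretical branching ratio {alpha / beta:.2f}")
```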
Lee, Michael T.; Asquith, William H.; Oden, Timothy D.
2012-01-01
In December 2005, the U.S. Geological Survey (USGS), in cooperation with the City of Houston, Texas, began collecting discrete water-quality samples for nutrients, total organic carbon, bacteria (Escherichia coli and total coliform), atrazine, and suspended sediment at two USGS streamflow-gaging stations that represent watersheds contributing to Lake Houston (08068500 Spring Creek near Spring, Tex., and 08070200 East Fork San Jacinto River near New Caney, Tex.). Data from the discrete water-quality samples collected during 2005–9, in conjunction with continuously monitored real-time data that included streamflow and other physical water-quality properties (specific conductance, pH, water temperature, turbidity, and dissolved oxygen), were used to develop regression models for the estimation of concentrations of water-quality constituents of substantial source watersheds to Lake Houston. The potential explanatory variables included discharge (streamflow), specific conductance, pH, water temperature, turbidity, dissolved oxygen, and time (to account for seasonal variations inherent in some water-quality data). The response variables (the selected constituents) at each site were nitrite plus nitrate nitrogen, total phosphorus, total organic carbon, E. coli, atrazine, and suspended sediment. The explanatory variables provide easily measured quantities to serve as potential surrogate variables to estimate concentrations of the selected constituents through statistical regression. Statistical regression also facilitates accompanying estimates of uncertainty in the form of prediction intervals. Each regression model potentially can be used to estimate concentrations of a given constituent in real time. Among other regression diagnostics, the diagnostics used as indicators of general model reliability and reported herein include the adjusted R-squared, the residual standard error, residual plots, and p-values. Adjusted R-squared values for the Spring Creek models ranged from .582–.922 (dimensionless). The residual standard errors ranged from .073–.447 (base-10 logarithm). Adjusted R-squared values for the East Fork San Jacinto River models ranged from .253–.853 (dimensionless). The residual standard errors ranged from .076–.388 (base-10 logarithm). In conjunction with estimated concentrations, constituent loads can be estimated by multiplying the estimated concentration by the corresponding streamflow and by applying the appropriate conversion factor. The regression models presented in this report are site specific, that is, they are specific to the Spring Creek and East Fork San Jacinto River streamflow-gaging stations; however, the general methods that were developed and documented could be applied to most perennial streams for the purpose of estimating real-time water quality data.
NASA Astrophysics Data System (ADS)
Montzka, Carsten; Hendricks Franssen, Harrie-Jan; Moradkhani, Hamid; Pütz, Thomas; Han, Xujun; Vereecken, Harry
2013-04-01
An adequate description of soil hydraulic properties is essential for a good performance of hydrological forecasts. So far, several studies have shown that data assimilation can reduce the parameter uncertainty by considering soil moisture observations. However, these observations and also the model forcings were recorded with a specific measurement error. It seems a logical step to base state updating and parameter estimation on observations made at multiple time steps, in order to reduce the influence of outliers at single time steps given measurement errors and unknown model forcings. Such outliers could result in erroneous state estimation as well as inadequate parameters. This has been one of the reasons to use a smoothing technique as implemented for Bayesian data assimilation methods such as the Ensemble Kalman Filter (i.e. the Ensemble Kalman Smoother). Recently, an ensemble-based smoother has been developed for state updating with a SIR particle filter. However, this method has not been used for dual state-parameter estimation. In this contribution we present a Particle Smoother with sequential smoothing of particle weights for state and parameter resampling within a time window, as opposed to the single time step data assimilation used in filtering techniques. This can be seen as an intermediate variant between a parameter estimation technique using global optimization with estimation of single parameter sets valid for the whole period, and sequential Monte Carlo techniques with estimation of parameter sets evolving from one time step to another. The aims are i) to improve the forecast of evaporation and groundwater recharge by estimating hydraulic parameters, and ii) to reduce the impact of single erroneous model inputs/observations by a smoothing method. In order to validate the performance of the proposed method in a real world application, the experiment is conducted in a lysimeter environment.
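The sketch below shows the sequential importance resampling step that underlies such particle methods, for a scalar state and a single observation. The soil-hydraulic model, the joint state-parameter augmentation, and the smoothing over a time window are not reproduced; all values are illustrative assumptions.

```python
# Single importance-weighting and resampling step of a particle filter (illustrative).
import numpy as np

rng = np.random.default_rng(9)
n_particles, obs_sigma = 500, 0.2

particles = rng.normal(loc=0.3, scale=0.1, size=n_particles)   # prior soil-moisture draws
observation = 0.35                                              # hypothetical measurement

# Importance weights from a Gaussian observation likelihood
weights = np.exp(-0.5 * ((observation - particles) / obs_sigma) ** 2)
weights /= weights.sum()

# Systematic resampling: duplicate high-weight particles, drop low-weight ones
positions = (rng.uniform() + np.arange(n_particles)) / n_particles
idx = np.minimum(np.searchsorted(np.cumsum(weights), positions), n_particles - 1)
resampled = particles[idx]

print(f"posterior mean {resampled.mean():.3f}, prior mean {particles.mean():.3f}")
```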
Knight, Michael J.; McGarry, Bryony M.; Jokivarsi, Kimmo T.; Gröhn, Olli H.J.; Kauppinen, Risto A.
2017-01-01
Background: Objective timing of stroke in emergency departments is expected to improve patient stratification. Magnetic resonance imaging (MRI) relaxation times, T2 and T1ρ, in abnormal-diffusion-delineated ischaemic tissue were used as proxies of stroke onset time in a rat model. Methods: Both ‘non-ischaemic reference’-dependent and -independent estimators were generated. Apparent diffusion coefficient (ADC), T2 and T1ρ were sequentially quantified for up to 6 hours of stroke in rats (n = 8) at 4.7T. The ischaemic lesion was identified as a contiguous collection of voxels with low ADC. T2 and T1ρ in the ischaemic lesion and in the contralateral non-ischaemic brain tissue were determined. Differences in mean MRI relaxation times between ischaemic and non-ischaemic volumes were used to create the reference-dependent estimator. For the reference-independent procedure, only the parameters associated with log-logistic fits to the T2 and T1ρ distributions within the ADC-delineated lesions were used for onset time estimation. Results: The reference-independent estimators from T2 and T1ρ data provided stroke onset time with precisions of ±32 and ±27 minutes, respectively. The reference-dependent estimators yielded respective precisions of ±47 and ±54 minutes. Conclusions: A ‘non-ischaemic anatomical reference’-independent estimator for stroke onset time from relaxometric MRI data is shown to yield greater timing precision than previously obtained through reference-dependent procedures. PMID:28685128
The linear transformation model with frailties for the analysis of item response times.
Wang, Chun; Chang, Hua-Hua; Douglas, Jeffrey A
2013-02-01
The item response times (RTs) collected from computerized testing represent an underutilized source of information about items and examinees. In addition to knowing the examinees' responses to each item, we can investigate the amount of time examinees spend on each item. In this paper, we propose a semi-parametric model for RTs, the linear transformation model with a latent speed covariate, which combines the flexibility of non-parametric modelling and the brevity as well as interpretability of parametric modelling. In this new model, the RTs, after some non-parametric monotone transformation, become a linear model with latent speed as covariate plus an error term. The distribution of the error term implicitly defines the relationship between the RT and examinees' latent speeds; whereas the non-parametric transformation is able to describe various shapes of RT distributions. The linear transformation model represents a rich family of models that includes the Cox proportional hazards model, the Box-Cox normal model, and many other models as special cases. This new model is embedded in a hierarchical framework so that both RTs and responses are modelled simultaneously. A two-stage estimation method is proposed. In the first stage, the Markov chain Monte Carlo method is employed to estimate the parametric part of the model. In the second stage, an estimating equation method with a recursive algorithm is adopted to estimate the non-parametric transformation. Applicability of the new model is demonstrated with a simulation study and a real data application. Finally, methods to evaluate the model fit are suggested. © 2012 The British Psychological Society.
Power estimation using simulations for air pollution time-series studies
2012-01-01
Background Estimation of power to assess associations of interest can be challenging for time-series studies of the acute health effects of air pollution because there are two dimensions of sample size (time-series length and daily outcome counts), and because these studies often use generalized linear models to control for complex patterns of covariation between pollutants and time trends, meteorology and possibly other pollutants. In general, statistical software packages for power estimation rely on simplifying assumptions that may not adequately capture this complexity. Here we examine the impact of various factors affecting power using simulations, with comparison of power estimates obtained from simulations with those obtained using statistical software. Methods Power was estimated for various analyses within a time-series study of air pollution and emergency department visits using simulations for specified scenarios. Mean daily emergency department visit counts, model parameter value estimates and daily values for air pollution and meteorological variables from actual data (8/1/98 to 7/31/99 in Atlanta) were used to generate simulated daily outcome counts with specified temporal associations with air pollutants and randomly generated error based on a Poisson distribution. Power was estimated by conducting analyses of the association between simulated daily outcome counts and air pollution in 2000 data sets for each scenario. Power estimates from simulations and statistical software (G*Power and PASS) were compared. Results In the simulation results, increasing time-series length and average daily outcome counts both increased power to a similar extent. Our results also illustrate the low power that can result from using outcomes with low daily counts or short time series, and the reduction in power that can accompany use of multipollutant models. Power estimates obtained using standard statistical software were very similar to those from the simulations when properly implemented; implementation, however, was not straightforward. Conclusions These analyses demonstrate the similar impact on power of increasing time-series length versus increasing daily outcome counts, which has not previously been reported. Implementation of power software for these studies is discussed and guidance is provided. PMID:22995599
Power estimation using simulations for air pollution time-series studies.
Winquist, Andrea; Klein, Mitchel; Tolbert, Paige; Sarnat, Stefanie Ebelt
2012-09-20
Estimation of power to assess associations of interest can be challenging for time-series studies of the acute health effects of air pollution because there are two dimensions of sample size (time-series length and daily outcome counts), and because these studies often use generalized linear models to control for complex patterns of covariation between pollutants and time trends, meteorology and possibly other pollutants. In general, statistical software packages for power estimation rely on simplifying assumptions that may not adequately capture this complexity. Here we examine the impact of various factors affecting power using simulations, with comparison of power estimates obtained from simulations with those obtained using statistical software. Power was estimated for various analyses within a time-series study of air pollution and emergency department visits using simulations for specified scenarios. Mean daily emergency department visit counts, model parameter value estimates and daily values for air pollution and meteorological variables from actual data (8/1/98 to 7/31/99 in Atlanta) were used to generate simulated daily outcome counts with specified temporal associations with air pollutants and randomly generated error based on a Poisson distribution. Power was estimated by conducting analyses of the association between simulated daily outcome counts and air pollution in 2000 data sets for each scenario. Power estimates from simulations and statistical software (G*Power and PASS) were compared. In the simulation results, increasing time-series length and average daily outcome counts both increased power to a similar extent. Our results also illustrate the low power that can result from using outcomes with low daily counts or short time series, and the reduction in power that can accompany use of multipollutant models. Power estimates obtained using standard statistical software were very similar to those from the simulations when properly implemented; implementation, however, was not straightforward. These analyses demonstrate the similar impact on power of increasing time-series length versus increasing daily outcome counts, which has not previously been reported. Implementation of power software for these studies is discussed and guidance is provided.
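A minimal sketch of the simulation-based power estimation described above, under assumed values (baseline daily count, log relative risk, and a synthetic pollutant series rather than the Atlanta data), with a reduced number of replicates and a simplified covariate set:

```python
# Minimal sketch (illustrative scenario, not the Atlanta analysis): estimate power
# by repeatedly simulating daily Poisson counts with a specified log-linear
# association with a pollutant series and refitting a Poisson regression.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n_days, n_sims, alpha = 365, 500, 0.05       # time-series length, replicates, test level
mean_count, true_log_rr = 30.0, 0.002        # baseline daily count; log RR per unit pollutant

pollutant = rng.gamma(shape=4.0, scale=10.0, size=n_days)   # stand-in exposure series
doy = np.arange(n_days)
season = np.column_stack([np.sin(2 * np.pi * doy / 365), np.cos(2 * np.pi * doy / 365)])
X = sm.add_constant(np.column_stack([pollutant, season]))

significant = 0
for _ in range(n_sims):
    log_mu = np.log(mean_count) + true_log_rr * pollutant + 0.1 * season[:, 0]
    y = rng.poisson(np.exp(log_mu))
    fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
    if fit.pvalues[1] < alpha:               # column 1 is the pollutant term
        significant += 1

print("estimated power:", significant / n_sims)
```

Power is then simply the fraction of simulated datasets in which the pollutant coefficient is statistically significant, which is what the study compares against the analytic estimates from G*Power and PASS.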
Aeroelastic Modeling of X-56A Stiff-Wing Configuration Flight Test Data
NASA Technical Reports Server (NTRS)
Grauer, Jared A.; Boucher, Matthew J.
2017-01-01
Aeroelastic stability and control derivatives for the X-56A Multi-Utility Technology Testbed (MUTT), in the stiff-wing configuration, were estimated from flight test data using the output-error method. Practical aspects of the analysis are discussed. The orthogonal phase-optimized multisine inputs provided excellent data information for aeroelastic modeling. Consistent parameter estimates were determined using output error in both the frequency and time domains. The frequency domain analysis converged faster and was less sensitive to starting values for the model parameters, which was useful for determining the aeroelastic model structure and obtaining starting values for the time domain analysis. Including a modal description of the structure from a finite element model reduced the complexity of the estimation problem and improved the modeling results. Effects of reducing the model order on the short period stability and control derivatives were investigated.
A neural computational model for animal's time-to-collision estimation.
Wang, Ling; Yao, Dezhong
2013-04-17
The time-to-collision (TTC) is the time elapsed before a looming object hits the subject. An accurate estimation of TTC plays a critical role in the survival of animals in nature and acts as an important factor in artificial intelligence systems that depend on judging and avoiding potential dangers. The theoretic formula for TTC is 1/τ≈θ'/sin θ, where θ and θ' are the visual angle and its variation, respectively, and the widely used approximation computational model is θ'/θ. However, both of these measures are too complex to be implemented by a biological neuronal model. We propose a new simple computational model: 1/τ≈Mθ-P/(θ+Q)+N, where M, P, Q, and N are constants that depend on a predefined visual angle. This model, weighted summation of visual angle model (WSVAM), can achieve perfect implementation through a widely accepted biological neuronal model. WSVAM has additional merits, including a natural minimum consumption and simplicity. Thus, it yields a precise and neuronal-implemented estimation for TTC, which provides a simple and convenient implementation for artificial vision, and represents a potential visual brain mechanism.
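A minimal sketch comparing the widely used approximation τ ≈ θ/θ' against the true time-to-collision, for an assumed geometry (a disc of half-size R approaching at constant speed v); the WSVAM constants themselves are not reproduced here.

```python
# Minimal sketch (assumed geometry): a disc of half-size R approaches at constant
# speed v; compare the common approximation tau ~= theta/theta' with the true
# time-to-collision d/v.
import numpy as np

R, v = 0.5, 10.0                       # object half-size (m), closing speed (m/s)
d = np.linspace(50.0, 2.0, 200)        # distances along the approach (m)
theta = np.arctan(R / d)               # half visual angle
theta_dot = R * v / (d**2 + R**2)      # d(theta)/dt for constant closing speed

ttc_true = d / v
ttc_approx = theta / theta_dot         # small-angle approximation 1/tau ~= theta'/theta

for dist in (50.0, 20.0, 5.0):
    i = np.argmin(np.abs(d - dist))
    print(f"d={dist:5.1f} m  true TTC={ttc_true[i]:5.2f} s  "
          f"theta/theta'={ttc_approx[i]:5.2f} s")
```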
Stage-discharge relationship in tidal channels
NASA Astrophysics Data System (ADS)
Kearney, W. S.; Mariotti, G.; Deegan, L.; Fagherazzi, S.
2016-12-01
Long-term records of the flow of water through tidal channels are essential to constrain the budgets of sediments and biogeochemical compounds in salt marshes. Statistical models which relate discharge to water level allow the estimation of such records from more easily obtained records of water stage in the channel. While there is clearly structure in the stage-discharge relationship, nonlinearity and nonstationarity of the relationship complicates the construction of statistical stage-discharge models with adequate performance for discharge estimation and uncertainty quantification. Here we compare four different types of stage-discharge models, each of which is designed to capture different characteristics of the stage-discharge relationship. We estimate and validate each of these models on a two-month long time series of stage and discharge obtained with an Acoustic Doppler Current Profiler in a salt marsh channel. We find that the best performance is obtained by models which account for the nonlinear and time-varying nature of the stage-discharge relationship. Good performance can also be obtained from a simplified version of these models which approximates the fully nonlinear and time-varying models with a piecewise linear formulation.
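A minimal sketch of the simplest kind of stage-discharge regression for a tidal channel, regressing discharge on stage and its rate of change; this is a stand-in for the nonlinear and time-varying models compared in the study, and the synthetic tide and coefficients are assumptions.

```python
# Minimal sketch (synthetic data): regress discharge on stage and its time
# derivative; in a tidal channel the dh/dt term carries much of the signal.
import numpy as np

rng = np.random.default_rng(3)
dt = 600.0                                    # 10-minute sampling (s)
t = np.arange(0, 30 * 24 * 3600, dt)          # one month of record
stage = 0.8 * np.sin(2 * np.pi * t / 44700)   # ~12.4 h tide (m)
dstage = np.gradient(stage, dt)
discharge = 5.0 * stage + 4.0e4 * dstage + 0.2 * rng.standard_normal(t.size)

X = np.column_stack([np.ones_like(stage), stage, dstage])
coef, *_ = np.linalg.lstsq(X, discharge, rcond=None)
pred = X @ coef
print("fitted coefficients:", coef)
print("RMSE:", np.sqrt(np.mean((discharge - pred) ** 2)))
```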
Robust analysis of semiparametric renewal process models
Lin, Feng-Chang; Truong, Young K.; Fine, Jason P.
2013-01-01
Summary A rate model is proposed for a modulated renewal process comprising a single long sequence, where the covariate process may not capture the dependencies in the sequence as in standard intensity models. We consider partial likelihood-based inferences under a semiparametric multiplicative rate model, which has been widely studied in the context of independent and identical data. Under an intensity model, gap times in a single long sequence may be used naively in the partial likelihood with variance estimation utilizing the observed information matrix. Under a rate model, the gap times cannot be treated as independent and studying the partial likelihood is much more challenging. We employ a mixing condition in the application of limit theory for stationary sequences to obtain consistency and asymptotic normality. The estimator's variance is quite complicated owing to the unknown gap times dependence structure. We adapt block bootstrapping and cluster variance estimators to the partial likelihood. Simulation studies and an analysis of a semiparametric extension of a popular model for neural spike train data demonstrate the practical utility of the rate approach in comparison with the intensity approach. PMID:24550568
Aulenbach, Brent T.
2013-01-01
A regression-model based approach is a commonly used, efficient method for estimating streamwater constituent load when there is a relationship between streamwater constituent concentration and continuous variables such as streamwater discharge, season and time. A subsetting experiment using a 30-year dataset of daily suspended sediment observations from the Mississippi River at Thebes, Illinois, was performed to determine optimal sampling frequency, model calibration period length, and regression model methodology, as well as to determine the effect of serial correlation of model residuals on load estimate precision. Two regression-based methods were used to estimate streamwater loads: the Adjusted Maximum Likelihood Estimator (AMLE) and the composite method, a hybrid load estimation approach. While both methods accurately and precisely estimated loads at the model’s calibration period time scale, precisions were progressively worse at shorter reporting periods, from annually to monthly. Serial correlation in model residuals resulted in the observed AMLE precision being significantly worse than the model-calculated standard errors of prediction. The composite method effectively improved upon AMLE loads for shorter reporting periods, but required a sampling interval of 15 days or shorter when the serial correlations in the observed load residuals were greater than 0.15. AMLE precision was better at shorter sampling intervals and when using the shortest model calibration periods, such that the regression models better fit the temporal changes in the concentration–discharge relationship. The models with the largest errors typically had poor high-flow sampling coverage, resulting in unrepresentative models. Increasing sampling frequency and/or targeted high-flow sampling are more efficient approaches for ensuring sufficient sampling and avoiding poorly performing models than increasing calibration period length.
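A minimal sketch of the regression-based load approach, on a synthetic record: ln(concentration) is regressed on ln(discharge), seasonal terms, and a time trend, and load is then computed as concentration times discharge. The AMLE censoring treatment and the composite method's residual adjustment are not reproduced; the simple lognormal retransformation correction shown here is an assumption.

```python
# Minimal sketch (synthetic record): regression-based load estimation with
# ln(C) ~ ln(Q) + seasonal terms + time; AMLE bias correction and the composite
# method's residual adjustment are omitted.
import numpy as np

rng = np.random.default_rng(4)
n = 3 * 365
t = np.arange(n) / 365.0
lnQ = 2.0 + 0.8 * np.sin(2 * np.pi * t) + 0.3 * rng.standard_normal(n)
lnC = (0.5 + 0.6 * lnQ + 0.2 * np.sin(2 * np.pi * t) - 0.05 * t
       + 0.25 * rng.standard_normal(n))

X = np.column_stack([np.ones(n), lnQ, np.sin(2 * np.pi * t),
                     np.cos(2 * np.pi * t), t])
beta, *_ = np.linalg.lstsq(X, lnC, rcond=None)
resid_var = np.var(lnC - X @ beta)

# Predicted concentration with a simple lognormal retransformation correction,
# then daily load as concentration x discharge (in consistent units).
conc_hat = np.exp(X @ beta + 0.5 * resid_var)
load_hat = conc_hat * np.exp(lnQ)
print("estimated mean daily load:", load_hat.mean())
```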
Cao, Jiguo; Huang, Jianhua Z.; Wu, Hulin
2012-01-01
Ordinary differential equations (ODEs) are widely used in biomedical research and other scientific areas to model complex dynamic systems. It is an important statistical problem to estimate parameters in ODEs from noisy observations. In this article we propose a method for estimating the time-varying coefficients in an ODE. Our method is a variation of the nonlinear least squares where penalized splines are used to model the functional parameters and the ODE solutions are approximated also using splines. We resort to the implicit function theorem to deal with the nonlinear least squares objective function that is only defined implicitly. The proposed penalized nonlinear least squares method is applied to estimate a HIV dynamic model from a real dataset. Monte Carlo simulations show that the new method can provide much more accurate estimates of functional parameters than the existing two-step local polynomial method which relies on estimation of the derivatives of the state function. Supplemental materials for the article are available online. PMID:23155351
Leffondré, Karen; Abrahamowicz, Michal; Siemiatycki, Jack
2003-12-30
Case-control studies are typically analysed using the conventional logistic model, which does not directly account for changes in the covariate values over time. Yet, many exposures may vary over time. The most natural alternative to handle such exposures would be to use the Cox model with time-dependent covariates. However, its application to case-control data opens the question of how to manipulate the risk sets. Through a simulation study, we investigate how the accuracy of the estimates of Cox's model depends on the operational definition of risk sets and/or on some aspects of the time-varying exposure. We also assess the estimates obtained from conventional logistic regression. The lifetime experience of a hypothetical population is first generated, and a matched case-control study is then simulated from this population. We control the frequency, the age at initiation, and the total duration of exposure, as well as the strengths of their effects. All models considered include a fixed-in-time covariate and one or two time-dependent covariate(s): the indicator of current exposure and/or the exposure duration. Simulation results show that none of the models always performs well. The discrepancies between the odds ratios yielded by logistic regression and the 'true' hazard ratio depend on both the type of the covariate and the strength of its effect. In addition, it seems that logistic regression has difficulty separating the effects of inter-correlated time-dependent covariates. By contrast, each of the two versions of Cox's model systematically induces either a serious under-estimation or a moderate over-estimation bias. The magnitude of the latter bias is proportional to the true effect, suggesting that an improved manipulation of the risk sets may eliminate, or at least reduce, the bias. Copyright 2003 John Wiley & Sons, Ltd.
NASA Technical Reports Server (NTRS)
Bell, Thomas L.; Kundu, Prasun K.; Lau, William K. M. (Technical Monitor)
2002-01-01
Validation of satellite remote-sensing methods for estimating rainfall against rain-gauge data is attractive because of the direct nature of the rain-gauge measurements. Comparisons of satellite estimates to rain-gauge data are difficult, however, because of the extreme variability of rain and the fact that satellites view large areas over a short time while rain gauges monitor small areas continuously. In this paper, a statistical model of rainfall variability developed for studies of sampling error in averages of satellite data is used to examine the impact of spatial and temporal averaging of satellite and gauge data on intercomparison results. The model parameters were derived from radar observations of rain, but the model appears to capture many of the characteristics of rain-gauge data as well. The model predicts that many months of data from areas containing a few gauges are required to validate satellite estimates over the areas, and that the areas should be of the order of several hundred km in diameter. Over gauge arrays of sufficiently high density, the optimal areas and averaging times are reduced. The possibility of using time-weighted averages of gauge data is explored.
Dynamic modeling and parameter estimation of a radial and loop type distribution system network
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jun Qui; Heng Chen; Girgis, A.A.
1993-05-01
This paper presents a new identification approach to three-phase power system modeling and model reduction, treating the power system network as a multi-input, multi-output (MIMO) process. The model estimate can be obtained in discrete-time input-output form, discrete- or continuous-time state-space variable form, or frequency-domain impedance transfer function matrix form. An algorithm for determining the model structure of this MIMO process is described. The effect of measurement noise on the approach is also discussed. The approach has been applied to a sample system, and simulation results are presented.
NASA Astrophysics Data System (ADS)
Nair, S. P.; Righetti, R.
2015-05-01
Recent elastography techniques focus on imaging information on properties of materials which can be modeled as viscoelastic or poroelastic. These techniques often require the fitting of temporal strain data, acquired from either a creep or stress-relaxation experiment, to a mathematical model using least square error (LSE) parameter estimation. It is known that the strain versus time curves for tissues undergoing creep compression are non-linear. In non-linear cases, devising a measure of estimate reliability can be challenging. In this article, we have developed and tested a method to quantify the reliability of non-linear LSE parameter estimates, which we call Resimulation of Noise (RoN). RoN provides a measure of reliability by estimating the spread of parameter estimates from a single experiment realization. We have tested RoN specifically for the case of axial strain time constant parameter estimation in poroelastic media. Our tests show that the RoN-estimated precision has a linear relationship to the actual precision of the LSE estimator. We have also compared results from the RoN-derived measure of reliability against a commonly used reliability measure: the correlation coefficient (CorrCoeff). Our results show that CorrCoeff is a poor measure of estimate reliability for non-linear LSE parameter estimation. While RoN is specifically tested only for axial strain time constant imaging, a general algorithm is provided for use in all LSE parameter estimation.
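A minimal sketch of the resimulation idea described above, in the spirit of a parametric bootstrap: fit the model once, estimate the noise level from the residuals, re-simulate noisy data around the fitted curve, and refit to obtain a spread of the time constant estimate. The exponential creep model and noise model are assumptions, not the poroelastic model used in the study.

```python
# Minimal sketch (assumed exponential creep model): nonlinear LSE fit of an axial
# strain time constant, then resimulation of noise around the fitted curve and
# refitting to obtain a spread of estimates as a reliability measure.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(5)

def creep(t, eps_inf, tau):
    return eps_inf * (1.0 - np.exp(-t / tau))

t = np.linspace(0, 10, 80)                        # seconds
data = creep(t, 0.04, 2.5) + 0.002 * rng.standard_normal(t.size)

popt, _ = curve_fit(creep, t, data, p0=(0.03, 1.0))
noise_sd = np.std(data - creep(t, *popt))

taus = []
for _ in range(200):                              # re-simulate noise, refit
    resim = creep(t, *popt) + noise_sd * rng.standard_normal(t.size)
    p_r, _ = curve_fit(creep, t, resim, p0=popt)
    taus.append(p_r[1])

print(f"tau estimate {popt[1]:.3f} s, resimulation spread (SD) {np.std(taus):.3f} s")
```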
Distributed Damage Estimation for Prognostics based on Structural Model Decomposition
NASA Technical Reports Server (NTRS)
Daigle, Matthew; Bregon, Anibal; Roychoudhury, Indranil
2011-01-01
Model-based prognostics approaches capture system knowledge in the form of physics-based models of components, and how they fail. These methods consist of a damage estimation phase, in which the health state of a component is estimated, and a prediction phase, in which the health state is projected forward in time to determine end of life. However, the damage estimation problem is often multi-dimensional and computationally intensive. We propose a model decomposition approach adapted from the diagnosis community, called possible conflicts, in order to both improve the computational efficiency of damage estimation, and formulate a damage estimation approach that is inherently distributed. Local state estimates are combined into a global state estimate from which prediction is performed. Using a centrifugal pump as a case study, we perform a number of simulation-based experiments to demonstrate the approach.
Zhang, Xu; Zhang, Mei-Jie; Fine, Jason
2012-01-01
With competing risks failure time data, one often needs to assess the covariate effects on the cumulative incidence probabilities. Fine and Gray proposed a proportional hazards regression model to directly model the subdistribution of a competing risk. They developed the estimating procedure for right-censored competing risks data, based on the inverse probability of censoring weighting. Right-censored and left-truncated competing risks data sometimes occur in biomedical researches. In this paper, we study the proportional hazards regression model for the subdistribution of a competing risk with right-censored and left-truncated data. We adopt a new weighting technique to estimate the parameters in this model. We have derived the large sample properties of the proposed estimators. To illustrate the application of the new method, we analyze the failure time data for children with acute leukemia. In this example, the failure times for children who had bone marrow transplants were left truncated. PMID:21557288
Lee, Junkyo; Lee, Min Woo; Choi, Dongil; Cha, Dong Ik; Lee, Sunyoung; Kang, Tae Wook; Yang, Jehoon; Jo, Jaemoon; Bang, Won-Chul; Kim, Jongsik; Shin, Dongkuk
2017-12-21
The purpose of this study was to evaluate the accuracy of an active contour model for estimating the posterior ablative margin in images obtained by the fusion of real-time ultrasonography (US) and 3-dimensional (3D) US or magnetic resonance (MR) images of an experimental tumor model for radiofrequency ablation. Chickpeas (n=12) and bovine rump meat (n=12) were used as an experimental tumor model. Grayscale 3D US and T1-weighted MR images were pre-acquired for use as reference datasets. US and MR/3D US fusion was performed for one group (n=4), and US and 3D US fusion only (n=8) was performed for the other group. Half of the models in each group were completely ablated, while the other half were incompletely ablated. Hyperechoic ablation areas were extracted using an active contour model from real-time US images, and the posterior margin of the ablation zone was estimated from the anterior margin. After the experiments, the ablated pieces of bovine rump meat were cut along the electrode path and the cut planes were photographed. The US images with the estimated posterior margin were compared with the photographs and post-ablation MR images. The extracted contours of the ablation zones from 12 US fusion videos and post-ablation MR images were also matched. In the four models fused under real-time US with MR/3D US, compression from the transducer and the insertion of an electrode resulted in misregistration between the real-time US and MR images, making the estimation of the ablation zones less accurate than was achieved through fusion between real-time US and 3D US. Eight of the 12 post-ablation 3D US images were graded as good when compared with the sectioned specimens, and 10 of the 12 were graded as good in a comparison with nicotinamide adenine dinucleotide staining and histopathologic results. Estimating the posterior ablative margin using an active contour model is a feasible way of predicting the ablation area, and US/3D US fusion was more accurate than US/MR fusion.
Lee, Peter N; Fry, John S; Forey, Barbara A
2014-03-01
We quantified the decline in COPD risk following quitting using the negative exponential model, as previously carried out for other smoking-related diseases. We identified 14 blocks of RRs (from 11 studies) comparing current smokers, former smokers (by time quit) and never smokers, some studies providing sex-specific blocks. Corresponding pseudo-numbers of cases and controls/at risk formed the data for model-fitting. We estimated the half-life (H, time since quit when the excess risk becomes half that for a continuing smoker) for each block, except for one where no decline with quitting was evident, and H was not estimable. For the remaining 13 blocks, goodness-of-fit to the model was generally adequate, the combined estimate of H being 13.32 (95% CI 11.86-14.96) years. There was no heterogeneity in H, overall or by various studied sources. Sensitivity analyses allowing for reverse causation or different assumed times for the final quitting period had little effect on the results. The model summarizes quitting data well. The estimate of 13.32 years is substantially larger than recent estimates of 4.40 years for ischaemic heart disease and 4.78 years for stroke, and also larger than the 9.93 years for lung cancer. Heterogeneity was unimportant for COPD, unlike for the other three diseases. Copyright © 2013 The Authors. Published by Elsevier Inc. All rights reserved.
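A minimal worked illustration of the negative exponential decline using the half-life reported above: the excess relative risk is assumed to halve every H years after quitting, so RR(t) = 1 + (RR0 - 1)·2^(-t/H). The current-smoker RR of 3.0 below is an illustrative assumption, not a value from the study.

```python
# Minimal sketch: excess relative risk declining with time since quitting under
# the negative exponential model, using H = 13.32 years from the combined
# estimate above; the current-smoker RR of 3.0 is an illustrative assumption.
H = 13.32            # half-life (years)
rr_current = 3.0     # hypothetical RR for continuing smokers vs never smokers

def rr_quit(years_quit):
    return 1.0 + (rr_current - 1.0) * 0.5 ** (years_quit / H)

for y in (0, 5, 10, 13.32, 25, 40):
    print(f"{y:5.2f} years quit: RR = {rr_quit(y):.2f}")
```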
The Total Exposure Model (TEM) uses deterministic and stochastic methods to estimate the exposure of a person performing daily activities of eating, drinking, showering, and bathing. There were 250 time histories generated, by subject with activities, for the three exposure ro...
Setton, Eleanor M; Keller, C Peter; Cloutier-Fisher, Denise; Hystad, Perry W
2008-01-01
Background Chronic exposure to traffic-related air pollution is associated with a variety of health impacts in adults and recent studies show that exposure varies spatially, with some residents in a community more exposed than others. A spatial exposure simulation model (SESM) which incorporates six microenvironments (home indoor, work indoor, other indoor, outdoor, in-vehicle to work and in-vehicle other) is described and used to explore spatial variability in estimates of exposure to traffic-related nitrogen dioxide (not including indoor sources) for working people. The study models spatial variability in estimated exposure aggregated at the census tracts level for 382 census tracts in the Greater Vancouver Regional District of British Columbia, Canada. Summary statistics relating to the distributions of the estimated exposures are compared visually through mapping. Observed variations are explored through analyses of model inputs. Results Two sources of spatial variability in exposure to traffic-related nitrogen dioxide were identified. Median estimates of total exposure ranged from 8 μg/m3 to 35 μg/m3 of annual average hourly NO2 for workers in different census tracts in the study area. Exposure estimates are highest where ambient pollution levels are highest. This reflects the regional gradient of pollution in the study area and the relatively high percentage of time spent at home locations. However, for workers within the same census tract, variations were observed in the partial exposure estimates associated with time spent outside the residential census tract. Simulation modeling shows that some workers may have exposures 1.3 times higher than other workers residing in the same census tract because of time spent away from the residential census tract, and that time spent in work census tracts contributes most to the differences in exposure. Exposure estimates associated with the activity of commuting by vehicle to work were negligible, based on the relatively short amount of time spent in this microenvironment compared to other locations. We recognize that this may not be the case for pollutants other than NO2. These results represent the first time spatially disaggregated variations in exposure to traffic-related air pollution within a community have been estimated and reported. Conclusion The results suggest that while time spent in the home indoor microenvironment contributes most to between-census tract variation in estimates of annual average exposures to traffic-related NO2, time spent in the work indoor microenvironment contributes most to within-census tract variation, and time spent in transit by vehicle makes a negligible contribution. The SESM has potential as a policy evaluation tool, given input data that reflect changes in pollution levels or work flow patterns due to traffic demand management and land use development policy. PMID:18638398
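A minimal sketch of the underlying time-weighted microenvironment calculation: total exposure is the sum over microenvironments of the time fraction spent there multiplied by the concentration there. The six microenvironment names follow the abstract, but the concentrations and time fractions below are illustrative, not values from the SESM.

```python
# Minimal sketch (made-up numbers): time-weighted average NO2 exposure across the
# six microenvironments named above; values are illustrative, not study results.
fractions = {            # fraction of annual hours in each microenvironment
    "home_indoor": 0.55, "work_indoor": 0.22, "other_indoor": 0.08,
    "outdoor": 0.07, "in_vehicle_to_work": 0.03, "in_vehicle_other": 0.05,
}
no2 = {                  # annual average NO2 in each microenvironment (ug/m3)
    "home_indoor": 14.0, "work_indoor": 22.0, "other_indoor": 18.0,
    "outdoor": 25.0, "in_vehicle_to_work": 35.0, "in_vehicle_other": 30.0,
}
exposure = sum(fractions[k] * no2[k] for k in fractions)
print(f"annual average exposure: {exposure:.1f} ug/m3")
```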
Real-Time Monitoring and Prediction of the Pilot Vehicle System (PVS) Closed-Loop Stability
NASA Astrophysics Data System (ADS)
Mandal, Tanmay Kumar
Understanding human control behavior is an important step for improving the safety of future aircraft. Considerable resources are invested during the design phase of an aircraft to ensure that the aircraft has desirable handling qualities. However, human pilots exhibit a wide range of control behaviors that are a function of external stimulus, aircraft dynamics, and human psychological properties (such as workload, stress factor, confidence, and sense of urgency factor). This variability is difficult to address comprehensively during the design phase and may lead to undesirable pilot-aircraft interaction, such as pilot-induced oscillations (PIO). This creates the need to keep track of human pilot performance in real-time to monitor the pilot vehicle system (PVS) stability. This work focused on studying human pilot behavior for the longitudinal axis of a remotely controlled research aircraft and using human-in-the-loop (HuIL) simulations to obtain information about the human controlled system (HCS) stability. The work in this dissertation is divided into two main parts: PIO analysis and human control model parameters estimation. To replicate different flight conditions, this study included time delay and elevator rate limiting phenomena, typical of actuator dynamics during the experiments. To study human control behavior, this study employed the McRuer model for single-input single-output manual compensatory tasks. McRuer model is a lead-lag controller with time delay which has been shown to adequately model manual compensatory tasks. This dissertation presents a novel technique to estimate McRuer model parameters in real-time and associated validation using HuIL simulations to correctly predict HCS stability. The McRuer model parameters were estimated in real-time using a Kalman filter approach. The estimated parameters were then used to analyze the stability of the closed-loop HCS and verify them against the experimental data. Therefore, the main contribution of this dissertation is the design of an unscented Kalman filter-based algorithm to estimate McRuer model parameters in real time, and a framework to validate this algorithm for single-input single-output manual compensatory tasks to predict instabilities.
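A minimal sketch of the lead-lag-with-delay pilot model named above, Yp(s) = Kp·(TL·s + 1)/(TI·s + 1)·e^(-τs), combined with a simple integrator plant to check the open-loop crossover frequency and phase margin of the pilot-vehicle loop. The plant and all parameter values are illustrative assumptions, not estimates from the flight tests or the Kalman filter.

```python
# Minimal sketch (illustrative parameters): McRuer-style lead-lag pilot model with
# time delay combined with a plant Yc(s) = Kc/s; check crossover and phase margin
# of the open pilot-vehicle loop. Values are assumptions, not study estimates.
import numpy as np

Kp, TL, TI, tau = 2.0, 0.8, 0.2, 0.25      # pilot gain, lead, lag, delay (s)
Kc = 1.5                                    # plant gain for Yc(s) = Kc/s

w = np.logspace(-1, 1.5, 400)               # rad/s
s = 1j * w
Yp = Kp * (TL * s + 1) / (TI * s + 1) * np.exp(-tau * s)
Yc = Kc / s
L = Yp * Yc                                 # open-loop transfer function

mag = np.abs(L)
i_c = np.argmin(np.abs(mag - 1.0))          # gain crossover index
phase_margin = 180.0 + np.degrees(np.angle(L[i_c]))
print(f"crossover ~ {w[i_c]:.2f} rad/s, phase margin ~ {phase_margin:.1f} deg")
```

A real-time stability monitor of the kind described would recompute such margins as the Kalman filter updates the pilot-model parameter estimates.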
Generalized Ordinary Differential Equation Models
Miao, Hongyu; Wu, Hulin; Xue, Hongqi
2014-01-01
Existing estimation methods for ordinary differential equation (ODE) models are not applicable to discrete data. The generalized ODE (GODE) model is therefore proposed and investigated for the first time. We develop the likelihood-based parameter estimation and inference methods for GODE models. We propose robust computing algorithms and rigorously investigate the asymptotic properties of the proposed estimator by considering both measurement errors and numerical errors in solving ODEs. The simulation study and application of our methods to an influenza viral dynamics study suggest that the proposed methods have a superior performance in terms of accuracy over the existing ODE model estimation approach and the extended smoothing-based (ESB) method. PMID:25544787
Generalized Ordinary Differential Equation Models.
Miao, Hongyu; Wu, Hulin; Xue, Hongqi
2014-10-01
Existing estimation methods for ordinary differential equation (ODE) models are not applicable to discrete data. The generalized ODE (GODE) model is therefore proposed and investigated for the first time. We develop the likelihood-based parameter estimation and inference methods for GODE models. We propose robust computing algorithms and rigorously investigate the asymptotic properties of the proposed estimator by considering both measurement errors and numerical errors in solving ODEs. The simulation study and application of our methods to an influenza viral dynamics study suggest that the proposed methods have a superior performance in terms of accuracy over the existing ODE model estimation approach and the extended smoothing-based (ESB) method.
Inverse Analysis of Irradiated Nuclear Material Gamma Spectra via Nonlinear Optimization
NASA Astrophysics Data System (ADS)
Dean, Garrett James
Nuclear forensics is the collection of technical methods used to identify the provenance of nuclear material interdicted outside of regulatory control. Techniques employed in nuclear forensics include optical microscopy, gas chromatography, mass spectrometry, and alpha, beta, and gamma spectrometry. This dissertation focuses on the application of inverse analysis to gamma spectroscopy to estimate the history of pulse irradiated nuclear material. Previous work in this area has (1) utilized destructive analysis techniques to supplement the nondestructive gamma measurements, and (2) been applied to samples composed of spent nuclear fuel with long irradiation and cooling times. Previous analyses have employed local nonlinear solvers, simple empirical models of gamma spectral features, and simple detector models of gamma spectral features. The algorithm described in this dissertation uses a forward model of the irradiation and measurement process within a global nonlinear optimizer to estimate the unknown irradiation history of pulse irradiated nuclear material. The forward model includes a detector response function for photopeaks only. The algorithm uses a novel hybrid global and local search algorithm to quickly estimate the irradiation parameters, including neutron fluence, cooling time and original composition. Sequential, time correlated series of measurements are used to reduce the uncertainty in the estimated irradiation parameters. This algorithm allows for in situ measurements of interdicted irradiated material. The increase in analysis speed comes with a decrease in information that can be determined, but the sample fluence, cooling time, and composition can be determined within minutes of a measurement. Furthermore, pulse irradiated nuclear material has a characteristic feature that irradiation time and flux cannot be independently estimated. The algorithm has been tested against pulse irradiated samples of pure special nuclear material with cooling times of four minutes to seven hours. The algorithm described is capable of determining the cooling time and fluence the sample was exposed to within 10% as well as roughly estimating the relative concentrations of nuclides present in the original composition.
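A minimal sketch of the hybrid global-then-local search idea on a toy two-parameter forward model (two photopeaks decaying with different half-lives, so that an amplitude-like and a cooling-time-like parameter are both identifiable); the actual forward model of irradiation, decay chains, and detector response is not reproduced here.

```python
# Minimal sketch: global search (differential evolution) followed by a local
# refinement, on a toy forward model standing in for the irradiation/measurement
# model; parameters and data are illustrative.
import numpy as np
from scipy.optimize import differential_evolution, minimize

rng = np.random.default_rng(6)

def forward(params, t):
    # Toy model: two photopeaks from nuclides with different half-lives; counts
    # depend on a fluence-like amplitude and the cooling time before counting.
    fluence, cool = params
    return fluence * (np.exp(-(t + cool) / 2.0) + 0.5 * np.exp(-(t + cool) / 10.0))

t_meas = np.array([0.5, 1.0, 2.0, 4.0])           # hours after the start of counting
truth = (5.0e3, 2.0)
counts = forward(truth, t_meas) * (1 + 0.02 * rng.standard_normal(t_meas.size))

def objective(p):
    return np.sum((forward(p, t_meas) - counts) ** 2)

bounds = [(1.0e2, 1.0e5), (0.1, 24.0)]            # amplitude, cooling time (h)
global_fit = differential_evolution(objective, bounds, seed=1)
local_fit = minimize(objective, global_fit.x, method="Nelder-Mead")
print("estimated (fluence-like, cooling-time-like):", local_fit.x)
```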
CrowdWater - Can people observe what models need?
NASA Astrophysics Data System (ADS)
van Meerveld, I. H. J.; Seibert, J.; Vis, M.; Etter, S.; Strobl, B.
2017-12-01
CrowdWater (www.crowdwater.ch) is a citizen science project that explores the usefulness of crowd-sourced data for hydrological model calibration and prediction. Hydrological models are usually calibrated based on observed streamflow data but it is likely easier for people to estimate relative stream water levels, such as the water level above or below a rock, than streamflow. Relative stream water levels may, therefore, be a more suitable variable for citizen science projects than streamflow. In order to test this assumption, we held surveys near seven different sized rivers in Switzerland and asked more than 450 volunteers to estimate the water level class based on a picture with a virtual staff gauge. The results show that people can generally estimate the relative water level well, although there were also a few outliers. We also asked the volunteers to estimate streamflow based on the stick method. The median estimated streamflow was close to the observed streamflow but the spread in the streamflow estimates was large and there were very large outliers, suggesting that crowd-based streamflow data is highly uncertain. In order to determine the potential value of water level class data for model calibration, we converted streamflow time series for 100 catchments in the US to stream level class time series and used these to calibrate the HBV model. The model was then validated using the streamflow data. The results of this modeling exercise show that stream level class data are useful for constraining a simple runoff model. Time series of only two stream level classes, e.g. above or below a rock in the stream, were already informative, especially when the class boundary was chosen towards the highest stream levels. There was hardly any improvement in model performance when more than five water level classes were used. This suggests that if crowd-sourced stream level observations are available for otherwise ungauged catchments, these data can be used to constrain a simple runoff model and to generate simulated streamflow time series from the level observations.
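A minimal sketch of the conversion step described above, under assumed choices: an observed streamflow series is turned into a small number of stream level classes via quantile boundaries, and a simulated series is scored against the class series with a rank correlation. The class boundaries and scoring choice are illustrative, not the exact CrowdWater/HBV setup.

```python
# Minimal sketch: convert streamflow to stream level classes by quantile edges and
# score simulated flow against the class series with a rank correlation.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(7)
obs_q = np.exp(rng.normal(1.0, 0.8, 365))           # synthetic observed streamflow
sim_q = obs_q * np.exp(rng.normal(0.0, 0.3, 365))   # imperfect simulated streamflow

n_classes = 5
edges = np.quantile(obs_q, np.linspace(0, 1, n_classes + 1)[1:-1])
obs_class = np.digitize(obs_q, edges)               # classes 0 .. n_classes-1

rho, _ = spearmanr(obs_class, sim_q)
print(f"rank correlation between level classes and simulated flow: {rho:.2f}")
```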
NASA Astrophysics Data System (ADS)
Bargaoui, Zoubeida Kebaili; Bardossy, Andràs
2015-10-01
The paper investigates the spatial variability of heavy rainfall events using spatial copula analysis. To demonstrate the methodology, short time resolution rainfall time series from the Stuttgart region are analyzed. They consist of rainfall observations at a continuous 30 min time scale recorded over a network of 17 rain gauges for the period July 1989-July 2004. The analysis is performed by aggregating the observations from 30 min up to 24 h. Two parametric bivariate extreme copula models, the Husler-Reiss model and the Gumbel model, are investigated. Both involve a single parameter to be estimated. Model fitting is thus carried out for every pair of stations at a given time resolution. A rainfall threshold value representing a fixed rainfall quantile is adopted for model inference. Generalized maximum pseudo-likelihood estimation is adopted with censoring, by analogy with methods of univariate estimation that combine historical and paleoflood information with systematic data. Only pairs of observations greater than the threshold are treated as systematic data. Using the estimated copula parameter, a synthetic copula field is randomly generated and helps evaluate model adequacy, which is assessed using a Kolmogorov-Smirnov distance test. In order to assess dependence or independence in the upper tail, the extremal coefficient, which characterises the tail of the joint bivariate distribution, is adopted. Hence, the extremal coefficient is reported as a function of the interdistance between stations. If it is less than 1.7, stations are interpreted as dependent in the extremes. The analysis of the fitted extremal coefficients with respect to station interdistance highlights two regimes with different dependence structures: a short spatial extent regime linked to short duration intervals (from 30 min to 6 h) with an extent of about 8 km, and a large spatial extent regime related to longer rainfall intervals (from 12 h to 24 h) with an extent of 34 to 38 km.
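A minimal sketch of the step from a fitted Gumbel copula parameter to the extremal coefficient used above: for the Gumbel extreme-value copula with parameter θ ≥ 1, the bivariate extremal coefficient is 2^(1/θ) (2 indicates asymptotic independence, 1 complete dependence). The per-pair θ values and distances below are illustrative placeholders, not the fitted values from the study.

```python
# Minimal sketch: extremal coefficient 2**(1/theta) for Gumbel copula parameters
# fitted per station pair, reported against interdistance; values are placeholders.
import numpy as np

pair_distance_km = np.array([2.0, 5.0, 8.0, 15.0, 25.0, 38.0])
fitted_theta = np.array([2.8, 2.1, 1.7, 1.4, 1.2, 1.1])   # hypothetical Gumbel parameters

extremal_coeff = 2.0 ** (1.0 / fitted_theta)
for d, ec in zip(pair_distance_km, extremal_coeff):
    flag = "dependent" if ec < 1.7 else "near-independent"
    print(f"{d:5.1f} km: extremal coefficient = {ec:.2f} ({flag})")
```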
Demographic estimation methods for plants with unobservable life-states
Kery, M.; Gregg, K.B.; Schaub, M.
2005-01-01
Demographic estimation of vital parameters in plants with an unobservable dormant state is complicated, because time of death is not known. Conventional methods assume that death occurs at a particular time after a plant has last been seen aboveground, but the consequences of assuming a particular duration of dormancy have never been tested. Capture-recapture methods do not make assumptions about time of death; however, problems with parameter estimability have not yet been resolved. To date, a critical comparative assessment of these methods is lacking. We analysed data from a 10 year study of Cleistes bifaria, a terrestrial orchid with frequent dormancy, and compared demographic estimates obtained by five varieties of the conventional methods and two capture-recapture methods. All conventional methods produced spurious unity survival estimates for some years or for some states, and estimates of demographic rates sensitive to the time of death assumption. In contrast, capture-recapture methods are more parsimonious in terms of assumptions, are based on well-founded theory, and did not produce spurious estimates. In Cleistes, dormant episodes lasted for 1-4 years (mean 1.4, SD 0.74). The capture-recapture models estimated ramet survival rate at 0.86 (SE ~ 0.01), ranging from 0.77 to 0.94 (SEs ≤ 0.1) in any one year. The average fraction dormant was estimated at 30% (SE 1.5), ranging from 16% to 47% (SEs ≤ 5.1) in any one year. Multistate capture-recapture models showed that survival rates were positively related to precipitation in the current year, but transition rates were more strongly related to precipitation in the previous than in the current year, with more ramets going dormant following dry years. Not all capture-recapture models of interest have estimable parameters; for instance, without excavating plants in years when they do not appear aboveground, it is not possible to obtain independent time-specific survival estimates for dormant plants. We introduce rigorous computer algebra methods to identify the parameters that are estimable in principle. As life-states are a prominent feature in plant life cycles, multistate capture-recapture models are a natural framework for analysing population dynamics of plants with dormancy.
Xia, Peng; Hu, Jie; Peng, Yinghong
2017-10-25
A novel model based on deep learning is proposed to estimate kinematic information for myoelectric control from multi-channel electromyogram (EMG) signals. The neural information of limb movement is embedded in EMG signals, which are influenced by many factors. In order to overcome the negative effects of signal variability, the proposed model employs a deep architecture combining convolutional neural networks (CNNs) and recurrent neural networks (RNNs). The EMG signals are transformed into time-frequency frames as the input to the model. Limb movement is estimated by the model, which is trained with gradient descent and backpropagation. We tested the model for simultaneous and proportional estimation of limb movement in eight healthy subjects and compared it with support vector regression (SVR) and CNNs on the same data set. The experimental studies show that the proposed model has higher estimation accuracy and better robustness with respect to time. The combination of CNNs and RNNs can improve the model performance compared with using CNNs alone. The deep architecture is promising for EMG decoding, and optimizing the network structure can further increase accuracy and robustness. © 2017 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
Nonparametric Transfer Function Models
Liu, Jun M.; Chen, Rong; Yao, Qiwei
2009-01-01
In this paper a class of nonparametric transfer function models is proposed to model nonlinear relationships between ‘input’ and ‘output’ time series. The transfer function is smooth with unknown functional forms, and the noise is assumed to be a stationary autoregressive-moving average (ARMA) process. The nonparametric transfer function is estimated jointly with the ARMA parameters. By modeling the correlation in the noise, the transfer function can be estimated more efficiently. The parsimonious ARMA structure improves the estimation efficiency in finite samples. The asymptotic properties of the estimators are investigated. The finite-sample properties are illustrated through simulations and one empirical example. PMID:20628584
Constructing stage-structured matrix population models from life tables: comparison of methods
Diaz-Lopez, Jasmin
2017-01-01
A matrix population model is a convenient tool for summarizing per capita survival and reproduction rates (collectively vital rates) of a population and can be used for calculating an asymptotic finite population growth rate (λ) and generation time. These two pieces of information can be used for determining the status of a threatened species. The use of stage-structured population models has increased in recent years, and the vital rates in such models are often estimated using a life table analysis. However, potential bias introduced when converting age-structured vital rates estimated from a life table into parameters for a stage-structured population model has not been assessed comprehensively. The objective of this study was to investigate the performance of methods for such conversions using simulated life histories of organisms. The underlying models incorporate various types of life history and true population growth rates of varying levels. The performance was measured by comparing differences in λ and the generation time calculated using the Euler-Lotka equation, age-structured population matrices, and several stage-structured population matrices that were obtained by applying different conversion methods. The results show that the discretization of age introduces only small bias in λ or generation time. Similarly, assuming a fixed age of maturation at the mean age of maturation does not introduce much bias. However, aggregating age-specific survival rates into a stage-specific survival rate and estimating a stage-transition rate can introduce substantial bias depending on the organism’s life history type and the true values of λ. In order to aggregate survival rates, the use of the weighted arithmetic mean was the most robust method for estimating λ. Here, the weights are given by survivorship curve after discounting with λ. To estimate a stage-transition rate, matching the proportion of individuals transitioning, with λ used for discounting the rate, was the best approach. However, stage-structured models performed poorly in estimating generation time, regardless of the methods used for constructing the models. Based on the results, we recommend using an age-structured matrix population model or the Euler-Lotka equation for calculating λ and generation time when life table data are available. Then, these age-structured vital rates can be converted into a stage-structured model for further analyses. PMID:29085763
Constructing stage-structured matrix population models from life tables: comparison of methods.
Fujiwara, Masami; Diaz-Lopez, Jasmin
2017-01-01
A matrix population model is a convenient tool for summarizing per capita survival and reproduction rates (collectively vital rates) of a population and can be used for calculating an asymptotic finite population growth rate ( λ ) and generation time. These two pieces of information can be used for determining the status of a threatened species. The use of stage-structured population models has increased in recent years, and the vital rates in such models are often estimated using a life table analysis. However, potential bias introduced when converting age-structured vital rates estimated from a life table into parameters for a stage-structured population model has not been assessed comprehensively. The objective of this study was to investigate the performance of methods for such conversions using simulated life histories of organisms. The underlying models incorporate various types of life history and true population growth rates of varying levels. The performance was measured by comparing differences in λ and the generation time calculated using the Euler-Lotka equation, age-structured population matrices, and several stage-structured population matrices that were obtained by applying different conversion methods. The results show that the discretization of age introduces only small bias in λ or generation time. Similarly, assuming a fixed age of maturation at the mean age of maturation does not introduce much bias. However, aggregating age-specific survival rates into a stage-specific survival rate and estimating a stage-transition rate can introduce substantial bias depending on the organism's life history type and the true values of λ . In order to aggregate survival rates, the use of the weighted arithmetic mean was the most robust method for estimating λ . Here, the weights are given by survivorship curve after discounting with λ . To estimate a stage-transition rate, matching the proportion of individuals transitioning, with λ used for discounting the rate, was the best approach. However, stage-structured models performed poorly in estimating generation time, regardless of the methods used for constructing the models. Based on the results, we recommend using an age-structured matrix population model or the Euler-Lotka equation for calculating λ and generation time when life table data are available. Then, these age-structured vital rates can be converted into a stage-structured model for further analyses.
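A minimal sketch of the core calculations discussed above, with made-up vital rates: λ is obtained as the dominant eigenvalue of an age-structured (Leslie) matrix, checked against the Euler-Lotka equation, and generation time is computed as the mean age of reproduction discounted by λ (one common definition).

```python
# Minimal sketch (made-up vital rates): lambda from a Leslie matrix, an
# Euler-Lotka check, and generation time as the lambda-discounted mean age of
# reproduction.
import numpy as np

surv = np.array([0.5, 0.7, 0.8, 0.8])      # survival from age class a to a+1
fec = np.array([0.0, 1.2, 2.0, 2.5, 2.5])  # per-capita fecundity of age classes 0..4

n = fec.size
A = np.zeros((n, n))
A[0, :] = fec                              # fecundities in the top row
A[np.arange(1, n), np.arange(0, n - 1)] = surv   # survivals on the subdiagonal

lam = np.max(np.linalg.eigvals(A).real)    # dominant eigenvalue (finite growth rate)

l = np.concatenate(([1.0], np.cumprod(surv)))    # survivorship to each age class
ages = np.arange(n)
euler_lotka = np.sum(lam ** -(ages + 1) * l * fec)   # should be ~1 at the true lambda
gen_time = np.sum((ages + 1) * lam ** -(ages + 1) * l * fec)

print(f"lambda = {lam:.3f}, Euler-Lotka check = {euler_lotka:.3f}, "
      f"generation time = {gen_time:.2f}")
```

Aggregating these age-specific rates into stage-specific survival and transition rates, as described above, is where the conversion bias can enter.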
Inferring invasive species abundance using removal data from management actions.
Davis, Amy J; Hooten, Mevin B; Miller, Ryan S; Farnsworth, Matthew L; Lewis, Jesse; Moxcey, Michael; Pepin, Kim M
2016-10-01
Evaluation of the progress of management programs for invasive species is crucial for demonstrating impacts to stakeholders and strategic planning of resource allocation. Estimates of abundance before and after management activities can serve as a useful metric of population management programs. However, many methods of estimating population size are too labor-intensive and costly to implement, posing restrictive levels of burden on operational programs. Removal models are a reliable method for estimating abundance before and after management using data from the removal activities exclusively, thus requiring no work in addition to management. We developed a Bayesian hierarchical model to estimate abundance from removal data accounting for varying levels of effort, and used simulations to assess the conditions under which reliable population estimates are obtained. We applied this model to estimate site-specific abundance of an invasive species, feral swine (Sus scrofa), using removal data from aerial gunning in 59 site/time-frame combinations (480-19,600 acres) throughout Oklahoma and Texas, USA. Simulations showed that abundance estimates were generally accurate when effective removal rates (removal rate accounting for total effort) were above 0.40. However, when abundances were small (<50) the effective removal rate needed to estimate abundance accurately was considerably higher (0.70). Based on our post-validation method, 78% of our site/time frame estimates were accurate. To use this modeling framework it is important to have multiple removals (more than three) within a time frame during which demographic changes are minimized (i.e., a closed population; ≤3 months for feral swine). Our results show that the probability of accurately estimating abundance from this model improves with increased sampling effort (8+ flight hours across the 3-month window is best) and increased removal rate. Based on the inverse relationship between inaccurate abundances and inaccurate removal rates, we suggest auxiliary information that could be collected and included in the model as covariates (e.g., habitat effects, differences between pilots) to improve accuracy of removal rates and hence abundance estimates. © 2016 by the Ecological Society of America.
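A minimal sketch of the removal-model idea in its simplest form: a constant-capture-probability removal likelihood for one site's sequence of removal counts, maximized over abundance N and removal rate p by a grid search. This is a stripped-down stand-in for the Bayesian hierarchical model with effort covariates described above; the counts are hypothetical.

```python
# Minimal sketch (hypothetical counts): constant-probability removal likelihood,
# maximized over abundance N and removal rate p for a single site.
import numpy as np
from scipy.stats import binom

removals = np.array([38, 21, 12, 6])          # counts from 4 successive removal passes

def log_lik(N, p):
    remaining, ll = N, 0.0
    for c in removals:
        if c > remaining:
            return -np.inf
        ll += binom.logpmf(c, remaining, p)   # catch c of the remaining animals
        remaining -= c
    return ll

N_grid = np.arange(removals.sum(), 400)
p_grid = np.linspace(0.05, 0.95, 91)
ll = np.array([[log_lik(N, p) for p in p_grid] for N in N_grid])
iN, ip = np.unravel_index(np.argmax(ll), ll.shape)
print(f"MLE: N = {N_grid[iN]}, removal rate p = {p_grid[ip]:.2f}")
```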
Viallon, Vivian; Latouche, Aurélien
2011-03-01
Identifying biomarkers and building risk scores to predict the occurrence of survival outcomes is a major concern of clinical epidemiology, and so is the evaluation of prognostic models. In this paper, we are concerned with the estimation of the time-dependent AUC--the area under the receiver operating characteristic curve--which naturally extends standard AUC to the setting of survival outcomes and enables evaluation of the discriminative power of prognostic models. We establish a simple and useful relation between the predictiveness curve and the time-dependent AUC--AUC(t). This relation confirms that the predictiveness curve is the key concept for evaluating calibration and discrimination of prognostic models. It also highlights that accurate estimates of the conditional absolute risk function should yield accurate estimates for AUC(t). From this observation, we derive several estimators for AUC(t) relying on distinct estimators of the conditional absolute risk function. An empirical study was conducted to compare our estimators with the existing ones and assess the effect of model misspecification--when estimating the conditional absolute risk function--on the AUC(t) estimation. We further illustrate the methodology on the Mayo PBC and the VA lung cancer data sets. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Reconstructing the hidden states in time course data of stochastic models.
Zimmer, Christoph
2015-11-01
Parameter estimation is central to analyzing models in Systems Biology. The relevance of stochastic modeling in the field is increasing. Therefore, the need for tailored parameter estimation techniques is increasing as well. Challenges for parameter estimation are partial observability, measurement noise, and the computational complexity arising from the dimension of the parameter space. This article extends the multiple shooting for stochastic systems method, developed for inference in intrinsic stochastic systems. The treatment of extrinsic noise and the estimation of the unobserved states are improved by taking into account the correlation between unobserved and observed species. This article demonstrates the power of the method on different scenarios of a Lotka-Volterra model, including cases in which the prey population dies out or explodes, and a calcium oscillation system. Besides showing how the new extension improves the accuracy of the parameter estimates, this article analyzes the accuracy of the state estimates. In contrast to previous approaches, the new approach is well able to estimate states and parameters for all the scenarios. As it does not need stochastic simulations, it is of the same order of speed as conventional least squares parameter estimation methods with respect to computational time. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
Stochastic differential equation (SDE) model of opening gold share price of bursa saham malaysia
NASA Astrophysics Data System (ADS)
Hussin, F. N.; Rahman, H. A.; Bahar, A.
2017-09-01
The Black and Scholes option pricing model is one of the most recognized stochastic differential equation models in mathematical finance. Two parameter estimation methods have been utilized for the Geometric Brownian Motion (GBM): the historical method and the discrete method. The historical method is a statistical method that uses the independence and normality of logarithmic returns, giving the simplest parameter estimates. The discrete method, in contrast, uses the transition density of the lognormal diffusion process, with estimates derived by maximum likelihood. These two methods are used to obtain parameter estimates from samples of Malaysian gold share price data, namely the Financial Times Stock Exchange (FTSE) Bursa Malaysia Emas and FTSE Bursa Malaysia Emas Shariah indices. Modelling the gold share price is important since fluctuations in gold prices affect the worldwide economy, including Malaysia. It is found that the discrete method gives better parameter estimates than the historical method, as it yields the smallest Root Mean Square Error (RMSE) value.
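A minimal sketch of the historical method for a GBM: drift and volatility are estimated from the sample mean and variance of daily log returns, and the fitted model is then simulated forward with an RMSE comparison against the observed path. The price series below is synthetic, not the FTSE Bursa Malaysia Emas data used in the study.

```python
# Minimal sketch (synthetic prices): historical-method estimation of GBM drift and
# volatility from log returns, then one simulated path and its RMSE.
import numpy as np

rng = np.random.default_rng(8)
dt = 1.0 / 252                                  # daily step in trading years
true_mu, true_sigma, S0 = 0.08, 0.20, 100.0

# Synthetic "observed" price path
z = rng.standard_normal(1000)
log_s = np.log(S0) + np.cumsum((true_mu - 0.5 * true_sigma**2) * dt
                               + true_sigma * np.sqrt(dt) * z)
prices = np.exp(log_s)

# Historical method: sample mean and standard deviation of log returns
r = np.diff(np.log(prices))
sigma_hat = r.std(ddof=1) / np.sqrt(dt)
mu_hat = r.mean() / dt + 0.5 * sigma_hat**2

# One simulated path from the fitted GBM, compared against the observed path
z_sim = rng.standard_normal(r.size)
sim = prices[0] * np.exp(np.cumsum((mu_hat - 0.5 * sigma_hat**2) * dt
                                   + sigma_hat * np.sqrt(dt) * z_sim))
rmse = np.sqrt(np.mean((sim - prices[1:]) ** 2))
print(f"mu_hat = {mu_hat:.3f}, sigma_hat = {sigma_hat:.3f}, RMSE = {rmse:.2f}")
```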
NASA Astrophysics Data System (ADS)
Nejad, S.; Gladwin, D. T.; Stone, D. A.
2016-06-01
This paper presents a systematic review of the most commonly used lumped-parameter equivalent circuit model structures in lithium-ion battery energy storage applications. These models include the Combined model, Rint model, two hysteresis models, Randles' model, a modified Randles' model and two resistor-capacitor (RC) network models with and without hysteresis included. Two variations of the lithium-ion cell chemistry, namely lithium iron phosphate (LiFePO4) and lithium nickel-manganese-cobalt oxide (LiNMC), are used for testing purposes. The model parameters and states are recursively estimated using a nonlinear system identification technique based on the dual Extended Kalman Filter (dual-EKF) algorithm. The dynamic performance of the model structures is verified using the results obtained from a self-designed pulsed-current test and an electric vehicle (EV) drive cycle based on the New European Drive Cycle (NEDC) profile over a range of operating temperatures. Analyses of the ten model structures are conducted with respect to state-of-charge (SOC) and state-of-power (SOP) estimation with erroneous initial conditions. Comparatively, both RC model structures provide the best dynamic performance, with outstanding SOC estimation accuracy. For those cell chemistries with large inherent hysteresis levels (e.g. LiFePO4), the RC model with only one time constant is combined with a dynamic hysteresis model to further enhance the performance of the SOC estimator.
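To make the filtering idea concrete, the sketch below runs a single extended Kalman filter on a first-order RC equivalent circuit to track state-of-charge from current and terminal-voltage samples. Unlike the dual EKF used in the paper, the cell parameters (R0, R1, C1, capacity) and the linearised OCV-SOC curve are fixed, hypothetical values.

```python
# Single EKF on a 1-RC equivalent circuit; state = [SOC, RC over-potential].
import numpy as np

dt, Q = 1.0, 2.3 * 3600          # s, cell capacity in As (assumed 2.3 Ah)
R0, R1, C1 = 0.01, 0.015, 2400.0 # ohm, ohm, F (assumed)
a = np.exp(-dt / (R1 * C1))

def ocv(soc):                    # crude linear OCV-SOC curve (assumed)
    return 3.2 + 0.9 * soc

x = np.array([0.5, 0.0])         # initial SOC guess and RC voltage
P = np.diag([0.1, 1e-3])
Qn = np.diag([1e-7, 1e-6])       # process noise (tuning values)
Rn = 1e-3                        # measurement noise variance

def ekf_step(x, P, current, v_meas):
    # predict: coulomb counting plus RC relaxation
    F = np.array([[1.0, 0.0], [0.0, a]])
    x = np.array([x[0] - current * dt / Q,
                  a * x[1] + R1 * (1 - a) * current])
    P = F @ P @ F.T + Qn
    # update with the terminal voltage measurement
    H = np.array([0.9, -1.0])            # d(v)/d(SOC) from the linear OCV, d(v)/d(v_rc)
    v_pred = ocv(x[0]) - x[1] - R0 * current
    S = H @ P @ H + Rn
    K = P @ H / S
    x = x + K * (v_meas - v_pred)
    P = (np.eye(2) - np.outer(K, H)) @ P
    return x, P

x, P = ekf_step(x, P, current=1.0, v_meas=3.6)   # one discharge sample
print("SOC estimate:", x[0])
```

A dual EKF would run a second, slower filter in parallel to update the circuit parameters themselves from the same measurements.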
Bender, David A.; Asher, William E.; Zogorski, John S.
2003-01-01
This report documents LakeVOC, a model to estimate volatile organic compound (VOC) concentrations in lakes and reservoirs. LakeVOC represents the lake or reservoir as a two-layer system and estimates VOC concentrations in both the epilimnion and hypolimnion. The air-water flux of a VOC is characterized in LakeVOC in terms of the two-film model of air-water exchange. LakeVOC solves the system of coupled differential equations for the VOC concentration in the epilimnion, the VOC concentration in the hypolimnion, the total mass of the VOC in the lake, the volume of the epilimnion, and the volume of the hypolimnion. A series of nine simulations was conducted to verify LakeVOC's representation of mixing, dilution, and gas exchange characteristics in a hypothetical lake, and two additional estimates of lake volume and MTBE concentrations were made in an actual reservoir under environmental conditions. These 11 simulations showed that LakeVOC correctly handled mixing, dilution, and gas exchange. The model also adequately estimated VOC concentrations within the epilimnion in an actual reservoir with daily input parameters. As the parameter-input time scale increased (from daily to weekly to monthly, for example), the differences between the measured-averaged concentrations and the model-estimated concentrations generally increased, especially for the hypolimnion. This may be because as the time scale is increased from daily to weekly to monthly, the averaging of model inputs may cause a loss of detail in the model estimates.
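A rough sketch of the kind of two-layer box model the report describes, with two-film air-water exchange at the surface and exchange across the thermocline; it is not LakeVOC itself, and the volumes, areas, exchange velocities, Henry's law constant, and loading are invented for illustration.

```python
# Two-layer (epilimnion/hypolimnion) VOC box model with two-film surface exchange.
import numpy as np
from scipy.integrate import solve_ivp

V_epi, V_hyp = 5e6, 2e7       # m^3 (assumed, fixed volumes)
A_surf, A_therm = 1e6, 8e5    # m^2 surface and thermocline areas (assumed)
kL, H = 2.0 / 86400, 0.02     # m/s gas-exchange velocity, dimensionless Henry constant
k_mix = 1e-7                  # m/s exchange velocity across the thermocline
C_air = 0.0                   # atmospheric concentration (assumed zero)
load = 0.5                    # VOC mass input to the epilimnion, g/s (assumed)

def rhs(t, C):
    Ce, Ch = C
    flux_air = kL * A_surf * (Ce - C_air / H)   # two-film loss to the atmosphere
    flux_mix = k_mix * A_therm * (Ce - Ch)      # exchange across the thermocline
    dCe = (load - flux_air - flux_mix) / V_epi
    dCh = flux_mix / V_hyp
    return [dCe, dCh]

sol = solve_ivp(rhs, (0, 90 * 86400), [0.0, 0.0], max_step=3600)
print("final concentrations (g/m^3):", sol.y[:, -1])
```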
The Biasing Effects of Unmodeled ARMA Time Series Processes on Latent Growth Curve Model Estimates
ERIC Educational Resources Information Center
Sivo, Stephen; Fan, Xitao; Witta, Lea
2005-01-01
The purpose of this study was to evaluate the robustness of estimated growth curve models when there is stationary autocorrelation among manifest variable errors. The results suggest that when, in practice, growth curve models are fitted to longitudinal data, alternative rival hypotheses to consider would include growth models that also specify…
Wang, Qian; Molenaar, Peter; Harsh, Saurabh; Freeman, Kenneth; Xie, Jinyu; Gold, Carol; Rovine, Mike; Ulbrecht, Jan
2014-03-01
An essential component of any artificial pancreas is the prediction of blood glucose levels as a function of exogenous and endogenous perturbations such as insulin dose, meal intake, physical activity, and emotional tone under natural living conditions. In this article, we present a new data-driven state-space dynamic model with time-varying coefficients that are used to explicitly quantify the time-varying patient-specific effects of insulin dose and meal intake on blood glucose fluctuations. Using the 3-variate time series of glucose level, insulin dose, and meal intake of an individual type 1 diabetic subject, we apply an extended Kalman filter (EKF) to estimate time-varying coefficients of the patient-specific state-space model. We evaluate our empirical modeling using (1) the FDA-approved UVa/Padova simulator with 30 virtual patients and (2) clinical data of 5 type 1 diabetic patients under natural living conditions. Compared to a forgetting-factor-based recursive ARX model of the same order, the EKF model predictions have a better fit and significantly better temporal gain and J index, and are thus superior in early detection of upward and downward trends in glucose. The EKF-based state-space model developed in this article is particularly suitable for model-based state-feedback control designs since the Kalman filter estimates the state variable of the glucose dynamics based on the measured glucose time series. In addition, since the model parameters are estimated in real time, this model is also suitable for adaptive control. © 2014 Diabetes Technology Society.
Space-Time Smoothing of Complex Survey Data: Small Area Estimation for Child Mortality.
Mercer, Laina D; Wakefield, Jon; Pantazis, Athena; Lutambi, Angelina M; Masanja, Honorati; Clark, Samuel
2015-12-01
Many people living in low and middle-income countries are not covered by civil registration and vital statistics systems. Consequently, a wide variety of other types of data including many household sample surveys are used to estimate health and population indicators. In this paper we combine data from sample surveys and demographic surveillance systems to produce small area estimates of child mortality through time. Small area estimates are necessary to understand geographical heterogeneity in health indicators when full-coverage vital statistics are not available. For this endeavor spatio-temporal smoothing is beneficial to alleviate problems of data sparsity. The use of conventional hierarchical models requires careful thought since the survey weights may need to be considered to alleviate bias due to non-random sampling and non-response. The application that motivated this work is estimation of child mortality rates in five-year time intervals in regions of Tanzania. Data come from Demographic and Health Surveys conducted over the period 1991-2010 and two demographic surveillance system sites. We derive a variance estimator of under five years child mortality that accounts for the complex survey weighting. For our application, the hierarchical models we consider include random effects for area, time and survey and we compare models using a variety of measures including the conditional predictive ordinate (CPO). The method we propose is implemented via the fast and accurate integrated nested Laplace approximation (INLA).
HDDM: Hierarchical Bayesian estimation of the Drift-Diffusion Model in Python.
Wiecki, Thomas V; Sofer, Imri; Frank, Michael J
2013-01-01
The diffusion model is a commonly used tool to infer latent psychological processes underlying decision-making, and to link them to neural mechanisms based on response times. Although efficient open source software has been made available to quantitatively fit the model to data, current estimation methods require an abundance of response time measurements to recover meaningful parameters, and only provide point estimates of each parameter. In contrast, hierarchical Bayesian parameter estimation methods are useful for enhancing statistical power, allowing for simultaneous estimation of individual subject parameters and the group distribution that they are drawn from, while also providing measures of uncertainty in these parameters in the posterior distribution. Here, we present a novel Python-based toolbox called HDDM (hierarchical drift diffusion model), which allows fast and flexible estimation of the drift-diffusion model and the related linear ballistic accumulator model. HDDM requires fewer data per subject/condition than non-hierarchical methods, allows for full Bayesian data analysis, and can handle outliers in the data. Finally, HDDM supports the estimation of how trial-by-trial measurements (e.g., fMRI) influence decision-making parameters. This paper will first describe the theoretical background of the drift diffusion model and Bayesian inference. We then illustrate usage of the toolbox on a real-world data set from our lab. Finally, parameter recovery studies show that HDDM beats alternative fitting methods like the χ²-quantile method as well as maximum likelihood estimation. The software and documentation can be downloaded at: http://ski.clps.brown.edu/hddm_docs/
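Rather than reproduce the HDDM API, the sketch below simulates the underlying drift-diffusion process that the toolbox fits: evidence accumulates with drift v and unit-variance noise until it reaches 0 or the threshold a, yielding a choice and a response time. The parameter values are illustrative only.

```python
# Euler simulation of the drift-diffusion model (DDM).
import numpy as np

def simulate_ddm(v=0.5, a=2.0, z=0.5, t0=0.3, dt=1e-3, n_trials=500, seed=0):
    rng = np.random.default_rng(seed)
    rts, choices = [], []
    for _ in range(n_trials):
        x, t = z * a, 0.0                    # start at fraction z of the threshold
        while 0.0 < x < a:
            x += v * dt + rng.normal(0.0, np.sqrt(dt))
            t += dt
        rts.append(t + t0)                   # add non-decision time
        choices.append(1 if x >= a else 0)
    return np.array(rts), np.array(choices)

rts, choices = simulate_ddm()
print(f"upper-bound responses: {choices.mean():.2f}, mean RT: {rts.mean():.3f}s")
```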
Dornburg, Alex; Brandley, Matthew C; McGowen, Michael R; Near, Thomas J
2012-02-01
Various nucleotide substitution models have been developed to accommodate among-lineage rate heterogeneity, thereby relaxing the assumptions of the strict molecular clock. Recently developed "uncorrelated relaxed clock" and "random local clock" (RLC) models allow decoupling of nucleotide substitution rates between descendant lineages and are thus predicted to perform better in the presence of lineage-specific rate heterogeneity. However, it is uncertain how these models perform in the presence of punctuated shifts in substitution rate, especially between closely related clades. Using cetaceans (whales and dolphins) as a case study, we test the performance of these two substitution models in estimating both molecular rates and divergence times in the presence of substantial lineage-specific rate heterogeneity. Our RLC analyses of whole mitochondrial genome alignments find evidence for up to ten clade-specific nucleotide substitution rate shifts in cetaceans. We provide evidence that in the uncorrelated relaxed clock framework, a punctuated shift in the rate of molecular evolution within a subclade results in posterior rate estimates that are either misled or intermediate between the disparate rate classes present in baleen and toothed whales. Using simulations, we demonstrate that abrupt changes in rate isolated to one or a few lineages in the phylogeny can mislead rate and age estimation, even when the node of interest is calibrated. We further demonstrate how increasing prior age uncertainty can bias rate and age estimates, even while the 95% highest posterior density around age estimates decreases; in other words, increased precision for an inaccurate estimate. We interpret the use of external calibrations in divergence time studies in light of these results, suggesting that rate shifts at deep time scales may mislead inferences of absolute molecular rates and ages.
Estimation of variance in Cox's regression model with shared gamma frailties.
Andersen, P K; Klein, J P; Knudsen, K M; Tabanera y Palacios, R
1997-12-01
The Cox regression model with a shared frailty factor allows for unobserved heterogeneity or for statistical dependence between the observed survival times. Estimation in this model when the frailties are assumed to follow a gamma distribution is reviewed, and we address the problem of obtaining variance estimates for regression coefficients, frailty parameter, and cumulative baseline hazards using the observed nonparametric information matrix. A number of examples are given comparing this approach with fully parametric inference in models with piecewise constant baseline hazards.
Adamski, Alys; Bertolli, Jeanne; Castañeda-Orjuela, Carlos; Devine, Owen J; Johansson, Michael A; Duarte, Maritza Adegnis Gonzalez; Farr, Sherry L; Tinker, Sarah C; Reyes, Marcela Maria Mercado; Tong, Van T; Garcia, Oscar Eduardo Pacheco; Valencia, Diana; Ortiz, Diego Alberto Cuellar; Honein, Margaret A; Jamieson, Denise J; Martínez, Martha Lucía Ospina; Gilboa, Suzanne M
2018-06-01
Colombia experienced a Zika virus (ZIKV) outbreak in 2015-2016. To assist with planning for medical and supportive services for infants affected by prenatal ZIKV infection, we used a model to estimate the number of pregnant women infected with ZIKV and the number of infants with congenital microcephaly from August 2015 to August 2017. We used nationally reported cases of symptomatic ZIKV disease among pregnant women and information from the literature on the percent of asymptomatic infections to estimate the number of pregnant women with ZIKV infection occurring August 2015-December 2016. We then estimated the number of infants with congenital microcephaly expected to occur August 2015-August 2017. To compare to the observed counts of infants with congenital microcephaly due to all causes reported through the national birth defects surveillance system, the model was time limited to produce estimates for February-November 2016. We estimated 1140-2160 (interquartile range [IQR]) infants with congenital microcephaly in Colombia, during August 2015-August 2017, whereas 340-540 infants with congenital microcephaly would be expected in the absence of ZIKV. Based on the time limited version of the model, for February-November 2016, we estimated 650-1410 infants with congenital microcephaly in Colombia. The 95% uncertainty interval for the latter estimate encompasses the 476 infants with congenital microcephaly reported during that approximate time frame based on national birth defects surveillance. Based on modeled estimates, ZIKV infection during pregnancy in Colombia could lead to 3-4 times as many infants with congenital microcephaly in 2015-2017 as would have been expected in the absence of the ZIKV outbreak. This publication was made possible through support provided by the Bureau for Global Health, U.S. Agency for International Development under the terms of an Interagency Agreement with Centers for Disease Control and Prevention. Published by Elsevier Ltd.
Falgreen, Steffen; Laursen, Maria Bach; Bødker, Julie Støve; Kjeldsen, Malene Krag; Schmitz, Alexander; Nyegaard, Mette; Johnsen, Hans Erik; Dybkær, Karen; Bøgsted, Martin
2014-06-05
In vitro generated dose-response curves of human cancer cell lines are widely used to develop new therapeutics. The curves are summarised by simplified statistics that ignore the conventionally used dose-response curves' dependency on drug exposure time and growth kinetics. This may lead to suboptimal exploitation of data and biased conclusions on the potential of the drug in question. Therefore we set out to improve the dose-response assessments by eliminating the impact of time dependency. First, a mathematical model for drug induced cell growth inhibition was formulated and used to derive novel dose-response curves and improved summary statistics that are independent of time under the proposed model. Next, a statistical analysis workflow for estimating the improved statistics was suggested consisting of 1) nonlinear regression models for estimation of cell counts and doubling times, 2) isotonic regression for modelling the suggested dose-response curves, and 3) resampling based method for assessing variation of the novel summary statistics. We document that conventionally used summary statistics for dose-response experiments depend on time so that fast growing cell lines compared to slowly growing ones are considered overly sensitive. The adequacy of the mathematical model is tested for doxorubicin and found to fit real data to an acceptable degree. Dose-response data from the NCI60 drug screen were used to illustrate the time dependency and demonstrate an adjustment correcting for it. The applicability of the workflow was illustrated by simulation and application on a doxorubicin growth inhibition screen. The simulations show that under the proposed mathematical model the suggested statistical workflow results in unbiased estimates of the time independent summary statistics. Variance estimates of the novel summary statistics are used to conclude that the doxorubicin screen covers a significant diverse range of responses ensuring it is useful for biological interpretations. Time independent summary statistics may aid the understanding of drugs' action mechanism on tumour cells and potentially renew previous drug sensitivity evaluation studies.
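The isotonic-regression step of the workflow (step 2) can be illustrated with scikit-learn as below; the doses and relative-viability values are synthetic, not NCI60 data.

```python
# Monotone (isotonic) dose-response curve fitted to synthetic viability data.
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(2)
log_dose = np.linspace(-3, 1, 40)                       # log10 µM (assumed)
true_resp = 1.0 / (1.0 + 10 ** (1.5 * (log_dose + 1)))  # underlying decreasing curve
viability = np.clip(true_resp + rng.normal(0, 0.05, 40), 0, 1)

iso = IsotonicRegression(increasing=False, y_min=0.0, y_max=1.0)
fitted = iso.fit_transform(log_dose, viability)         # monotone decreasing fit
print("fitted viability at highest dose:", fitted[-1])
```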
NASA Astrophysics Data System (ADS)
Kirchner, J. W.
2016-01-01
Methods for estimating mean transit times from chemical or isotopic tracers (such as Cl-, δ18O, or δ2H) commonly assume that catchments are stationary (i.e., time-invariant) and homogeneous. Real catchments are neither. In a companion paper, I showed that catchment mean transit times estimated from seasonal tracer cycles are highly vulnerable to aggregation error, exhibiting strong bias and large scatter in spatially heterogeneous catchments. I proposed the young water fraction, which is virtually immune to aggregation error under spatial heterogeneity, as a better measure of transit times. Here I extend this analysis by exploring how nonstationarity affects mean transit times and young water fractions estimated from seasonal tracer cycles, using benchmark tests based on a simple two-box model. The model exhibits complex nonstationary behavior, with striking volatility in tracer concentrations, young water fractions, and mean transit times, driven by rapid shifts in the mixing ratios of fluxes from the upper and lower boxes. The transit-time distribution in streamflow becomes increasingly skewed at higher discharges, with marked increases in the young water fraction and decreases in the mean water age, reflecting the increased dominance of the upper box at higher flows. This simple two-box model exhibits strong equifinality, which can be partly resolved by simple parameter transformations. However, transit times are primarily determined by residual storage, which cannot be constrained through hydrograph calibration and must instead be estimated by tracer behavior. Seasonal tracer cycles in the two-box model are very poor predictors of mean transit times, with typical errors of several hundred percent. However, the same tracer cycles predict time-averaged young water fractions (Fyw) within a few percent, even in model catchments that are both nonstationary and spatially heterogeneous (although they may be biased by roughly 0.1-0.2 at sites where strong precipitation seasonality is correlated with precipitation tracer concentrations). Flow-weighted fits to the seasonal tracer cycles accurately predict the flow-weighted average Fyw in streamflow, while unweighted fits to the seasonal tracer cycles accurately predict the unweighted average Fyw. Young water fractions can also be estimated separately for individual flow regimes, again with a precision of a few percent, allowing direct determination of how shifts in a catchment's hydraulic regime alter the fraction of water reaching the stream by fast flowpaths. One can also estimate the chemical composition of idealized "young water" and "old water" end-members, using relationships between young water fractions and solute concentrations across different flow regimes. These results demonstrate that mean transit times cannot be estimated reliably from seasonal tracer cycles and that, by contrast, the young water fraction is a robust and useful metric of transit times, even in catchments that exhibit strong nonstationarity and heterogeneity.
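A minimal sketch of the seasonal tracer cycle calculation: sinusoids are fitted to tracer concentrations in precipitation and streamflow by linear least squares, and the ratio of the fitted amplitudes is taken as the young water fraction (Fyw). The tracer series here are synthetic.

```python
# Young water fraction from the amplitude ratio of seasonal tracer cycles.
import numpy as np

def fit_cycle(t_years, conc):
    # linear least squares for c(t) = a*cos(2*pi*t) + b*sin(2*pi*t) + k
    X = np.column_stack([np.cos(2 * np.pi * t_years),
                         np.sin(2 * np.pi * t_years),
                         np.ones_like(t_years)])
    coef, *_ = np.linalg.lstsq(X, conc, rcond=None)
    return np.hypot(coef[0], coef[1])        # amplitude of the annual cycle

rng = np.random.default_rng(3)
t = np.arange(0, 5, 7 / 365)                 # weekly samples over 5 years
precip = 4.0 * np.cos(2 * np.pi * t) + rng.normal(0, 0.5, t.size)
stream = 1.2 * np.cos(2 * np.pi * t - 0.4) + rng.normal(0, 0.3, t.size)

f_yw = fit_cycle(t, stream) / fit_cycle(t, precip)
print(f"estimated young water fraction: {f_yw:.2f}")
```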
Using heat as a tracer to estimate spatially distributed mean residence times in the hyporheic zone
NASA Astrophysics Data System (ADS)
Naranjo, R. C.; Pohll, G. M.; Stone, M. C.; Niswonger, R. G.; McKay, W. A.
2013-12-01
Biogeochemical reactions that occur in the hyporheic zone are highly dependent on the time solutes are in contact with riverbed sediments. In this investigation, we developed a two-dimensional longitudinal flow and solute transport model to estimate the spatial distribution of mean residence time in the hyporheic zone along a riffle-pool sequence to gain a better understanding of nitrogen reactions. A flow and transport model was developed to estimate spatially distributed mean residence times and was calibrated using observations of temperature and pressure. The approach used in this investigation accounts for the mixing of ages given advection and dispersion. Uncertainty of flow and transport parameters was evaluated using standard Monte-Carlo analysis and the generalized likelihood uncertainty estimation method. Results of parameter estimation indicate the presence of a low-permeability zone in the riffle area that induced horizontal flow at shallow depth within the riffle area. This establishes shallow and localized flow paths and limits deep vertical exchange. From the optimal model, mean residence times were found to be relatively long (9-40 days). The uncertainty of hydraulic conductivity resulted in a mean interquartile range of 13 days across all piezometers and was reduced by 24% with the inclusion of temperature and pressure observations. To a lesser extent, uncertainty in streambed porosity and dispersivity resulted in mean interquartile ranges of 2.2 and 4.7 days, respectively. Alternative conceptual models demonstrate the importance of accounting for the spatial distribution of hydraulic conductivity in simulating mean residence times in a riffle-pool sequence. It is demonstrated that spatially variable mean residence time beneath a riffle-pool system does not conform to simple conceptual models of hyporheic flow through a riffle-pool sequence. Rather, the mixing behavior between the river and the hyporheic flow is largely controlled by layered heterogeneity and anisotropy of the subsurface.
NASA Astrophysics Data System (ADS)
Langbein, J. O.
2016-12-01
Most time series of geophysical phenomena are contaminated with temporally correlated errors that limit the precision of any derived parameters. Ignoring temporal correlations will result in biased and unrealistic estimates of velocity and its error estimated from geodetic position measurements. Obtaining better estimates of uncertainties is limited by several factors, including selection of the correct model for the background noise and the computational requirements to estimate the parameters of the selected noise model when there are numerous observations. Here, I address the second problem of computational efficiency using maximum likelihood estimates (MLE). Most geophysical time series have background noise processes that can be represented as a combination of white and power-law noise, 1/f^n, with frequency f. Time domain techniques involving construction and inversion of large data covariance matrices are employed. Bos et al. [2012] demonstrate one technique that substantially increases the efficiency of the MLE methods, but it provides only an approximate solution for power-law indices greater than 1.0. That restriction can be removed by simply forming a data-filter that adds noise processes rather than combining them in quadrature. Consequently, the inversion of the data covariance matrix is simplified and it provides robust results for a wide range of power-law indices. With the new formulation, the efficiency is typically improved by about a factor of 8 over previous MLE algorithms [Langbein, 2004]. The new algorithm can be downloaded at http://earthquake.usgs.gov/research/software/#est_noise. The main program provides a number of basic functions that can be used to model the time-dependent part of time series and a variety of models that describe the temporal covariance of the data. In addition, the program is packaged with a few companion programs and scripts that can help with data analysis and with interpretation of the noise modeling.
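The computationally expensive core of this kind of MLE is evaluating a Gaussian log-likelihood under a candidate noise covariance. The sketch below does this for white noise plus random walk (the power-law case n = 2), using a Cholesky factorization rather than an explicit matrix inverse; it is a simplified stand-in for the est_noise formulation, with made-up noise amplitudes.

```python
# Gaussian log-likelihood of residuals under a white + random-walk covariance.
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def loglike(residuals, sigma_w, sigma_rw, dt=1.0):
    n = residuals.size
    idx = np.arange(1, n + 1)
    cov = (sigma_w**2 * np.eye(n)
           + sigma_rw**2 * dt * np.minimum.outer(idx, idx))   # random-walk covariance
    c, low = cho_factor(cov)
    quad = residuals @ cho_solve((c, low), residuals)
    logdet = 2.0 * np.sum(np.log(np.diag(c)))
    return -0.5 * (quad + logdet + n * np.log(2 * np.pi))

rng = np.random.default_rng(4)
r = np.cumsum(rng.normal(0, 0.5, 200)) + rng.normal(0, 1.0, 200)  # synthetic series
print("log-likelihood at trial noise amplitudes:", loglike(r, 1.0, 0.5))
```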
Chenel, Marylore; Bouzom, François; Aarons, Leon; Ogungbenro, Kayode
2008-12-01
To determine the optimal sampling time design of a drug-drug interaction (DDI) study for the estimation of apparent clearances (CL/F) of two co-administered drugs (SX, a phase I compound, potentially a CYP3A4 inhibitor, and MDZ, a reference CYP3A4 substrate) without any in vivo data using physiologically based pharmacokinetic (PBPK) predictions, population PK modelling and multiresponse optimal design. PBPK models were developed with AcslXtreme using only in vitro data to simulate PK profiles of both drugs when they were co-administered. Then, using simulated data, population PK models were developed with NONMEM and optimal sampling times were determined by optimizing the determinant of the population Fisher information matrix with PopDes using either two uniresponse designs (UD) or a multiresponse design (MD) with joint sampling times for both drugs. Finally, the D-optimal sampling time designs were evaluated by simulation and re-estimation with NONMEM by computing the relative root mean squared error (RMSE) and empirical relative standard errors (RSE) of CL/F. There were four and five optimal sampling times (=nine different sampling times) in the UDs for SX and MDZ, respectively, whereas there were only five sampling times in the MD. Whatever design and compound, CL/F was well estimated (RSE < 20% for MDZ and <25% for SX) and expected RSEs from PopDes were in the same range as empirical RSEs. Moreover, there was no bias in CL/F estimation. Since MD required only five sampling times compared to the two UDs, D-optimal sampling times of the MD were included into a full empirical design for the proposed clinical trial. A joint paper compares the designs with real data. This global approach including PBPK simulations, population PK modelling and multiresponse optimal design allowed, without any in vivo data, the design of a clinical trial, using sparse sampling, capable of estimating CL/F of the CYP3A4 substrate and potential inhibitor when co-administered together.
Real Time Precise Point Positioning: Preliminary Results for the Brazilian Region
NASA Astrophysics Data System (ADS)
Marques, Haroldo; Monico, João.; Hirokazu Shimabukuro, Milton; Aquino, Marcio
2010-05-01
GNSS positioning can be carried out in a relative or an absolute approach. In recent years, more attention has been given to real-time precise point positioning (PPP). To achieve centimeter accuracy with this method in real time, precise satellite coordinates and satellite clock corrections must be available. The coordinates can be taken from the predicted IGU ephemeris, but the satellite clocks must be estimated in real time. This can be done from a GNSS network, as in the EUREF Permanent Network. The infrastructure to realize PPP in real time is becoming available in Brazil through the Brazilian Continuous Monitoring Network (RBMC) together with the Sao Paulo State GNSS network, which are transmitting GNSS data through an NTRIP (Networked Transport of RTCM via Internet Protocol) caster. Based on this, a PhD thesis was proposed at Univ. Estadual Paulista (UNESP) to investigate and develop a methodology to estimate the satellite clocks and realize PPP in real time. Software is therefore being developed to process GNSS data in real-time PPP mode. A preliminary version of the software, called PPP_RT, is able to process GNSS code and phase data using precise ephemerides and satellite clocks. The PPP processing can account for the absolute satellite antenna Phase Center Variation (PCV), Ocean Tide Loading (OTL), and the Earth Body Tide, among other effects. The first-order ionospheric effects can be eliminated or minimized by the ion-free combination, or parameterized in the receiver-satellite direction using a stochastic process, e.g. random walk or white noise. In the case of ionosphere estimation, a pseudo-observable is introduced into the mathematical model for each satellite, and the initial value can be computed from the Klobuchar model or from a Global Ionospheric Map (GIM). The adjustment is realized in recursive mode, and DIA (Detection, Identification and Adaptation) is used for quality control. In this paper we present the mathematical models implemented in the PPP_RT software and some proposals for accomplishing PPP in real time, for example using the tropospheric model from the Brazilian Numerical Weather Forecast Model (BNWFM) and estimating the ionosphere with a stochastic process. GPS data samples from the Brazilian region were processed with the PPP_RT software for periods of low and high ionospheric activity, and the results obtained when estimating the ionosphere were compared with the ion-free combination. The PPP results were also analyzed with respect to the troposphere estimation strategy, using either the Hopfield model or the BNWFM. For the troposphere, the BNWFM values can reach results similar to those obtained when estimating the troposphere. For the ionosphere, the results show that ionosphere estimation can improve the convergence time of the PPP processing, which is very important for real-time PPP.
Crop biomass and evapotranspiration estimation using SPOT and Formosat-2 Data
NASA Astrophysics Data System (ADS)
Veloso, Amanda; Demarez, Valérie; Ceschia, Eric; Claverie, Martin
2013-04-01
The use of crop models allows simulating plant development, growth and yield under different environmental and management conditions. When combined with high spatial and temporal resolution remote sensing data, these models provide new perspectives for crop monitoring at regional scale. We propose here an approach to estimate time courses of dry aboveground biomass, yield and evapotranspiration (ETR) for summer (maize, sunflower) and winter crops (wheat) by assimilating Green Area Index (GAI) data, obtained from satellite observations, into a simple crop model. Only high spatial resolution and gap-free satellite time series can provide enough information for efficient crop monitoring applications. The potential of remote sensing data is often limited by cloud cover and/or gaps in observation. Data from different sensor systems then need to be combined. For this work, we employed a unique set of Formosat-2 and SPOT images (164 images) and in-situ measurements, acquired from 2006 to 2010 in southwest France. Among the several land surface biophysical variables accessible from satellite observations, the GAI is the one that has a key role in soil-plant-atmosphere interactions and in the biomass accumulation process. Many methods have been developed to relate GAI to the optical remote sensing signal. Here, seasonal dynamics of remotely sensed GAI were estimated by applying a method based on the inversion of a radiative transfer model using artificial neural networks. The modelling approach is based on the Simple Algorithm for Yield and Evapotranspiration estimate (SAFYE) model, which couples the FAO-56 model with an agro-meteorological model, based on Monteith's light-use efficiency theory. The SAFYE model is a daily time step crop model that simulates time series of GAI, dry aboveground biomass, grain yield and ETR. Crop and soil model parameters were determined using both in-situ measurements and values found in the literature. Phenological parameters were calibrated by the assimilation of the remotely sensed GAI time series. The calibration process led to accurate spatial estimates of GAI, ETR as well as of biomass and yield over the study area (24 km x 24 km window). The results highlight the benefit of using a combined approach (crop model coupled with high spatial and temporal resolution remote sensing data) for the estimation of agronomical variables. At local scale, the model correctly reproduced the biomass production and ETR for summer crops (with relative RMSE of 29% and 35%, respectively). At regional scale, estimated yield and water requirement for irrigation were compared to regional statistics of yield and irrigation inventories provided by the local water agency. Results showed good agreement for the inter-annual dynamics of yield estimates. Differences between water requirement for irrigation and actual supply were lower than 10% and inter-annual variability was also well represented. The work, initially focused on summer crops, is being adapted to winter crops.
Real Time Data Management for Estimating Probabilities of Incidents and Near Misses
NASA Astrophysics Data System (ADS)
Stanitsas, P. D.; Stephanedes, Y. J.
2011-08-01
Advances in real-time data collection, data storage and computational systems have led to development of algorithms for transport administrators and engineers that improve traffic safety and reduce cost of road operations. Despite these advances, problems in effectively integrating real-time data acquisition, processing, modelling and road-use strategies at complex intersections and motorways remain. These are related to increasing system performance in identification, analysis, detection and prediction of traffic state in real time. This research develops dynamic models to estimate the probability of road incidents, such as crashes and conflicts, and incident-prone conditions based on real-time data. The models support integration of anticipatory information and fee-based road use strategies in traveller information and management. Development includes macroscopic/microscopic probabilistic models, neural networks, and vector autoregressions tested via machine vision at EU and US sites.
Markov modulated Poisson process models incorporating covariates for rainfall intensity.
Thayakaran, R; Ramesh, N I
2013-01-01
Time series of rainfall bucket tip times at the Beaufort Park station, Bracknell, in the UK are modelled by a class of Markov modulated Poisson processes (MMPP) which may be thought of as a generalization of the Poisson process. Our main focus in this paper is to investigate the effects of including covariate information into the MMPP model framework on statistical properties. In particular, we look at three types of time-varying covariates namely temperature, sea level pressure, and relative humidity that are thought to be affecting the rainfall arrival process. Maximum likelihood estimation is used to obtain the parameter estimates, and likelihood ratio tests are employed in model comparison. Simulated data from the fitted model are used to make statistical inferences about the accumulated rainfall in the discrete time interval. Variability of the daily Poisson arrival rates is studied.
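For intuition, the sketch below simulates a plain two-state MMPP (the covariate-free core of the rainfall model): the process alternates between states with exponential dwell times, and bucket tips arrive as a Poisson process at the rate of the current state. The switching and arrival rates are illustrative only.

```python
# Simulation of a two-state Markov modulated Poisson process.
import numpy as np

rng = np.random.default_rng(5)
switch = np.array([0.2, 1.0])          # rate of leaving state 0 (dry) and state 1 (wet), per hour
poisson_rate = np.array([0.05, 3.0])   # bucket-tip rates per hour in each state

def simulate_mmpp(t_end=500.0):
    t, state, events = 0.0, 0, []
    while t < t_end:
        dwell = rng.exponential(1.0 / switch[state])       # time spent in this state
        window = min(dwell, t_end - t)
        n = rng.poisson(poisson_rate[state] * window)      # tips during the dwell
        events.extend(t + np.sort(rng.uniform(0.0, window, n)))
        t += dwell
        state = 1 - state
    return np.array(events)

tips = simulate_mmpp()
print(f"{tips.size} simulated tip times over 500 h")
```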
Correcting for Measurement Error in Time-Varying Covariates in Marginal Structural Models.
Kyle, Ryan P; Moodie, Erica E M; Klein, Marina B; Abrahamowicz, Michał
2016-08-01
Unbiased estimation of causal parameters from marginal structural models (MSMs) requires a fundamental assumption of no unmeasured confounding. Unfortunately, the time-varying covariates used to obtain inverse probability weights are often error-prone. Although substantial measurement error in important confounders is known to undermine control of confounders in conventional unweighted regression models, this issue has received comparatively limited attention in the MSM literature. Here we propose a novel application of the simulation-extrapolation (SIMEX) procedure to address measurement error in time-varying covariates, and we compare 2 approaches. The direct approach to SIMEX-based correction targets outcome model parameters, while the indirect approach corrects the weights estimated using the exposure model. We assess the performance of the proposed methods in simulations under different clinically plausible assumptions. The simulations demonstrate that measurement errors in time-dependent covariates may induce substantial bias in MSM estimators of causal effects of time-varying exposures, and that both proposed SIMEX approaches yield practically unbiased estimates in scenarios featuring low-to-moderate degrees of error. We illustrate the proposed approach in a simple analysis of the relationship between sustained virological response and liver fibrosis progression among persons infected with hepatitis C virus, while accounting for measurement error in γ-glutamyltransferase, using data collected in the Canadian Co-infection Cohort Study from 2003 to 2014. © The Author 2016. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
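The SIMEX idea itself can be shown in a toy linear regression with a known measurement-error variance: extra error is added at increasing multiples lambda, the model is refitted, and the coefficient is extrapolated back to lambda = -1. The MSM application in the paper applies the same principle to the exposure (weight) or outcome models; everything below is synthetic.

```python
# Simulation-extrapolation (SIMEX) correction of attenuation bias in a slope.
import numpy as np

rng = np.random.default_rng(6)
n, beta, sigma_u = 2000, 1.0, 0.5
x_true = rng.normal(size=n)
y = beta * x_true + rng.normal(0, 0.5, n)
w = x_true + rng.normal(0, sigma_u, n)        # error-prone covariate

lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
betas = []
for lam in lambdas:
    b_sim = []
    for _ in range(50):                       # simulation step: add extra error
        w_lam = w + rng.normal(0, np.sqrt(lam) * sigma_u, n)
        b_sim.append(np.polyfit(w_lam, y, 1)[0])
    betas.append(np.mean(b_sim))

coeffs = np.polyfit(lambdas, betas, 2)        # extrapolation step (quadratic)
beta_simex = np.polyval(coeffs, -1.0)
print(f"naive: {betas[0]:.3f}, SIMEX-corrected: {beta_simex:.3f} (true {beta})")
```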
Nuccio, V.F.; Johnson, S.Y.; Schenk, C.J.
1989-01-01
Paleogeothermal gradients and timing of oil generation for the Lower and Middle Pennsylvanian Belden Formation have been estimated for four locations in the Eagle Basin of northwestern Colorado, by comparing measured vitrinite reflectance with maturity modeling. Two thermal models were made for each location: one assumes a constant paleogeothermal gradient through time while the other is a two-stage model with changing paleogeothermal gradients. The two-stage paleogeothermal gradient scenario is considered more geologically realistic and is used to estimate the timing of oil generation throughout the Eagle basin. From the data and interpretations, one would expect Belden oil to be found in either upper Paleozoic or Mesozoic reservoir rocks. -Authors
And the first one now will later be last: Time-reversal in Cormack-Jolly-Seber models
Nichols, James D.
2016-01-01
The models of Cormack, Jolly and Seber (CJS) are remarkable in providing a rich set of inferences about population survival, recruitment, abundance and even sampling probabilities from a seemingly limited data source: a matrix of 1's and 0's reflecting animal captures and recaptures at multiple sampling occasions. Survival and sampling probabilities are estimated directly in CJS models, whereas estimators for recruitment and abundance were initially obtained as derived quantities. Various investigators have noted that just as standard modeling provides direct inferences about survival, reversing the time order of capture history data permits direct modeling and inference about recruitment. Here we review the development of reverse-time modeling efforts, emphasizing the kinds of inferences and questions to which they seem well suited.
Lech, Karolina; Liu, Fan; Ackermann, Katrin; Revell, Victoria L; Lao, Oscar; Skene, Debra J; Kayser, Manfred
2016-03-01
Determining the time a biological trace was left at a scene of crime reflects a crucial aspect of forensic investigations as - if possible - it would permit testing the sample donor's alibi directly from the trace evidence, helping to link (or not) the DNA-identified sample donor with the crime event. However, reliable and robust methodology is lacking thus far. In this study, we assessed the suitability of mRNA for the purpose of estimating blood deposition time, and its added value relative to melatonin and cortisol, two circadian hormones we previously introduced for this purpose. By analysing 21 candidate mRNA markers in blood samples from 12 individuals collected around the clock at 2h intervals for 36h under real-life, controlled conditions, we identified 11 mRNAs with statistically significant expression rhythms. We then used these 11 significantly rhythmic mRNA markers, with and without melatonin and cortisol also analysed in these samples, to establish statistical models for predicting day/night time categories. We found that although in general mRNA-based estimation of time categories was less accurate than hormone-based estimation, the use of three mRNA markers HSPA1B, MKNK2 and PER3 together with melatonin and cortisol generally enhanced the time prediction accuracy relative to the use of the two hormones alone. Our data best support a model that by using these five molecular biomarkers estimates three time categories, i.e. night/early morning, morning/noon, and afternoon/evening with prediction accuracies expressed as AUC values of 0.88, 0.88, and 0.95, respectively. For the first time, we demonstrate the value of mRNA for blood deposition timing and introduce a statistical model for estimating day/night time categories based on molecular biomarkers, which shall be further validated with additional samples in the future. Moreover, our work provides new leads for molecular approaches on time of death estimation using the significantly rhythmic mRNA markers established here. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Long-range persistence in the global mean surface temperature and the global warming "time bomb"
NASA Astrophysics Data System (ADS)
Rypdal, M.; Rypdal, K.
2012-04-01
Detrended Fluctuation Analysis (DFA) and Maximum Likelihood Estimations (MLE) based on instrumental data over the last 160 years indicate that there is Long-Range Persistence (LRP) in Global Mean Surface Temperature (GMST) on time scales of months to decades. The persistence is much higher in sea surface temperature than in land temperatures. Power spectral analysis of multi-model, multi-ensemble runs of global climate models indicate further that this persistence may extend to centennial and maybe even millennial time-scales. We also support these conclusions by wavelet variogram analysis, DFA, and MLE of Northern hemisphere mean surface temperature reconstructions over the last two millennia. These analyses indicate that the GMST is a strongly persistent noise with Hurst exponent H>0.9 on time scales from decades up to at least 500 years. We show that such LRP can be very important for long-term climate prediction and for the establishment of a "time bomb" in the climate system due to a growing energy imbalance caused by the slow relaxation to radiative equilibrium under rising anthropogenic forcing. We do this by the construction of a multi-parameter dynamic-stochastic model for the GMST response to deterministic and stochastic forcing, where LRP is represented by a power-law response function. Reconstructed data for total forcing and GMST over the last millennium are used with this model to estimate trend coefficients and Hurst exponent for the GMST on multi-century time scale by means of MLE. Ensembles of solutions generated from the stochastic model also allow us to estimate confidence intervals for these estimates.
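As a concrete reference for the DFA estimates mentioned above, the sketch below implements first-order DFA: the cumulative profile is split into windows, a linear trend is removed in each, and the slope of log fluctuation versus log window size gives the scaling exponent. The input is plain white noise, for which the exponent should be near 0.5; the window sizes are arbitrary choices.

```python
# First-order Detrended Fluctuation Analysis (DFA) on a synthetic series.
import numpy as np

def dfa(x, scales):
    profile = np.cumsum(x - x.mean())
    flucts = []
    for s in scales:
        n_seg = len(profile) // s
        segs = profile[:n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        rms = []
        for seg in segs:
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # remove linear trend
            rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
        flucts.append(np.mean(rms))
    return np.polyfit(np.log(scales), np.log(flucts), 1)[0]

rng = np.random.default_rng(7)
alpha = dfa(rng.normal(size=4096), scales=[16, 32, 64, 128, 256])
print(f"DFA exponent (white noise, expect ~0.5): {alpha:.2f}")
```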
Damiano, Diane L.; Bulea, Thomas C.
2016-01-01
Individuals with cerebral palsy frequently exhibit crouch gait, a pathological walking pattern characterized by excessive knee flexion. Knowledge of the knee joint moment during crouch gait is necessary for the design and control of assistive devices used for treatment. Our goal was to 1) develop statistical models to estimate knee joint moment extrema and dynamic stiffness during crouch gait, and 2) use the models to estimate the instantaneous joint moment during weight-acceptance. We retrospectively computed knee moments from 10 children with crouch gait and used stepwise linear regression to develop statistical models describing the knee moment features. The models explained at least 90% of the response value variability: peak moment in early (99%) and late (90%) stance, and dynamic stiffness of weight-acceptance flexion (94%) and extension (98%). We estimated knee extensor moment profiles from the predicted dynamic stiffness and instantaneous knee angle. This approach captured the timing and shape of the computed moment (root-mean-squared error: 2.64 Nm); including the predicted early-stance peak moment as a correction factor improved model performance (root-mean-squared error: 1.37 Nm). Our strategy provides a practical, accurate method to estimate the knee moment during crouch gait, and could be used for real-time, adaptive control of robotic orthoses. PMID:27101612
Hoover, D R; Peng, Y; Saah, A J; Detels, R R; Day, R S; Phair, J P
A simple non-parametric approach is developed to simultaneously estimate net incidence and morbidity time from specific AIDS illnesses in populations at high risk for death from these illnesses and other causes. The disease-death process has four stages that can be recast as two sandwiching three-state multiple decrement processes. Non-parametric estimation of net incidence and morbidity time with error bounds is achieved from these sandwiching models through modification of methods from Aalen and Greenwood, and bootstrapping. An application to immunosuppressed HIV-1 infected homosexual men reveals that cytomegalovirus disease, Kaposi's sarcoma and Pneumocystis pneumonia are likely to occur and cause significant morbidity time.
Bénet, Thomas; Voirin, Nicolas; Nicolle, Marie-Christine; Picot, Stephane; Michallet, Mauricette; Vanhems, Philippe
2013-02-01
The duration of the incubation of invasive aspergillosis (IA) remains unknown. The objective of this investigation was to estimate the time interval between aplasia onset and that of IA symptoms in acute myeloid leukemia (AML) patients. A single-centre prospective survey (2004-2009) included all patients with AML and probable/proven IA. Parametric survival models were fitted to the distribution of the time intervals between aplasia onset and IA. Overall, 53 patients had IA after aplasia, with the median observed time interval between the two being 15 days. Based on log-normal distribution, the median estimated IA incubation period was 14.6 days (95% CI; 12.8-16.5 days).
NASA Astrophysics Data System (ADS)
de Lautour, Oliver R.; Omenzetter, Piotr
2010-07-01
Developed for studying long sequences of regularly sampled data, time series analysis methods are being increasingly investigated for use in Structural Health Monitoring (SHM). In this research, Autoregressive (AR) models were used to fit the acceleration time histories obtained from two experimental structures: a 3-storey bookshelf structure and the ASCE Phase II Experimental SHM Benchmark Structure, in the undamaged state and a limited number of damaged states. The coefficients of the AR models were considered to be damage-sensitive features and used as input into an Artificial Neural Network (ANN). The ANN was trained to classify damage cases or estimate remaining structural stiffness. The results showed that the combination of AR models and ANNs is an efficient tool for damage classification and estimation, and performs well using a small number of damage-sensitive features and a limited number of sensors.
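A minimal sketch of the feature pipeline: AR coefficients are fitted to each acceleration record by least squares and passed to a small neural-network classifier of damage state. The records are synthetic AR(4) signals whose coefficients differ slightly between the two classes; the network size and all coefficients are arbitrary.

```python
# AR coefficients as damage-sensitive features, classified by a small ANN.
import numpy as np
from sklearn.neural_network import MLPClassifier

def ar_coeffs(x, p=4):
    # least-squares fit of an AR(p) model; column k holds lag k+1
    X = np.column_stack([x[p - 1 - k : len(x) - 1 - k] for k in range(p)])
    return np.linalg.lstsq(X, x[p:], rcond=None)[0]

def make_record(coefs, n=2000, rng=None):
    x = np.zeros(n)
    for i in range(4, n):
        x[i] = coefs @ x[i - 4:i][::-1] + rng.normal(0, 1)
    return x

rng = np.random.default_rng(8)
undamaged = np.array([0.55, -0.25, 0.10, -0.05])   # assumed AR(4) coefficients
damaged = np.array([0.45, -0.30, 0.12, -0.05])
features, labels = [], []
for label, coefs in enumerate([undamaged, damaged]):
    for _ in range(40):
        features.append(ar_coeffs(make_record(coefs, rng=rng)))
        labels.append(label)

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(features, labels)
print("training accuracy:", clf.score(features, labels))
```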
Predicting Loss-of-Control Boundaries Toward a Piloting Aid
NASA Technical Reports Server (NTRS)
Barlow, Jonathan; Stepanyan, Vahram; Krishnakumar, Kalmanje
2012-01-01
This work presents an approach to predicting loss-of-control with the goal of providing the pilot a decision aid focused on maintaining the pilot's control action within predicted loss-of-control boundaries. The predictive architecture combines quantitative loss-of-control boundaries, a data-based predictive control boundary estimation algorithm and an adaptive prediction method to estimate Markov model parameters in real-time. The data-based loss-of-control boundary estimation algorithm estimates the boundary of a safe set of control inputs that will keep the aircraft within the loss-of-control boundaries for a specified time horizon. The adaptive prediction model generates estimates of the system Markov Parameters, which are used by the data-based loss-of-control boundary estimation algorithm. The combined algorithm is applied to a nonlinear generic transport aircraft to illustrate the features of the architecture.
Chen, Ling; Feng, Yanqin; Sun, Jianguo
2017-10-01
This paper discusses regression analysis of clustered failure time data, which occur when the failure times of interest are collected from clusters. In particular, we consider the situation where the correlated failure times of interest may be related to cluster sizes. For inference, we present two estimation procedures, the weighted estimating equation-based method and the within-cluster resampling-based method, when the correlated failure times of interest arise from a class of additive transformation models. The former makes use of the inverse of cluster sizes as weights in the estimating equations, while the latter can be easily implemented by using the existing software packages for right-censored failure time data. An extensive simulation study is conducted and indicates that the proposed approaches work well in both the situations with and without informative cluster size. They are applied to a dental study that motivated this study.
Using Indirect Turbulence Measurements for Real-Time Parameter Estimation in Turbulent Air
NASA Technical Reports Server (NTRS)
Martos, Borja; Morelli, Eugene A.
2012-01-01
The use of indirect turbulence measurements for real-time estimation of parameters in a linear longitudinal dynamics model in atmospheric turbulence was studied. It is shown that measuring the atmospheric turbulence makes it possible to treat the turbulence as a measured explanatory variable in the parameter estimation problem. Commercial off-the-shelf sensors were researched and evaluated, then compared to air data booms. Sources of colored noise in the explanatory variables resulting from typical turbulence measurement techniques were identified and studied. A major source of colored noise in the explanatory variables was identified as frequency dependent upwash and time delay. The resulting upwash and time delay corrections were analyzed and compared to previous time shift dynamic modeling research. Simulation data as well as flight test data in atmospheric turbulence were used to verify the time delay behavior. Recommendations are given for follow on flight research and instrumentation.
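Once turbulence is available as a measured explanatory variable, real-time parameter estimation can proceed with a recursive estimator. The sketch below shows recursive least squares with a forgetting factor on a synthetic linear regression; the regressors, true parameters, and noise level are assumptions, not the flight-test model.

```python
# Recursive least squares with a forgetting factor on synthetic regression data.
import numpy as np

rng = np.random.default_rng(9)
n = 500
theta_true = np.array([0.8, -0.4, 0.25])        # assumed model parameters
X = np.column_stack([rng.normal(size=n),        # e.g. a state variable
                     rng.normal(size=n),        # e.g. a control input
                     rng.normal(size=n)])       # e.g. measured turbulence input
y = X @ theta_true + rng.normal(0, 0.05, n)

theta = np.zeros(3)
P = np.eye(3) * 1e3                             # large initial covariance
lam = 0.995                                     # forgetting factor
for x_k, y_k in zip(X, y):
    K = P @ x_k / (lam + x_k @ P @ x_k)
    theta = theta + K * (y_k - x_k @ theta)
    P = (P - np.outer(K, x_k) @ P) / lam

print("final parameter estimates:", theta)
```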
Determining prescription durations based on the parametric waiting time distribution.
Støvring, Henrik; Pottegård, Anton; Hallas, Jesper
2016-12-01
The purpose of the study is to develop a method to estimate the duration of single prescriptions in pharmacoepidemiological studies when the single prescription duration is not available. We developed an estimation algorithm based on maximum likelihood estimation of a parametric two-component mixture model for the waiting time distribution (WTD). The distribution component for prevalent users estimates the forward recurrence density (FRD), which is related to the distribution of time between subsequent prescription redemptions, the inter-arrival density (IAD), for users in continued treatment. We exploited this to estimate percentiles of the IAD by inversion of the estimated FRD and defined the duration of a prescription as the time within which 80% of current users will have presented themselves again. Statistical properties were examined in simulation studies, and the method was applied to empirical data for four model drugs: non-steroidal anti-inflammatory drugs (NSAIDs), warfarin, bendroflumethiazide, and levothyroxine. Simulation studies found negligible bias when the data-generating model for the IAD coincided with the FRD used in the WTD estimation (Log-Normal). When the IAD consisted of a mixture of two Log-Normal distributions, but was analyzed with a single Log-Normal distribution, relative bias did not exceed 9%. Using a Log-Normal FRD, we estimated prescription durations of 117, 91, 137, and 118 days for NSAIDs, warfarin, bendroflumethiazide, and levothyroxine, respectively. Similar results were found with a Weibull FRD. The algorithm allows valid estimation of single prescription durations, especially when the WTD reliably separates current users from incident users, and may replace ad-hoc decision rules in automated implementations. Copyright © 2016 John Wiley & Sons, Ltd.
Baele, Guy; Lemey, Philippe; Vansteelandt, Stijn
2013-03-06
Accurate model comparison requires extensive computation times, especially for parameter-rich models of sequence evolution. In the Bayesian framework, model selection is typically performed through the evaluation of a Bayes factor, the ratio of two marginal likelihoods (one for each model). Recently introduced techniques to estimate (log) marginal likelihoods, such as path sampling and stepping-stone sampling, offer increased accuracy over the traditional harmonic mean estimator at an increased computational cost. Most often, each model's marginal likelihood will be estimated individually, which leads the resulting Bayes factor to suffer from errors associated with each of these independent estimation processes. We here assess the original 'model-switch' path sampling approach for direct Bayes factor estimation in phylogenetics, as well as an extension that uses more samples, to construct a direct path between two competing models, thereby eliminating the need to calculate each model's marginal likelihood independently. Further, we provide a competing Bayes factor estimator using an adaptation of the recently introduced stepping-stone sampling algorithm and set out to determine appropriate settings for accurately calculating such Bayes factors, with context-dependent evolutionary models as an example. While we show that modest efforts are required to roughly identify the increase in model fit, only drastically increased computation times ensure the accuracy needed to detect more subtle details of the evolutionary process. We show that our adaptation of stepping-stone sampling for direct Bayes factor calculation outperforms the original path sampling approach as well as an extension that exploits more samples. Our proposed approach for Bayes factor estimation also has preferable statistical properties over the use of individual marginal likelihood estimates for both models under comparison. Assuming a sigmoid function to determine the path between two competing models, we provide evidence that a single well-chosen sigmoid shape value requires less computational efforts in order to approximate the true value of the (log) Bayes factor compared to the original approach. We show that the (log) Bayes factors calculated using path sampling and stepping-stone sampling differ drastically from those estimated using either of the harmonic mean estimators, supporting earlier claims that the latter systematically overestimate the performance of high-dimensional models, which we show can lead to erroneous conclusions. Based on our results, we argue that highly accurate estimation of differences in model fit for high-dimensional models requires much more computational effort than suggested in recent studies on marginal likelihood estimation.
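A minimal sketch of stepping-stone sampling on a conjugate normal model with known variance, where exact draws from each power posterior are available and the exact log marginal likelihood can be computed for comparison; in the phylogenetic setting the exact draws would be replaced by MCMC samples, and the rung schedule below is an arbitrary choice.

```python
# Stepping-stone estimate of a log marginal likelihood, checked against the exact value.
import numpy as np
from scipy.stats import norm, multivariate_normal
from scipy.special import logsumexp

rng = np.random.default_rng(10)
sigma, mu0, tau0 = 1.0, 0.0, 2.0
x = rng.normal(0.7, sigma, size=30)                 # observed data

def power_posterior_params(beta):
    # posterior of theta when the likelihood is raised to the power beta
    prec = 1 / tau0**2 + beta * x.size / sigma**2
    mean = (mu0 / tau0**2 + beta * x.sum() / sigma**2) / prec
    return mean, np.sqrt(1 / prec)

def loglik(theta):
    return norm.logpdf(x[:, None], loc=theta, scale=sigma).sum(axis=0)

betas = np.linspace(0, 1, 33) ** 3                  # rungs skewed toward the prior
log_z, n_samp = 0.0, 2000
for b_lo, b_hi in zip(betas[:-1], betas[1:]):
    m, s = power_posterior_params(b_lo)
    theta = rng.normal(m, s, n_samp)                # exact draws at this rung
    log_z += logsumexp((b_hi - b_lo) * loglik(theta)) - np.log(n_samp)

exact = multivariate_normal.logpdf(
    x, mean=np.full(x.size, mu0),
    cov=sigma**2 * np.eye(x.size) + tau0**2 * np.ones((x.size, x.size)))
print(f"stepping-stone: {log_z:.3f}, exact: {exact:.3f}")
```

Path sampling would instead average the expected log-likelihood over the rungs and integrate it numerically over beta; the ratio of two such estimates, or a single path constructed directly between two models, gives the Bayes factor.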
Experimental design and efficient parameter estimation in preclinical pharmacokinetic studies.
Ette, E I; Howie, C A; Kelman, A W; Whiting, B
1995-05-01
A Monte Carlo simulation technique used to evaluate the effect of the arrangement of concentrations on the efficiency of estimation of population pharmacokinetic parameters in the preclinical setting is described. Although the simulations were restricted to the one-compartment model with intravenous bolus input, they provide the basis for discussing some structural aspects involved in designing a destructive ("quantic") preclinical population pharmacokinetic study with a fixed sample size, as is usually the case in such studies. The efficiency of parameter estimation obtained with sampling strategies based on the three and four time point designs was evaluated in terms of the percent prediction error, design number, individual and joint confidence interval coverage approaches for parameter estimates, and correlation analysis. The data sets contained random terms for both inter-animal and residual intra-animal variability. The results showed that the typical population parameter estimates for clearance and volume were efficiently (accurately and precisely) estimated for both designs, while interanimal variability (the only random effect parameter that could be estimated) was inefficiently (inaccurately and imprecisely) estimated with most sampling schedules of the two designs. The exact location of the third and fourth time point for the three and four time point designs, respectively, was not critical to the efficiency of overall estimation of all population parameters of the model. However, some individual population pharmacokinetic parameters were sensitive to the location of these times.
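A minimal sketch of the kind of simulation described above, assuming a hypothetical one-compartment IV bolus design with illustrative parameter values and a naive pooled fit rather than a full population analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)

# Hypothetical design: destructive sampling at 3 time points, 8 animals each,
# log-normal inter-animal variability on CL and V, proportional residual error.
dose, CL_pop, V_pop = 10.0, 1.2, 8.0       # mg, L/h, L (illustrative values)
times = np.repeat([0.5, 2.0, 6.0], 8)

CL = CL_pop * np.exp(rng.normal(0, 0.2, times.size))
V = V_pop * np.exp(rng.normal(0, 0.2, times.size))
conc = dose / V * np.exp(-CL / V * times)
conc *= 1 + rng.normal(0, 0.1, times.size)            # residual error

# Naive pooled estimate of the typical CL and V by nonlinear least squares.
def model(t, cl, v):
    return dose / v * np.exp(-cl / v * t)

(cl_hat, v_hat), _ = curve_fit(model, times, conc, p0=[1.0, 5.0])
print(f"CL estimate: {cl_hat:.2f} L/h, V estimate: {v_hat:.2f} L")
```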
NASA Astrophysics Data System (ADS)
Begnaud, M. L.; Anderson, D. N.; Phillips, W. S.; Myers, S. C.; Ballard, S.
2016-12-01
The Regional Seismic Travel Time (RSTT) tomography model has been developed to improve travel time predictions for regional phases (Pn, Sn, Pg, Lg) in order to increase seismic location accuracy, especially for explosion monitoring. The RSTT model is specifically designed to exploit regional phases for location, especially when combined with teleseismic arrivals. The latest RSTT model (version 201404um) has been released (http://www.sandia.gov/rstt). Travel time uncertainty estimates for RSTT are determined using one-dimensional (1D), distance-dependent error models that have the benefit of being very fast to use in standard location algorithms, but that do not account for path-dependent variations in error or for structural inadequacy of the RSTT model (i.e., model error). Although global in extent, the RSTT tomography model is only defined in areas where data exist. A simple 1D error model does not accurately model areas where RSTT has not been calibrated. We are developing and validating a new error model for RSTT phase arrivals by mathematically deriving this multivariate model directly from a unified model of RSTT embedded into a statistical random effects model that captures distance, path, and model error effects. An initial method developed is a two-dimensional path-distributed method using residuals. The goals for any RSTT uncertainty method are for it to be readily usable by the standard RSTT user and to improve travel time uncertainty estimates for location. We have successfully tested the new error model for Pn phases and will demonstrate the method and validation of the error model for Sn, Pg, and Lg phases.
Implicit assimilation for marine ecological models
NASA Astrophysics Data System (ADS)
Weir, B.; Miller, R.; Spitz, Y. H.
2012-12-01
We use a new data assimilation method to estimate the parameters of a marine ecological model. At a given point in the ocean, the estimated values of the parameters determine the behaviors of the modeled planktonic groups, and thus indicate which species are dominant. To begin, we assimilate in situ observations, e.g., the Bermuda Atlantic Time-series Study, the Hawaii Ocean Time-series, and Ocean Weather Station Papa. From there, we estimate the parameters at surrounding points in space based on satellite observations of ocean color. Given the variation of the estimated parameters, we divide the ocean into regions meant to represent distinct ecosystems. An important feature of the data assimilation approach is that it refines the confidence limits of the optimal Gaussian approximation to the distribution of the parameters. This enables us to determine the ecological divisions with greater accuracy.
Bayesian estimation of dynamic matching function for U-V analysis in Japan
NASA Astrophysics Data System (ADS)
Kyo, Koki; Noda, Hideo; Kitagawa, Genshiro
2012-05-01
In this paper we propose a Bayesian method for analyzing unemployment dynamics. We derive a Beveridge curve for unemployment and vacancy (U-V) analysis from a Bayesian model based on a labor market matching function. In our framework, the efficiency of matching and the elasticities of new hiring with respect to unemployment and vacancy are regarded as time varying parameters. To construct a flexible model and obtain reasonable estimates in an underdetermined estimation problem, we treat the time varying parameters as random variables and introduce smoothness priors. The model is then described in a state space representation, enabling the parameter estimation to be carried out using Kalman filter and fixed interval smoothing. In such a representation, dynamic features of the cyclic unemployment rate and the structural-frictional unemployment rate can be accurately captured.
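The filtering step can be sketched as a random-walk state-space model for the time-varying coefficients; the simulated data, noise variances, and random-walk step size below are illustrative assumptions rather than the study's specification.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated log matching data with slowly drifting elasticities (illustrative).
T = 120
logU, logV = rng.normal(0, 1, (2, T))
beta_true = np.cumsum(rng.normal(0, 0.02, (T, 3)), axis=0) + [1.0, 0.4, 0.5]
X = np.column_stack([np.ones(T), logU, logV])
y = (X * beta_true).sum(axis=1) + rng.normal(0, 0.1, T)

# Kalman filter with random-walk states (a smoothness prior on the coefficients).
m, P = np.zeros(3), np.eye(3) * 10.0        # prior mean / covariance
Q, R = np.eye(3) * 0.02**2, 0.1**2          # state / observation noise
filtered = np.empty((T, 3))
for t in range(T):
    P = P + Q                               # predict (random walk)
    H = X[t]
    S = H @ P @ H + R                       # innovation variance (scalar)
    K = P @ H / S                           # Kalman gain
    m = m + K * (y[t] - H @ m)              # update state estimate
    P = P - np.outer(K, H) @ P
    filtered[t] = m
print("final elasticity estimates:", np.round(filtered[-1], 2))
```

A backward fixed-interval smoothing pass over the filtered estimates, as mentioned in the abstract, would refine these within-sample estimates.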
Luczak, Susan E; Rosen, I Gary
2014-08-01
Transdermal alcohol sensor (TAS) devices have the potential to allow researchers and clinicians to unobtrusively collect naturalistic drinking data for weeks at a time, but the transdermal alcohol concentration (TAC) data these devices produce do not consistently correspond with breath alcohol concentration (BrAC) data. We present and test the BrAC Estimator software, a program designed to produce individualized estimates of BrAC from TAC data by fitting mathematical models to a specific person wearing a specific TAS device. Two TAS devices were worn simultaneously by 1 participant for 18 days. The trial began with a laboratory alcohol session to calibrate the model and was followed by a field trial with 10 drinking episodes. Model parameter estimates and fit indices were compared across drinking episodes to examine the calibration phase of the software. Software-generated estimates of peak BrAC, time of peak BrAC, and area under the BrAC curve were compared with breath analyzer data to examine the estimation phase of the software. In this single-subject design with breath analyzer peak BrAC scores ranging from 0.013 to 0.057, the software created consistent models for the 2 TAS devices, despite differences in raw TAC data, and was able to compensate for the attenuation of peak BrAC and latency of the time of peak BrAC that are typically observed in TAC data. This software program represents an important initial step for making it possible for non mathematician researchers and clinicians to obtain estimates of BrAC from TAC data in naturalistic drinking environments. Future research with more participants and greater variation in alcohol consumption levels and patterns, as well as examination of gain scheduling calibration procedures and nonlinear models of diffusion, will help to determine how precise these software models can become. Copyright © 2014 by the Research Society on Alcoholism.
Modelling survival: exposure pattern, species sensitivity and uncertainty
Ashauer, Roman; Albert, Carlo; Augustine, Starrlight; Cedergreen, Nina; Charles, Sandrine; Ducrot, Virginie; Focks, Andreas; Gabsi, Faten; Gergs, André; Goussen, Benoit; Jager, Tjalling; Kramer, Nynke I.; Nyman, Anna-Maija; Poulsen, Veronique; Reichenberger, Stefan; Schäfer, Ralf B.; Van den Brink, Paul J.; Veltman, Karin; Vogel, Sören; Zimmer, Elke I.; Preuss, Thomas G.
2016-01-01
The General Unified Threshold model for Survival (GUTS) integrates previously published toxicokinetic-toxicodynamic models and estimates survival with explicitly defined assumptions. Importantly, GUTS accounts for time-variable exposure to the stressor. We performed three studies to test the ability of GUTS to predict survival of aquatic organisms across different pesticide exposure patterns, time scales and species. Firstly, using synthetic data, we identified experimental data requirements which allow for the estimation of all parameters of the GUTS proper model. Secondly, we assessed how well GUTS, calibrated with short-term survival data of Gammarus pulex exposed to four pesticides, can forecast effects of longer-term pulsed exposures. Thirdly, we tested the ability of GUTS to estimate 14-day median effect concentrations of malathion for a range of species and use these estimates to build species sensitivity distributions for different exposure patterns. We find that GUTS adequately predicts survival across exposure patterns that vary over time. When toxicity is assessed for time-variable concentrations species may differ in their responses depending on the exposure profile. This can result in different species sensitivity rankings and safe levels. The interplay of exposure pattern and species sensitivity deserves systematic investigation in order to better understand how organisms respond to stress, including humans. PMID:27381500
Lindqvist, R
2006-07-01
Turbidity methods offer possibilities for generating data required for addressing microorganism variability in risk modeling given that the results of these methods correspond to those of viable count methods. The objectives of this study were to identify the best approach for determining growth parameters based on turbidity data and use of a Bioscreen instrument and to characterize variability in growth parameters of 34 Staphylococcus aureus strains of different biotypes isolated from broiler carcasses. Growth parameters were estimated by fitting primary growth models to turbidity growth curves or to detection times of serially diluted cultures either directly or by using an analysis of variance (ANOVA) approach. The maximum specific growth rates in chicken broth at 17 degrees C estimated by time to detection methods were in good agreement with viable count estimates, whereas growth models (exponential and Richards) underestimated growth rates. Time to detection methods were selected for strain characterization. The variation of growth parameters among strains was best described by either the logistic or lognormal distribution, but definitive conclusions require a larger data set. The distribution of the physiological state parameter ranged from 0.01 to 0.92 and was not significantly different from a normal distribution. Strain variability was important, and the coefficient of variation of growth parameters was up to six times larger among strains than within strains. It is suggested to apply a time to detection (ANOVA) approach using turbidity measurements for convenient and accurate estimation of growth parameters. The results emphasize the need to consider implications of strain variability for predictive modeling and risk assessment.
Characterizing subcritical assemblies with time of flight fixed by energy estimation distributions
NASA Astrophysics Data System (ADS)
Monterial, Mateusz; Marleau, Peter; Pozzi, Sara
2018-04-01
We present the Time of Flight Fixed by Energy Estimation (TOFFEE) as a measure of the fission chain dynamics in subcritical assemblies. TOFFEE is the time between correlated gamma rays and neutrons, subtracted by the estimated travel time of the incident neutron from its proton recoil. The measured subcritical assembly was the BeRP ball, a 4.482 kg sphere of α-phase weapons-grade plutonium metal, which came in five configurations: bare, and with 0.5-, 1-, and 1.5-inch iron and 1-inch nickel close-fitting shell reflectors. We extend the measurement with MCNPX-PoliMi simulations of shells ranging up to 6 inches in thickness, and two new reflector materials: aluminum and tungsten. We also simulated the BeRP ball with different masses ranging from 1 to 8 kg. Two-region and single-region point kinetics models were used to model the behavior of the positive side of the TOFFEE distribution from 0 to 100 ns. The single-region model of the bare cases gave positive linear correlations between estimated and expected neutron decay constants and leakage multiplications. The two-region model provided a way to estimate neutron multiplication for the reflected cases, which correlated positively with expected multiplication, but the nature of the correlation (sub- or superlinear) changed between material types. Finally, we found that the areal density of the reflector shells had a linear correlation with the integral of the two-region model fit. Therefore, we expect that with knowledge of reflector composition, one could determine the shell thickness, or vice versa. Furthermore, up to a certain amount and thickness of the reflector, the two-region model provides a way of distinguishing bare and reflected plutonium assemblies.
Corbett, Elaine A; Perreault, Eric J; Körding, Konrad P
2012-06-01
Neuroprosthetic devices promise to allow paralyzed patients to perform the necessary functions of everyday life. However, to allow patients to use such tools it is necessary to decode their intent from neural signals such as electromyograms (EMGs). Because these signals are noisy, state of the art decoders integrate information over time. One systematic way of doing this is by taking into account the natural evolution of the state of the body--by using a so-called trajectory model. Here we use two insights about movements to enhance our trajectory model: (1) at any given time, there is a small set of likely movement targets, potentially identified by gaze; (2) reaches are produced at varying speeds. We decoded natural reaching movements using EMGs of muscles that might be available from an individual with spinal cord injury. Target estimates found from tracking eye movements were incorporated into the trajectory model, while a mixture model accounted for the inherent uncertainty in these estimates. Warping the trajectory model in time using a continuous estimate of the reach speed enabled accurate decoding of faster reaches. We found that the choice of richer trajectory models, such as those incorporating target or speed, improves decoding particularly when there is a small number of EMGs available.
NASA Astrophysics Data System (ADS)
Mayotte, Jean-Marc; Grabs, Thomas; Sutliff-Johansson, Stacy; Bishop, Kevin
2017-06-01
This study examined how the inactivation of bacteriophage MS2 in water was affected by ionic strength (IS) and dissolved organic carbon (DOC) using static batch inactivation experiments at 4 °C conducted over a period of 2 months. Experimental conditions were characteristic of an operational managed aquifer recharge (MAR) scheme in Uppsala, Sweden. Experimental data were fit with constant and time-dependent inactivation models using two methods: (1) traditional linear and nonlinear least-squares techniques; and (2) a Monte-Carlo based parameter estimation technique called generalized likelihood uncertainty estimation (GLUE). The least-squares and GLUE methodologies gave very similar estimates of the model parameters and their uncertainty. This demonstrates that GLUE can be used as a viable alternative to traditional least-squares parameter estimation techniques for fitting of virus inactivation models. Results showed a slight increase in constant inactivation rates following an increase in the DOC concentrations, suggesting that the presence of organic carbon enhanced the inactivation of MS2. The experiment with a high IS and a low DOC was the only experiment which showed that MS2 inactivation may have been time-dependent. However, results from the GLUE methodology indicated that models of constant inactivation were able to describe all of the experiments. This suggested that inactivation time-series longer than 2 months were needed in order to provide concrete conclusions regarding the time-dependency of MS2 inactivation at 4 °C under these experimental conditions.
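A compact sketch of the GLUE idea used above, assuming a first-order inactivation model, synthetic observations, and an informal likelihood and behavioral threshold chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical MS2 inactivation data: log10 reduction versus time (days).
t_obs = np.array([0, 7, 14, 28, 42, 60])
logC_obs = np.array([0.0, -0.35, -0.62, -1.30, -1.95, -2.70])

# GLUE: sample candidate first-order rate constants, score each with an
# informal likelihood, keep "behavioral" parameter sets above a threshold.
k_samples = rng.uniform(0.0, 0.2, 50_000)               # 1/day
pred = -k_samples[:, None] * t_obs / np.log(10)         # log10(C/C0) under C = C0*exp(-k*t)
sse = ((pred - logC_obs) ** 2).sum(axis=1)
likelihood = np.exp(-sse / sse.min())                   # informal likelihood measure
behavioral = likelihood > 0.1 * likelihood.max()

# Likelihood-weighted uncertainty bounds on k from the behavioral sets.
k_keep, w = k_samples[behavioral], likelihood[behavioral]
w /= w.sum()
order = np.argsort(k_keep)
cdf = np.cumsum(w[order])
lo, hi = np.interp([0.05, 0.95], cdf, k_keep[order])
print(f"weighted 90% uncertainty bounds on k: [{lo:.3f}, {hi:.3f}] per day")
```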
NASA Astrophysics Data System (ADS)
Stucchi Boschi, Raquel; Qin, Mingming; Gimenez, Daniel; Cooper, Miguel
2016-04-01
Modeling is an important tool for better understanding and assessing land use impacts on landscape processes. A key point for environmental modeling is the knowledge of soil hydraulic properties. However, direct determination of soil hydraulic properties is difficult and costly, particularly in vast and remote regions such as one constituting the Amazon Biome. One way to overcome this problem is to extrapolate accurately estimated data to pedologically similar sites. The van Genuchten (VG) parametric equation is the most commonly used for modeling SWRC. The use of a Bayesian approach in combination with the Markov chain Monte Carlo to estimate the VG parameters has several advantages compared to the widely used global optimization techniques. The Bayesian approach provides posterior distributions of parameters that are independent from the initial values and allow for uncertainty analyses. The main objectives of this study were: i) to estimate hydraulic parameters from data of pasture and forest sites by the Bayesian inverse modeling approach; and ii) to investigate the extrapolation of the estimated VG parameters to a nearby toposequence with pedologically similar soils to those used for its estimate. The parameters were estimated from volumetric water content and tension observations obtained after rainfall events during a 207-day period from pasture and forest sites located in the southeastern Amazon region. These data were used to run HYDRUS-1D under a Differential Evolution Adaptive Metropolis (DREAM) scheme 10,000 times, and only the last 2,500 times were used to calculate the posterior distributions of each hydraulic parameter along with 95% confidence intervals (CI) of volumetric water content and tension time series. Then, the posterior distributions were used to generate hydraulic parameters for two nearby toposequences composed by six soil profiles, three are under forest and three are under pasture. The parameters of the nearby site were accepted when the predicted tension time series were within the 95% CI which is derived from the calibration site using DREAM scheme.
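The van Genuchten retention function and a Bayesian fit can be sketched as follows; a simple random-walk Metropolis sampler stands in for the DREAM scheme, and the synthetic data, fixed residual error, and fixed theta_r and theta_s values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Standard van Genuchten retention function (Mualem restriction m = 1 - 1/n).
def vg_theta(h, theta_r, theta_s, alpha, n):
    """Volumetric water content at tension h (h > 0, units consistent with 1/alpha)."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h) ** n) ** m

# Hypothetical observed retention points (tension in cm, volumetric water content).
h_obs = np.array([10.0, 30.0, 100.0, 300.0, 1000.0, 5000.0])
th_obs = vg_theta(h_obs, 0.10, 0.45, 0.02, 1.6) + rng.normal(0, 0.01, h_obs.size)

# Gaussian log-likelihood with fixed residual sd; theta_r and theta_s held fixed
# so only (alpha, n) are sampled, a simplification of the full problem.
def log_post(p):
    alpha, n = p
    if alpha <= 0 or n <= 1:
        return -np.inf
    resid = th_obs - vg_theta(h_obs, 0.10, 0.45, alpha, n)
    return -0.5 * np.sum((resid / 0.01) ** 2)

# Random-walk Metropolis sampler (a simple stand-in for the DREAM scheme).
chain, p = [], np.array([0.05, 1.3])
lp = log_post(p)
for _ in range(10_000):
    prop = p + rng.normal(0, [0.005, 0.05])
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        p, lp = prop, lp_prop
    chain.append(p)
post = np.array(chain[-2500:])       # keep the last 2,500 draws, echoing the study's setup
print("posterior means (alpha, n):", post.mean(axis=0).round(3))
```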
Markov modeling of peptide folding in the presence of protein crowders
NASA Astrophysics Data System (ADS)
Nilsson, Daniel; Mohanty, Sandipan; Irbäck, Anders
2018-02-01
We use Markov state models (MSMs) to analyze the dynamics of a β-hairpin-forming peptide in Monte Carlo (MC) simulations with interacting protein crowders, for two different types of crowder proteins [bovine pancreatic trypsin inhibitor (BPTI) and GB1]. In these systems, at the temperature used, the peptide can be folded or unfolded and bound or unbound to crowder molecules. Four or five major free-energy minima can be identified. To estimate the dominant MC relaxation times of the peptide, we build MSMs using a range of different time resolutions or lag times. We show that stable relaxation-time estimates can be obtained from the MSM eigenfunctions through fits to autocorrelation data. The eigenfunctions remain sufficiently accurate to permit stable relaxation-time estimation down to small lag times, at which point simple estimates based on the corresponding eigenvalues have large systematic uncertainties. The presence of the crowders has a stabilizing effect on the peptide, especially with BPTI crowders, which can be attributed to a reduced unfolding rate ku, while the folding rate kf is left largely unchanged.
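The eigenvalue-based estimate that the autocorrelation fits are compared against can be sketched in a few lines; the transition matrix and lag time below are toy values, not results from the simulations.

```python
import numpy as np

# Toy 3-state row-stochastic transition matrix at lag time tau (illustrative).
tau = 1.0   # in units of saved MC frames
T = np.array([[0.97, 0.02, 0.01],
              [0.03, 0.95, 0.02],
              [0.01, 0.04, 0.95]])

# Implied timescales from the non-unit eigenvalues: t_i = -tau / ln(lambda_i).
# In practice these are checked across a range of lag times, or refined via
# fits to autocorrelation functions of the MSM eigenfunctions, as in the study.
evals = np.sort(np.linalg.eigvals(T).real)[::-1]
timescales = -tau / np.log(evals[1:])
print("implied timescales:", np.round(timescales, 1))
```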
Efficient Learning of Continuous-Time Hidden Markov Models for Disease Progression
Liu, Yu-Ying; Li, Shuang; Li, Fuxin; Song, Le; Rehg, James M.
2016-01-01
The Continuous-Time Hidden Markov Model (CT-HMM) is an attractive approach to modeling disease progression due to its ability to describe noisy observations arriving irregularly in time. However, the lack of an efficient parameter learning algorithm for CT-HMM restricts its use to very small models or requires unrealistic constraints on the state transitions. In this paper, we present the first complete characterization of efficient EM-based learning methods for CT-HMM models. We demonstrate that the learning problem consists of two challenges: the estimation of posterior state probabilities and the computation of end-state conditioned statistics. We solve the first challenge by reformulating the estimation problem in terms of an equivalent discrete time-inhomogeneous hidden Markov model. The second challenge is addressed by adapting three approaches from the continuous time Markov chain literature to the CT-HMM domain. We demonstrate the use of CT-HMMs with more than 100 states to visualize and predict disease progression using a glaucoma dataset and an Alzheimer’s disease dataset. PMID:27019571
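The link between the continuous-time rate matrix and the equivalent discrete-time transition probabilities can be sketched as follows, using a toy three-state progression model (illustrative rates, not the glaucoma or Alzheimer's models).

```python
import numpy as np
from scipy.linalg import expm

# Toy 3-state progression rate matrix Q (rows sum to zero), states ordered by severity.
Q = np.array([[-0.20,  0.18, 0.02],
              [ 0.00, -0.10, 0.10],
              [ 0.00,  0.00, 0.00]])   # absorbing end state

# For irregularly spaced visits, the transition probability over an interval dt
# is expm(Q * dt); this is the bridge to the equivalent discrete
# time-inhomogeneous HMM used in the E-step described above.
for dt in (0.5, 2.0, 5.0):
    P = expm(Q * dt)
    print(f"dt = {dt}: P[0] = {np.round(P[0], 3)}")
```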
A real-time ionospheric model based on GNSS Precise Point Positioning
NASA Astrophysics Data System (ADS)
Tu, Rui; Zhang, Hongping; Ge, Maorong; Huang, Guanwen
2013-09-01
This paper proposes a method of real-time monitoring and modeling of the ionospheric Total Electron Content (TEC) by Precise Point Positioning (PPP). Firstly, the ionospheric TEC and the receiver's Differential Code Biases (DCB) are estimated with undifferenced raw observations in real time; then the ionospheric TEC model is established based on the Single Layer Model (SLM) assumption and the recovered ionospheric TEC. In this study, phase observations with high precision are used directly instead of phase-smoothed code observations. In addition, the DCB estimation is separated from the establishment of the ionospheric model, which limits the impact of the SLM assumption. The ionospheric model is established at every epoch for real-time application. The method is validated with three different GNSS networks on a local, regional, and global basis. The results show that the method is feasible and effective; the real-time ionosphere and DCB results are very consistent with the IGS final products, with biases of 1-2 TECU and 0.4 ns, respectively.
Indoor Residence Times of Semivolatile Organic Compounds: Model Estimation and Field Evaluation
Indoor residence times of semivolatile organic compounds (SVOCs) are a major and mostly unavailable input for residential exposure assessment. We calculated residence times for a suite of SVOCs using a fugacity model applied to residential environments. Residence times depend on...
Finite mixture model: A maximum likelihood estimation approach on time series data
NASA Astrophysics Data System (ADS)
Yen, Phoong Seuk; Ismail, Mohd Tahir; Hamzah, Firdaus Mohamad
2014-09-01
Recently, statisticians have emphasized fitting finite mixture models by maximum likelihood estimation because of its desirable asymptotic properties. In particular, the estimator is consistent as the sample size increases to infinity, and is therefore asymptotically unbiased. Moreover, as the sample size increases, the parameter estimates obtained by maximum likelihood have smaller variance than those obtained by other statistical methods. Thus, maximum likelihood estimation is adopted in this paper to fit a two-component mixture model in order to explore the relationship between rubber price and exchange rate for Malaysia, Thailand, the Philippines, and Indonesia. The results indicate a negative relationship between rubber price and exchange rate for all selected countries.
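A minimal sketch of maximum likelihood fitting of a two-component mixture via the EM algorithm; the synthetic data are a stand-in for the price/exchange-rate series analyzed in the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Synthetic two-regime series (illustrative stand-in for the study's data).
x = np.concatenate([rng.normal(-0.5, 0.3, 300), rng.normal(0.8, 0.5, 200)])

# EM algorithm for a two-component Gaussian mixture (maximum likelihood).
w, mu, sd = 0.5, np.array([-1.0, 1.0]), np.array([1.0, 1.0])
for _ in range(200):
    # E-step: posterior responsibility of component 1 for each observation.
    d0 = w * stats.norm.pdf(x, mu[0], sd[0])
    d1 = (1 - w) * stats.norm.pdf(x, mu[1], sd[1])
    r = d0 / (d0 + d1)
    # M-step: weighted updates of mixing weight, means, and standard deviations.
    w = r.mean()
    mu = np.array([np.average(x, weights=r), np.average(x, weights=1 - r)])
    sd = np.sqrt([np.average((x - mu[0]) ** 2, weights=r),
                  np.average((x - mu[1]) ** 2, weights=1 - r)])
print(f"weight = {w:.2f}, means = {np.round(mu, 2)}, sds = {np.round(sd, 2)}")
```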
NASA Technical Reports Server (NTRS)
Ratnayake, Nalin A.; Waggoner, Erin R.; Taylor, Brian R.
2011-01-01
The problem of parameter estimation on hybrid-wing-body aircraft is complicated by the fact that many design candidates for such aircraft involve a large number of aerodynamic control effectors that act in coplanar motion. This adds to the complexity already present in the parameter estimation problem for any aircraft with a closed-loop control system. Decorrelation of flight and simulation data must be performed in order to ascertain individual surface derivatives with any sort of mathematical confidence. Non-standard control surface configurations, such as clamshell surfaces and drag-rudder modes, further complicate the modeling task. In this paper, time-decorrelation techniques are applied to a model structure selected through stepwise regression for simulated and flight-generated lateral-directional parameter estimation data. A virtual effector model that uses mathematical abstractions to describe the multi-axis effects of clamshell surfaces is developed and applied. Comparisons are made between time history reconstructions and observed data in order to assess the accuracy of the regression model. The Cramér-Rao lower bounds of the estimated parameters are used to assess the uncertainty of the regression model relative to alternative models. Stepwise regression was found to be a useful technique for lateral-directional model design for hybrid-wing-body aircraft, as suggested by available flight data. Based on the results of this study, linear regression parameter estimation methods using abstracted effectors are expected to perform well for hybrid-wing-body aircraft properly equipped for the task.
Estimating linear temporal trends from aggregated environmental monitoring data
Erickson, Richard A.; Gray, Brian R.; Eager, Eric A.
2017-01-01
Trend estimates are often used as part of environmental monitoring programs. These trends inform managers (e.g., are desired species increasing or undesired species decreasing?). Data collected from environmental monitoring programs is often aggregated (i.e., averaged), which confounds sampling and process variation. State-space models allow sampling variation and process variations to be separated. We used simulated time-series to compare linear trend estimations from three state-space models, a simple linear regression model, and an auto-regressive model. We also compared the performance of these five models to estimate trends from a long term monitoring program. We specifically estimated trends for two species of fish and four species of aquatic vegetation from the Upper Mississippi River system. We found that the simple linear regression had the best performance of all the given models because it was best able to recover parameters and had consistent numerical convergence. Conversely, the simple linear regression did the worst job estimating populations in a given year. The state-space models did not estimate trends well, but estimated population sizes best when the models converged. We found that a simple linear regression performed better than more complex autoregression and state-space models when used to analyze aggregated environmental monitoring data.
Madi, Mahmoud K; Karameh, Fadi N
2017-01-01
Kalman filtering methods have long been regarded as efficient adaptive Bayesian techniques for estimating hidden states in models of linear dynamical systems under Gaussian uncertainty. Recent advents of the Cubature Kalman filter (CKF) have extended this efficient estimation property to nonlinear systems, and also to hybrid nonlinear problems whereby the processes are continuous and the observations are discrete (continuous-discrete CD-CKF). Employing CKF techniques, therefore, carries high promise for modeling many biological phenomena where the underlying processes exhibit inherently nonlinear, continuous, and noisy dynamics and the associated measurements are uncertain and time-sampled. This paper investigates the performance of cubature filtering (CKF and CD-CKF) in two flagship problems arising in the field of neuroscience upon relating brain functionality to aggregate neurophysiological recordings: (i) estimation of the firing dynamics and the neural circuit model parameters from electric potentials (EP) observations, and (ii) estimation of the hemodynamic model parameters and the underlying neural drive from BOLD (fMRI) signals. First, in simulated neural circuit models, estimation accuracy was investigated under varying levels of observation noise (SNR), process noise structures, and observation sampling intervals (dt). When compared to the CKF, the CD-CKF consistently exhibited better accuracy for a given SNR, sharp accuracy increase with higher SNR, and persistent error reduction with smaller dt. Remarkably, CD-CKF accuracy shows only a mild deterioration for non-Gaussian process noise, specifically with Poisson noise, a commonly assumed form of background fluctuations in neuronal systems. Second, in simulated hemodynamic models, parametric estimates were consistently improved under CD-CKF. Critically, time-localization of the underlying neural drive, a determinant factor in fMRI-based functional connectivity studies, was significantly more accurate under CD-CKF. In conclusion, and with the CKF recently benchmarked against other advanced Bayesian techniques, the CD-CKF framework could provide significant gains in robustness and accuracy when estimating a variety of biological phenomena models where the underlying process dynamics unfold at time scales faster than those seen in collected measurements.
Estimation of covariate-specific time-dependent ROC curves in the presence of missing biomarkers.
Li, Shanshan; Ning, Yang
2015-09-01
Covariate-specific time-dependent ROC curves are often used to evaluate the diagnostic accuracy of a biomarker with time-to-event outcomes, when certain covariates have an impact on the test accuracy. In many medical studies, measurements of biomarkers are subject to missingness due to high cost or limitation of technology. This article considers estimation of covariate-specific time-dependent ROC curves in the presence of missing biomarkers. To incorporate the covariate effect, we assume a proportional hazards model for the failure time given the biomarker and the covariates, and a semiparametric location model for the biomarker given the covariates. In the presence of missing biomarkers, we propose a simple weighted estimator for the ROC curves where the weights are inversely proportional to the selection probability. We also propose an augmented weighted estimator which utilizes information from the subjects with missing biomarkers. The augmented weighted estimator enjoys the double-robustness property in the sense that the estimator remains consistent if either the missing data process or the conditional distribution of the missing data given the observed data is correctly specified. We derive the large sample properties of the proposed estimators and evaluate their finite sample performance using numerical studies. The proposed approaches are illustrated using the US Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. © 2015, The International Biometric Society.
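The weighting idea behind the proposed estimator can be illustrated on a simpler quantity, a mean under data missing at random; the selection model and data below are hypothetical, and the augmented (doubly robust) version is not shown.

```python
import numpy as np

rng = np.random.default_rng(8)

# Inverse-probability-weighting illustration: a biomarker mean under data
# missing at random, reweighted by 1 / P(observed | covariates).
n = 2000
z = rng.normal(size=n)                        # covariate
biomarker = 1.0 + 0.5 * z + rng.normal(0, 1, n)
p_obs = 1 / (1 + np.exp(-(0.2 + 1.0 * z)))    # selection probability depends on z
observed = rng.uniform(size=n) < p_obs

naive = biomarker[observed].mean()            # biased: ignores the selection mechanism
ipw = np.average(biomarker[observed], weights=1 / p_obs[observed])
print(f"naive = {naive:.3f}, IPW = {ipw:.3f}, truth is approximately 1.0")
```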
Determination of Time Dependent Virus Inactivation Rates
NASA Astrophysics Data System (ADS)
Chrysikopoulos, C. V.; Vogler, E. T.
2003-12-01
A methodology is developed for estimating temporally variable virus inactivation rate coefficients from experimental virus inactivation data. The methodology consists of a technique for slope estimation of normalized virus inactivation data in conjunction with a resampling parameter estimation procedure. The slope estimation technique is based on a relatively flexible geostatistical method known as universal kriging. Drift coefficients are obtained by nonlinear fitting of bootstrap samples and the corresponding confidence intervals are obtained by bootstrap percentiles. The proposed methodology yields more accurate time dependent virus inactivation rate coefficients than those estimated by fitting virus inactivation data to a first-order inactivation model. The methodology is successfully applied to a set of poliovirus batch inactivation data. Furthermore, the importance of accurate inactivation rate coefficient determination on virus transport in water saturated porous media is demonstrated with model simulations.
Calabro, Finnegan J.; Beardsley, Scott A.; Vaina, Lucia M.
2012-01-01
Estimation of time-to-arrival for moving objects is critical to obstacle interception and avoidance, as well as to timing actions such as reaching and grasping moving objects. The source of motion information that conveys arrival time varies with the trajectory of the object raising the question of whether multiple context-dependent mechanisms are involved in this computation. To address this question we conducted a series of psychophysical studies to measure observers’ performance on time-to-arrival estimation when object trajectory was specified by angular motion (“gap closure” trajectories in the frontoparallel plane), looming (colliding trajectories, TTC) or both (passage courses, TTP). We measured performance of time-to-arrival judgments in the presence of irrelevant motion, in which a perpendicular motion vector was added to the object trajectory. Data were compared to models of expected performance based on the use of different components of optical information. Our results demonstrate that for gap closure, performance depended only on the angular motion, whereas for TTC and TTP, both angular and looming motion affected performance. This dissociation of inputs suggests that gap closures are mediated by a separate mechanism than that used for the detection of time-to-collision and time-to-passage. We show that existing models of TTC and TTP estimation make systematic errors in predicting subject performance, and suggest that a model which weights motion cues by their relative time-to-arrival provides a better account of performance. PMID:22056519
NASA Astrophysics Data System (ADS)
Wei, Zhongbao; Meng, Shujuan; Tseng, King Jet; Lim, Tuti Mariana; Soong, Boon Hee; Skyllas-Kazacos, Maria
2017-03-01
An accurate battery model is a prerequisite for reliable state estimation of the vanadium redox battery (VRB). As the battery model parameters vary over time with operating conditions and battery aging, common methods in which the model parameters are empirical or prescribed offline lack accuracy and robustness. To address this issue, this paper proposes to use an online adaptive battery model to reproduce the VRB dynamics accurately. The model parameters are identified online with both recursive least squares (RLS) and the extended Kalman filter (EKF). Performance comparison shows that the RLS is superior with respect to modeling accuracy, convergence properties, and computational complexity. Based on the online identified battery model, an adaptive peak power estimator that incorporates the constraints of the voltage limit, the SOC limit, and the design limit of the current is proposed to fully exploit the potential of the VRB. Experiments are conducted on a lab-scale VRB system and the proposed peak power estimator is verified with a specifically designed "two-step verification" method. It is shown that different constraints dominate the allowable peak power at different stages of cycling. The influence of the prediction time horizon selection on the peak power is also analyzed.
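A minimal sketch of recursive least squares with a forgetting factor, the identification scheme favored above; the regression-form model structure and all numerical values are hypothetical and chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)

# RLS with forgetting, identifying a hypothetical first-order battery model
# written in regression form: V_k = a*V_{k-1} + b*I_k + c.
theta = np.zeros(3)
P = np.eye(3) * 1e3
lam = 0.99                                   # forgetting factor

true = np.array([0.95, -0.05, 0.07])         # "unknown" plant parameters
V = 1.4
for k in range(500):
    I = 0.5 * np.sin(0.05 * k)               # excitation current
    phi = np.array([V, I, 1.0])              # regressor uses previous voltage
    V = true @ phi + rng.normal(0, 1e-3)     # measured terminal voltage
    # RLS update
    K = P @ phi / (lam + phi @ P @ phi)
    theta = theta + K * (V - phi @ theta)
    P = (P - np.outer(K, phi) @ P) / lam
print("identified parameters:", np.round(theta, 3))
```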
Modeling avian abundance from replicated counts using binomial mixture models
Kery, Marc; Royle, J. Andrew; Schmid, Hans
2005-01-01
Abundance estimation in ecology is usually accomplished by capture–recapture, removal, or distance sampling methods. These may be hard to implement at large spatial scales. In contrast, binomial mixture models enable abundance estimation without individual identification, based simply on temporally and spatially replicated counts. Here, we evaluate mixture models using data from the national breeding bird monitoring program in Switzerland, where some 250 1-km2 quadrats are surveyed using the territory mapping method three times during each breeding season. We chose eight species with contrasting distribution (wide–narrow), abundance (high–low), and detectability (easy–difficult). Abundance was modeled as a random effect with a Poisson or negative binomial distribution, with mean affected by forest cover, elevation, and route length. Detectability was a logit-linear function of survey date, survey date-by-elevation, and sampling effort (time per transect unit). Resulting covariate effects and parameter estimates were consistent with expectations. Detectability per territory (for three surveys) ranged from 0.66 to 0.94 (mean 0.84) for easy species, and from 0.16 to 0.83 (mean 0.53) for difficult species, depended on survey effort for two easy and all four difficult species, and changed seasonally for three easy and three difficult species. Abundance was positively related to route length in three high-abundance and one low-abundance (one easy and three difficult) species, and increased with forest cover in five forest species, decreased for two nonforest species, and was unaffected for a generalist species. Abundance estimates under the most parsimonious mixture models were between 1.1 and 8.9 (median 1.8) times greater than estimates based on territory mapping; hence, three surveys were insufficient to detect all territories for each species. We conclude that binomial mixture models are an important new approach for estimating abundance corrected for detectability when only repeated-count data are available. Future developments envisioned include estimation of trend, occupancy, and total regional abundance.
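The core of a binomial mixture (N-mixture) likelihood can be sketched as follows, assuming constant abundance and detection parameters and simulated counts; covariate effects, as used in the study, are omitted for brevity.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

rng = np.random.default_rng(7)

# Simulated replicated counts: 150 sites, 3 surveys, constant lambda and p.
lam_true, p_true = 4.0, 0.5
N = rng.poisson(lam_true, 150)
counts = rng.binomial(N[:, None], p_true, size=(150, 3))

def neg_loglik(params, n_max=60):
    lam, p = np.exp(params[0]), 1 / (1 + np.exp(-params[1]))
    n_grid = np.arange(n_max + 1)
    prior = stats.poisson.pmf(n_grid, lam)                    # P(N = n)
    ll = 0.0
    for y in counts:                                          # marginalize N per site
        lik_n = stats.binom.pmf(y[:, None], n_grid, p).prod(axis=0)
        ll += np.log((lik_n * prior).sum())
    return -ll

res = minimize(neg_loglik, x0=[np.log(2.0), 0.0], method="Nelder-Mead")
lam_hat, p_hat = np.exp(res.x[0]), 1 / (1 + np.exp(-res.x[1]))
print(f"lambda estimate: {lam_hat:.2f}, detection estimate: {p_hat:.2f}")
```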
NASA Astrophysics Data System (ADS)
Zhou, Tao; Luo, Yiqi
2008-09-01
Ecosystem carbon (C) uptake is determined largely by C residence times and increases in net primary production (NPP). Therefore, evaluation of C uptake at a regional scale requires knowledge of the spatial patterns of both residence times and NPP increases. In this study, we first applied an inverse modeling method to estimate spatial patterns of C residence times in the conterminous United States. Then we combined the spatial patterns of estimated residence times with a NPP change trend to assess the spatial patterns of regional C uptake in the United States. The inverse analysis was done by using the genetic algorithm and was based on 12 observed data sets of C pools and fluxes. Residence times were estimated by minimizing the total deviation between modeled and observed values. Our results showed that the estimated C residence times were highly heterogeneous over the conterminous United States, with most of the regions having values between 15 and 65 years; the averaged C residence time was 46 years. The estimated C uptake for the whole conterminous United States was 0.15 Pg C a-1. Large portions of the C taken up were stored in soil for grassland and cropland (47-70%) but in plant pools for forests and woodlands (73-82%). The proportion of C uptake in soil was found to be determined primarily by C residence times and to be independent of the magnitude of NPP increase. Therefore, accurate estimation of spatial patterns of C residence times is crucial for the evaluation of terrestrial ecosystem C uptake.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eslinger, Paul W.; Biegalski, S.; Bowyer, Ted W.
2014-01-01
Systems designed to monitor airborne radionuclides released from underground nuclear explosions detected radioactive fallout from the Fukushima Daiichi nuclear accident in March 2011. Atmospheric transport modeling (ATM) of plumes of noble gases and particulates was performed soon after the accident to determine plausible detection locations of any radioactive releases to the atmosphere. We combine sampling data from multiple International Monitoring System (IMS) locations in a new way to estimate the magnitude and time sequence of the releases. Dilution factors from the modeled plume at five different detection locations were combined with 57 atmospheric concentration measurements of 133-Xe taken from March 18 to March 23 to estimate the source term. This approach estimates that 59% of the 1.24×10^19 Bq of 133-Xe present in the reactors at the time of the earthquake was released to the atmosphere over a three-day period. Source term estimates from combinations of detection sites have lower spread than estimates based on measurements at single detection sites. Sensitivity cases based on data from four or more detection locations bound the source term between 35% and 255% of the available xenon inventory.
Covariate analysis of bivariate survival data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bennett, L.E.
1992-01-01
The methods developed are used to analyze the effects of covariates on bivariate survival data when censoring and ties are present. The proposed method provides models for bivariate survival data that include differential covariate effects and censored observations. The proposed models are based on an extension of the univariate Buckley-James estimators, which replace censored data points by their expected values, conditional on the censoring time and the covariates. For the bivariate situation, it is necessary to determine the expectation of the failure times for one component conditional on the failure or censoring time of the other component. Two different methods have been developed to estimate these expectations. In the semiparametric approach these expectations are determined from a modification of Burke's estimate of the bivariate empirical survival function. In the parametric approach censored data points are also replaced by their conditional expected values, where the expected values are determined from a specified parametric distribution. The model estimation is based on the revised data set, comprised of uncensored components and expected values for the censored components. The variance-covariance matrix for the estimated covariate parameters has also been derived for both the semiparametric and parametric methods. Data from the Demographic and Health Survey were analyzed by these methods. The two outcome variables are post-partum amenorrhea and breastfeeding; education and parity were used as the covariates. Both the covariate parameter estimates and the variance-covariance estimates for the semiparametric and parametric models are compared. In addition, a multivariate test statistic was used in the semiparametric model to examine contrasts. The significance of the statistic was determined from a bootstrap distribution of the test statistic.
NASA Astrophysics Data System (ADS)
Matsunaga, Y.; Sugita, Y.
2018-06-01
A data-driven modeling scheme is proposed for conformational dynamics of biomolecules based on molecular dynamics (MD) simulations and experimental measurements. In this scheme, an initial Markov State Model (MSM) is constructed from MD simulation trajectories, and then, the MSM parameters are refined using experimental measurements through machine learning techniques. The second step can reduce the bias of MD simulation results due to inaccurate force-field parameters. Either time-series trajectories or ensemble-averaged data are available as a training data set in the scheme. Using a coarse-grained model of a dye-labeled polyproline-20, we compare the performance of machine learning estimations from the two types of training data sets. Machine learning from time-series data could provide the equilibrium populations of conformational states as well as their transition probabilities. It estimates hidden conformational states in more robust ways compared to that from ensemble-averaged data although there are limitations in estimating the transition probabilities between minor states. We discuss how to use the machine learning scheme for various experimental measurements including single-molecule time-series trajectories.
Modelling breast cancer tumour growth for a stable disease population.
Isheden, Gabriel; Humphreys, Keith
2017-01-01
Statistical models of breast cancer tumour progression have been used to further our knowledge of the natural history of breast cancer, to evaluate mammography screening in terms of mortality, to estimate overdiagnosis, and to estimate the impact of lead-time bias when comparing survival times between screen detected cancers and cancers found outside of screening programs. Multi-state Markov models have been widely used, but several research groups have proposed other modelling frameworks based on specifying an underlying biological continuous tumour growth process. These continuous models offer some advantages over multi-state models and have been used, for example, to quantify screening sensitivity in terms of mammographic density, and to quantify the effect of body size covariates on tumour growth and time to symptomatic detection. As of yet, however, the continuous tumour growth models are not sufficiently developed and require extensive computing to obtain parameter estimates. In this article, we provide a detailed description of the underlying assumptions of the continuous tumour growth model, derive new theoretical results for the model, and show how these results may help the development of this modelling framework. In illustrating the approach, we develop a model for mammography screening sensitivity, using a sample of 1901 post-menopausal women diagnosed with invasive breast cancer.
Development of advanced techniques for rotorcraft state estimation and parameter identification
NASA Technical Reports Server (NTRS)
Hall, W. E., Jr.; Bohn, J. G.; Vincent, J. H.
1980-01-01
An integrated methodology for rotorcraft system identification consists of rotorcraft mathematical modeling, three distinct data processing steps, and a technique for designing inputs to improve the identifiability of the data. These elements are as follows: (1) a Kalman filter smoother algorithm which estimates states and sensor errors from error corrupted data. Gust time histories and statistics may also be estimated; (2) a model structure estimation algorithm for isolating a model which adequately explains the data; (3) a maximum likelihood algorithm for estimating the parameters and estimates for the variance of these estimates; and (4) an input design algorithm, based on a maximum likelihood approach, which provides inputs to improve the accuracy of parameter estimates. Each step is discussed with examples to both flight and simulated data cases.
Ren, Yan; Yang, Min; Li, Qian; Pan, Jay; Chen, Fei; Li, Xiaosong; Meng, Qun
2017-02-22
To introduce multilevel repeated measures (RM) models and compare them with multilevel difference-in-differences (DID) models in assessing the linear relationship between the length of the policy intervention period and healthcare outcomes (dose-response effect) for data from a stepped-wedge design with a hierarchical structure. The implementation of national essential medicine policy (NEMP) in China was a stepped-wedge-like design of five time points with a hierarchical structure. Using one key healthcare outcome from the national NEMP surveillance data as an example, we illustrate how a series of multilevel DID models and one multilevel RM model can be fitted to answer some research questions on policy effects. Routinely and annually collected national data on China from 2008 to 2012. 34 506 primary healthcare facilities in 2675 counties of 31 provinces. Agreement and differences in estimates of dose-response effect and variation in such effect between the two methods on the logarithm-transformed total number of outpatient visits per facility per year (LG-OPV). The estimated dose-response effect was approximately 0.015 according to four multilevel DID models and precisely 0.012 from one multilevel RM model. Both types of model estimated an increase in LG-OPV by 2.55 times from 2009 to 2012, but 2-4.3 times larger SEs of those estimates were found by the multilevel DID models. Similar estimates of mean effects of covariates and random effects of the average LG-OPV among all levels in the example dataset were obtained by both types of model. Significant variances in the dose-response among provinces, counties and facilities were estimated, and the 'lowest' or 'highest' units by their dose-response effects were pinpointed only by the multilevel RM model. For examining dose-response effect based on data from multiple time points with hierarchical structure and the stepped wedge-like designs, multilevel RM models are more efficient, convenient and informative than the multilevel DID models. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
An M-estimator for reduced-rank system identification.
Chen, Shaojie; Liu, Kai; Yang, Yuguang; Xu, Yuting; Lee, Seonjoo; Lindquist, Martin; Caffo, Brian S; Vogelstein, Joshua T
2017-01-15
High-dimensional time-series data from a wide variety of domains, such as neuroscience, are being generated every day. Fitting statistical models to such data, to enable parameter estimation and time-series prediction, is an important computational primitive. Existing methods, however, are unable to cope with the high-dimensional nature of these data, due to both computational and statistical reasons. We mitigate both kinds of issues by proposing an M-estimator for Reduced-rank System IDentification (MR. SID). A combination of low-rank approximations, ℓ1 and ℓ2 penalties, and some numerical linear algebra tricks, yields an estimator that is computationally efficient and numerically stable. Simulations and real data examples demonstrate the usefulness of this approach in a variety of problems. In particular, we demonstrate that MR. SID can accurately estimate spatial filters, connectivity graphs, and time-courses from native resolution functional magnetic resonance imaging data. MR. SID therefore enables big time-series data to be analyzed using standard methods, readying the field for further generalizations including non-linear and non-Gaussian state-space models.
Zhu, Tianqi; Dos Reis, Mario; Yang, Ziheng
2015-03-01
Genetic sequence data provide information about the distances between species or branch lengths in a phylogeny, but not about the absolute divergence times or the evolutionary rates directly. Bayesian methods for dating species divergences estimate times and rates by assigning priors on them. In particular, the prior on times (node ages on the phylogeny) incorporates information in the fossil record to calibrate the molecular tree. Because times and rates are confounded, our posterior time estimates will not approach point values even if an infinite amount of sequence data are used in the analysis. In a previous study we developed a finite-sites theory to characterize the uncertainty in Bayesian divergence time estimation in analysis of large but finite sequence data sets under a strict molecular clock. As most modern clock dating analyses use more than one locus and are conducted under relaxed clock models, here we extend the theory to the case of relaxed clock analysis of data from multiple loci (site partitions). Uncertainty in posterior time estimates is partitioned into three sources: sampling errors in the estimates of branch lengths in the tree for each locus due to limited sequence length, variation of substitution rates among lineages and among loci, and uncertainty in fossil calibrations. Using a simple but analogous estimation problem involving the multivariate normal distribution, we predict that as the number of loci (L) goes to infinity, the variance in posterior time estimates decreases and approaches the infinite-data limit at the rate of 1/L, and the limit is independent of the number of sites in the sequence alignment. We then confirmed the predictions by using computer simulation on phylogenies of two or three species, and by analyzing a real genomic data set for six primate species. Our results suggest that with the fossil calibrations fixed, analyzing multiple loci or site partitions is the most effective way for improving the precision of posterior time estimation. However, even if a huge amount of sequence data is analyzed, considerable uncertainty will persist in time estimates. © The Author(s) 2014. Published by Oxford University Press on behalf of the Society of Systematic Biologists.
Markov switching multinomial logit model: An application to accident-injury severities.
Malyshkina, Nataliya V; Mannering, Fred L
2009-07-01
In this study, two-state Markov switching multinomial logit models are proposed for statistical modeling of accident-injury severities. These models assume Markov switching over time between two unobserved states of roadway safety as a means of accounting for potential unobserved heterogeneity. The states are distinct in the sense that in different states accident-severity outcomes are generated by separate multinomial logit processes. To demonstrate the applicability of the approach, two-state Markov switching multinomial logit models are estimated for severity outcomes of accidents occurring on Indiana roads over a four-year time period. Bayesian inference methods and Markov Chain Monte Carlo (MCMC) simulations are used for model estimation. The estimated Markov switching models result in a superior statistical fit relative to the standard (single-state) multinomial logit models for a number of roadway classes and accident types. It is found that the more frequent state of roadway safety is correlated with better weather conditions and that the less frequent state is correlated with adverse weather conditions.
Gold, Heather Taffet; Sorbero, Melony E. S.; Griggs, Jennifer J.; Do, Huong T.; Dick, Andrew W.
2013-01-01
Analysis of observational cohort data is subject to bias from unobservable risk selection. We compared econometric models and treatment effectiveness estimates using the linked Surveillance, Epidemiology, and End Results (SEER)-Medicare claims data for women diagnosed with ductal carcinoma in situ. Treatment effectiveness estimates for mastectomy and breast conserving surgery (BCS) with or without radiotherapy were compared using three different models: simultaneous-equations model, discrete-time survival model with unobserved heterogeneity (frailty), and proportional hazards model. Overall trends in disease-free survival (DFS), or time to first subsequent breast event, by treatment are similar regardless of the model, with mastectomy yielding the highest DFS over 8 years of follow-up, followed by BCS with radiotherapy, and then BCS alone. Absolute rates and direction of bias varied substantially by treatment strategy. DFS was underestimated by single-equation and frailty models compared to the simultaneous-equations model and RCT results for BCS with RT and overestimated for BCS alone. PMID:21602195
NASA Astrophysics Data System (ADS)
Dilla, Shintia Ulfa; Andriyana, Yudhie; Sudartianto
2017-03-01
Acid rain causes many harmful effects on life. It is formed by two strong acids, sulfuric acid (H2SO4) and nitric acid (HNO3), where sulfuric acid is derived from SO2 and nitric acid from NOx. The purpose of this research is to determine the influence of the SO4 and NO3 levels contained in rain on the acidity (pH) of rainwater. The data are incomplete panel data analyzed with a two-way error component model. Panel data are a collection of observations made on the same individuals repeatedly over time; they are said to be incomplete if each individual has a different number of observations. The model used in this research is a random effects model (REM). Minimum variance quadratic unbiased estimation (MIVQUE) is used to estimate the variance of the error components, while maximum likelihood estimation is used to estimate the parameters. As a result, we obtain the following model: Ŷ* = 0.41276446 - 0.00107302X1 + 0.00215470X2.
Updating stand-level forest inventories using airborne laser scanning and Landsat time series data
NASA Astrophysics Data System (ADS)
Bolton, Douglas K.; White, Joanne C.; Wulder, Michael A.; Coops, Nicholas C.; Hermosilla, Txomin; Yuan, Xiaoping
2018-04-01
Vertical forest structure can be mapped over large areas by combining samples of airborne laser scanning (ALS) data with wall-to-wall spatial data, such as Landsat imagery. Here, we use samples of ALS data and Landsat time-series metrics to produce estimates of top height, basal area, and net stem volume for two timber supply areas near Kamloops, British Columbia, Canada, using an imputation approach. Both single-year and time series metrics were calculated from annual, gap-free Landsat reflectance composites representing 1984-2014. Metrics included long-term means of vegetation indices, as well as measures of the variance and slope of the indices through time. Terrain metrics, generated from a 30 m digital elevation model, were also included as predictors. We found that imputation models improved with the inclusion of Landsat time series metrics when compared to single-year Landsat metrics (relative RMSE decreased from 22.8% to 16.5% for top height, from 32.1% to 23.3% for basal area, and from 45.6% to 34.1% for net stem volume). Landsat metrics that characterized 30-years of stand history resulted in more accurate models (for all three structural attributes) than Landsat metrics that characterized only the most recent 10 or 20 years of stand history. To test model transferability, we compared imputed attributes against ALS-based estimates in nearby forest blocks (>150,000 ha) that were not included in model training or testing. Landsat-imputed attributes correlated strongly to ALS-based estimates in these blocks (R2 = 0.62 and relative RMSE = 13.1% for top height, R2 = 0.75 and relative RMSE = 17.8% for basal area, and R2 = 0.67 and relative RMSE = 26.5% for net stem volume), indicating model transferability. These findings suggest that in areas containing spatially-limited ALS data acquisitions, imputation models, and Landsat time series and terrain metrics can be effectively used to produce wall-to-wall estimates of key inventory attributes, providing an opportunity to update estimates of forest attributes in areas where inventory information is either out of date or non-existent.
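A simplified stand-in for the imputation step, assuming hypothetical Landsat time-series and terrain predictors and ALS-derived top heights; the paper's specific imputation method and metric set are not reproduced:

import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(6)
n_train = 1000
# Columns stand for, e.g., mean NDVI, NDVI slope, NDVI variance, elevation, terrain slope
X_train = rng.normal(size=(n_train, 5))
# ALS-derived top height (m) at sampled pixels, with a hypothetical dependence on the metrics
top_height = 20 + 4 * X_train[:, 0] - 2 * X_train[:, 3] + rng.normal(0, 1.5, n_train)

knn = KNeighborsRegressor(n_neighbors=5).fit(X_train, top_height)
X_wall_to_wall = rng.normal(size=(3, 5))          # pixels without ALS coverage
print(knn.predict(X_wall_to_wall))                 # imputed top heights for those pixels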
2013-01-01
Background Accurate model comparison requires extensive computation times, especially for parameter-rich models of sequence evolution. In the Bayesian framework, model selection is typically performed through the evaluation of a Bayes factor, the ratio of two marginal likelihoods (one for each model). Recently introduced techniques to estimate (log) marginal likelihoods, such as path sampling and stepping-stone sampling, offer increased accuracy over the traditional harmonic mean estimator at an increased computational cost. Most often, each model’s marginal likelihood will be estimated individually, which leads the resulting Bayes factor to suffer from errors associated with each of these independent estimation processes. Results We here assess the original ‘model-switch’ path sampling approach for direct Bayes factor estimation in phylogenetics, as well as an extension that uses more samples, to construct a direct path between two competing models, thereby eliminating the need to calculate each model’s marginal likelihood independently. Further, we provide a competing Bayes factor estimator using an adaptation of the recently introduced stepping-stone sampling algorithm and set out to determine appropriate settings for accurately calculating such Bayes factors, with context-dependent evolutionary models as an example. While we show that modest efforts are required to roughly identify the increase in model fit, only drastically increased computation times ensure the accuracy needed to detect more subtle details of the evolutionary process. Conclusions We show that our adaptation of stepping-stone sampling for direct Bayes factor calculation outperforms the original path sampling approach as well as an extension that exploits more samples. Our proposed approach for Bayes factor estimation also has preferable statistical properties over the use of individual marginal likelihood estimates for both models under comparison. Assuming a sigmoid function to determine the path between two competing models, we provide evidence that a single well-chosen sigmoid shape value requires less computational efforts in order to approximate the true value of the (log) Bayes factor compared to the original approach. We show that the (log) Bayes factors calculated using path sampling and stepping-stone sampling differ drastically from those estimated using either of the harmonic mean estimators, supporting earlier claims that the latter systematically overestimate the performance of high-dimensional models, which we show can lead to erroneous conclusions. Based on our results, we argue that highly accurate estimation of differences in model fit for high-dimensional models requires much more computational effort than suggested in recent studies on marginal likelihood estimation. PMID:23497171
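The per-model stepping-stone estimator referred to above can be sketched as follows; the paper's 'direct' model-switch path between two competing models is not reproduced, and the toy check uses a conjugate normal model whose marginal likelihood is known exactly:

import numpy as np
from scipy.special import logsumexp
from scipy.stats import norm, multivariate_normal

def stepping_stone_log_ml(loglik_samples, betas):
    # loglik_samples[k]: log-likelihoods of draws from the power posterior at betas[k];
    # betas runs from 0 (prior) to 1 (posterior).
    log_ml = 0.0
    for k in range(len(betas) - 1):
        ll = np.asarray(loglik_samples[k])
        log_ml += logsumexp((betas[k + 1] - betas[k]) * ll) - np.log(ll.size)
    return log_ml

# Toy check on a conjugate model: y_i ~ N(mu, 1), mu ~ N(0, 1)
rng = np.random.default_rng(0)
y = rng.normal(0.5, 1.0, size=20)
n, s = y.size, y.sum()
betas = np.linspace(0.0, 1.0, 32) ** 3        # beta spacing concentrated near the prior
samples = []
for b in betas[:-1]:
    post_var = 1.0 / (1.0 + b * n)            # exact power posterior for this toy model
    mu = rng.normal(b * s * post_var, np.sqrt(post_var), size=4000)
    samples.append(norm.logpdf(y[:, None], loc=mu, scale=1.0).sum(axis=0))
exact = multivariate_normal.logpdf(y, mean=np.zeros(n), cov=np.eye(n) + np.ones((n, n)))
print(stepping_stone_log_ml(samples, betas), exact)
# A log Bayes factor would be the difference of two such per-model estimates.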
Estimating rates of local species extinction, colonization and turnover in animal communities
Nichols, James D.; Boulinier, T.; Hines, J.E.; Pollock, K.H.; Sauer, J.R.
1998-01-01
Species richness has been identified as a useful state variable for conservation and management purposes. Changes in richness over time provide a basis for predicting and evaluating community responses to management, to natural disturbance, and to changes in factors such as community composition (e.g., the removal of a keystone species). Probabilistic capture-recapture models have been used recently to estimate species richness from species count and presence-absence data. These models do not require the common assumption that all species are detected in sampling efforts. We extend this approach to the development of estimators useful for studying the vital rates responsible for changes in animal communities over time; rates of local species extinction, turnover, and colonization. Our approach to estimation is based on capture-recapture models for closed animal populations that permit heterogeneity in detection probabilities among the different species in the sampled community. We have developed a computer program, COMDYN, to compute many of these estimators and associated bootstrap variances. Analyses using data from the North American Breeding Bird Survey (BBS) suggested that the estimators performed reasonably well. We recommend estimators based on probabilistic modeling for future work on community responses to management efforts as well as on basic questions about community dynamics.
NASA Astrophysics Data System (ADS)
Zhu, Jing; Zhou, Zebo; Li, Yong; Rizos, Chris; Wang, Xingshu
2016-07-01
An improvement of the attitude difference method (ADM) to estimate deflections of the vertical (DOV) in real time is described in this paper. The ADM without offline processing estimates the DOV with a limited accuracy due to the response delay. The proposed model selection-based self-adaptive delay feedback (SDF) method takes the results of the ADM as the a priori information, then uses fitting and extrapolation to estimate the DOV at the current epoch. The active region selection factor Fth is used to take full advantage of the Earth model EGM2008 and the SDF with different DOV exhibitions. The factors which affect the DOV estimation accuracy are analyzed and modeled. An external observation which is specified by the velocity difference between the global navigation satellite system (GNSS) and the inertial navigation system (INS) with DOV compensated is used to select the optimal model. The response delay induced by the weak observability of an integrated INS/GNSS to the violent DOV disturbances in the ADM is compensated. The DOV estimation accuracy of the SDF method is improved by approximately 40% and 50% respectively compared to that of the EGM2008 and the ADM. With an increase in GNSS accuracy, the DOV estimation accuracy could improve further.
Shanafield, Margaret; Niswonger, Richard G.; Prudic, David E.; Pohll, Greg; Susfalk, Richard; Panday, Sorab
2014-01-01
Infiltration along ephemeral channels plays an important role in groundwater recharge in arid regions. A model is presented for estimating spatial variability of seepage due to streambed heterogeneity along channels based on measurements of streamflow-front velocities in initially dry channels. The diffusion-wave approximation to the Saint-Venant equations, coupled with Philip's equation for infiltration, is connected to the groundwater model MODFLOW and is calibrated by adjusting the saturated hydraulic conductivity of the channel bed. The model is applied to portions of two large water delivery canals, which serve as proxies for natural ephemeral streams. Estimated seepage rates compare well with previously published values. Possible sources of error stem from uncertainty in Manning's roughness coefficients, soil hydraulic properties and channel geometry. Model performance would be most improved through more frequent longitudinal estimates of channel geometry and thalweg elevation, and with measurements of stream stage over time to constrain wave timing and shape. This model is a potentially valuable tool for estimating spatial variability in longitudinal seepage along intermittent and ephemeral channels over a wide range of bed slopes and the influence of seepage rates on groundwater levels.
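Philip's two-term infiltration equation, the infiltration component named above, can be written directly (units and parameter values below are hypothetical; the coupling to the diffusion-wave routing and MODFLOW is not shown):

import numpy as np

def philip_cumulative_infiltration(t, sorptivity, k_sat):
    # Philip's two-term equation: F(t) = S*sqrt(t) + K*t (consistent units, t > 0)
    t = np.asarray(t, dtype=float)
    return sorptivity * np.sqrt(t) + k_sat * t

def philip_infiltration_rate(t, sorptivity, k_sat):
    # f(t) = dF/dt = S / (2*sqrt(t)) + K
    t = np.asarray(t, dtype=float)
    return sorptivity / (2.0 * np.sqrt(t)) + k_sat

t_hours = np.linspace(0.1, 12.0, 50)
# Hypothetical values, e.g. S in cm/h^0.5 and K_sat in cm/h for a sandy channel bed
print(philip_infiltration_rate(t_hours, sorptivity=2.0, k_sat=0.5)[:5])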
NASA Technical Reports Server (NTRS)
Braverman, Amy; Nguyen, Hai; Olsen, Edward; Cressie, Noel
2011-01-01
Space-time Data Fusion (STDF) is a methodology for combining heterogeneous remote sensing data to optimally estimate the true values of a geophysical field of interest, and obtain uncertainties for those estimates. The input data sets may have different observing characteristics including different footprints, spatial resolutions and fields of view, orbit cycles, biases, and noise characteristics. Despite these differences all observed data can be linked to the underlying field, and therefore to each other, by a statistical model. Differences in footprints and other geometric characteristics are accounted for by parameterizing pixel-level remote sensing observations as spatial integrals of true field values lying within pixel boundaries, plus measurement error. Both spatial and temporal correlations in the true field and in the observations are estimated and incorporated through the use of a space-time random effects (STRE) model. Once the model's parameters are estimated, we use it to derive expressions for optimal (minimum mean squared error and unbiased) estimates of the true field at any arbitrary location of interest, computed from the observations. Standard errors of these estimates are also produced, allowing confidence intervals to be constructed. The procedure is carried out on a fine spatial grid to approximate a continuous field. We demonstrate STDF by applying it to the problem of estimating CO2 concentration in the lower atmosphere using data from the Atmospheric Infrared Sounder (AIRS) and the Japanese Greenhouse Gases Observing Satellite (GOSAT) over one year for the continental US.
Koral, Kenneth F.; Avram, Anca M.; Kaminski, Mark S.; Dewaraja, Yuni K.
2012-01-01
Background For individualized treatment planning in radioimmunotherapy (RIT), correlations must be established between tracer-predicted and therapy-delivered absorbed doses. The focus of this work was to investigate this correlation for tumors. Methods The study analyzed 57 tumors in 19 follicular lymphoma patients treated with I-131 tositumomab and imaged with SPECT/CT multiple times after tracer and therapy administrations. Instead of the typical least-squares fit to a single tumor's measured time-activity data, estimation was accomplished via a biexponential mixed model in which the curves from multiple subjects were jointly estimated. The tumor-absorbed dose estimates were determined by patient-specific Monte Carlo calculation. Results The mixed model gave realistic tumor time-activity fits that showed the expected uptake and clearance phases even with noisy data or missing time points. Correlation between tracer and therapy tumor-residence times (r=0.98; p<0.0001) and correlation between tracer-predicted and therapy-delivered mean tumor-absorbed doses (r=0.86; p<0.0001) were very high. The predicted and delivered absorbed doses were within ±25% (or within ±75 cGy) for 80% of tumors. Conclusions The mixed-model approach is feasible for fitting tumor time-activity data in RIT treatment planning when individual least-squares fitting is not possible due to inadequate sampling points. The good correlation between predicted and delivered tumor doses demonstrates the potential of using a pretherapy tracer study for tumor dosimetry-based treatment planning in RIT. PMID:22947086
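For contrast with the joint mixed model, the single-tumor biexponential least-squares fit that the paper improves upon can be sketched as follows, with hypothetical time-activity data; the residence time is the analytic integral of the fitted curve:

import numpy as np
from scipy.optimize import curve_fit

def biexp(t, amp, lam_uptake, lam_clear):
    # Uptake-and-clearance form: amp * (exp(-lam_clear*t) - exp(-lam_uptake*t))
    return amp * (np.exp(-lam_clear * t) - np.exp(-lam_uptake * t))

t = np.array([2.0, 24.0, 72.0, 144.0])                 # hours post-administration (hypothetical)
true = (4.0, 0.15, 0.012)
y = biexp(t, *true) * (1 + 0.05 * np.random.default_rng(1).normal(size=t.size))

popt, _ = curve_fit(biexp, t, y, p0=(1.0, 0.3, 0.02), maxfev=20_000)
# Residence time = integral of the fitted curve from 0 to infinity: amp*(1/lam_clear - 1/lam_uptake)
residence_time = popt[0] * (1.0 / popt[2] - 1.0 / popt[1])
print(popt, residence_time)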
Survival and recovery rates of American woodcock banded in Michigan
Krementz, David G.; Hines, James E.; Luukkonen, David R.
2003-01-01
American woodcock (Scolopax minor) population indices have declined since U.S. Fish and Wildlife Service (USFWS) monitoring began in 1968. Management to stop and/or reverse this population trend has been hampered by the lack of recent information on woodcock population parameters. Without recent information on survival rate trends, managers have had to assume that the recent declines in recruitment indices are the only parameter driving woodcock declines. Using program MARK, we estimated annual survival and recovery rates of adult and juvenile American woodcock, and estimated summer survival of local (young incapable of sustained flight) woodcock banded in Michigan between 1978 and 1998. We constructed a set of candidate models from a global model with age (local, juvenile, adult) and time (year)-dependent survival and recovery rates to no age- or time-dependent survival and recovery rates. Five models were supported by the data, with all models suggesting that survival rates differed among age classes, and 4 models had survival rates that were constant over time. The fifth model suggested that juvenile and adult survival rates were linear on a logit scale over time. Survival rates averaged over likelihood-weighted model results were 0.8784 ± 0.1048 (SE) for locals, 0.2646 ± 0.0423 (SE) for juveniles, and 0.4898 ± 0.0329 (SE) for adults. Weighted average recovery rates were 0.0326 ± 0.0053 (SE) for juveniles and 0.0313 ± 0.0047 (SE) for adults. Estimated differences between our survival estimates and those from prior years were small, and our confidence around those differences was variable and uncertain. Juvenile survival rates were low.
Parameter estimation in tree graph metabolic networks.
Astola, Laura; Stigter, Hans; Gomez Roldan, Maria Victoria; van Eeuwijk, Fred; Hall, Robert D; Groenenboom, Marian; Molenaar, Jaap J
2016-01-01
We study the glycosylation processes that convert initially toxic substrates to nutritionally valuable metabolites in the flavonoid biosynthesis pathway of tomato (Solanum lycopersicum) seedlings. To estimate the reaction rates we use ordinary differential equations (ODEs) to model the enzyme kinetics. A popular choice is to use a system of linear ODEs with constant kinetic rates or to use Michaelis-Menten kinetics. In reality, the catalytic rates, which are affected by kinetic constants and enzyme concentrations among other factors, change over time, and this phenomenon cannot be described with the approaches just mentioned. Another problem is that, in general, these kinetic coefficients are not always identifiable. A third problem is that it is not precisely known which enzymes catalyze the observed glycosylation processes. With several hundred potential gene candidates, experimental validation using purified target proteins is expensive and time consuming. We aim to reduce this task via mathematical modeling to allow for the pre-selection of the most promising gene candidates. In this article we discuss a fast and relatively simple approach to estimate time-varying kinetic rates, with three favorable properties: first, it allows for identifiable estimation of time-dependent parameters in networks with a tree-like structure; second, it is relatively fast compared to commonly applied methods that estimate the model derivatives together with the network parameters; third, by combining the metabolite concentration data with corresponding microarray data, it can help in detecting the genes related to the enzymatic processes. By comparing the estimated time dynamics of the catalytic rates with time series gene expression data we may assess potential candidate genes behind enzymatic reactions. As an example, we show how to apply this method to select prominent glycosyltransferase genes in tomato seedlings.
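A minimal forward model of a tree-like chain with time-varying catalytic rates, assuming hypothetical rate functions (the paper's inverse estimation from measured metabolite and microarray data is not reproduced):

import numpy as np
from scipy.integrate import solve_ivp

def rates(t):
    # Hypothetical time-varying catalytic rates k1(t), k2(t), in 1/h
    return 0.5 + 0.3 * np.sin(0.2 * t), 0.2 + 0.1 * np.cos(0.1 * t)

def tree_network(t, c):
    # Substrate S -> M1 -> M2, a minimal tree-like glycosylation chain
    s, m1, m2 = c
    k1, k2 = rates(t)
    return [-k1 * s, k1 * s - k2 * m1, k2 * m1]

sol = solve_ivp(tree_network, t_span=(0.0, 48.0), y0=[10.0, 0.0, 0.0],
                t_eval=np.linspace(0.0, 48.0, 25))
print(sol.y[:, -1])   # concentrations at the final time point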
Time Series Decomposition into Oscillation Components and Phase Estimation.
Matsuda, Takeru; Komaki, Fumiyasu
2017-02-01
Many time series are naturally considered as a superposition of several oscillation components. For example, electroencephalogram (EEG) time series include oscillation components such as alpha, beta, and gamma. We propose a method for decomposing time series into such oscillation components using state-space models. Based on the concept of random frequency modulation, gaussian linear state-space models for oscillation components are developed. In this model, the frequency of an oscillator fluctuates by noise. Time series decomposition is accomplished by this model like the Bayesian seasonal adjustment method. Since the model parameters are estimated from data by the empirical Bayes' method, the amplitudes and the frequencies of oscillation components are determined in a data-driven manner. Also, the appropriate number of oscillation components is determined with the Akaike information criterion (AIC). In this way, the proposed method provides a natural decomposition of the given time series into oscillation components. In neuroscience, the phase of neural time series plays an important role in neural information processing. The proposed method can be used to estimate the phase of each oscillation component and has several advantages over a conventional method based on the Hilbert transform. Thus, the proposed method enables an investigation of the phase dynamics of time series. Numerical results show that the proposed method succeeds in extracting intermittent oscillations like ripples and detecting the phase reset phenomena. We apply the proposed method to real data from various fields such as astronomy, ecology, tidology, and neuroscience.
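A minimal sketch of one oscillation component as a Gaussian linear state-space model, with a Kalman filter returning an instantaneous phase estimate; parameters are fixed by hand here rather than estimated by the empirical Bayes method and AIC as in the paper:

import numpy as np

def oscillator_ssm(freq_hz, damping, dt, sigma_w):
    # A damped 2-D rotation driven by Gaussian noise, observed through its first component
    th = 2.0 * np.pi * freq_hz * dt
    F = damping * np.array([[np.cos(th), -np.sin(th)],
                            [np.sin(th),  np.cos(th)]])
    Q = (sigma_w ** 2) * np.eye(2)
    H = np.array([1.0, 0.0])
    return F, Q, H

def filter_phase(y, F, Q, H, obs_var):
    # Standard Kalman filter; the phase is the angle of the 2-D latent state
    x, P = np.zeros(2), np.eye(2)
    phase = np.empty(len(y))
    for t, yt in enumerate(y):
        x, P = F @ x, F @ P @ F.T + Q                 # predict
        S = H @ P @ H + obs_var
        K = P @ H / S
        x = x + K * (yt - H @ x)                      # update
        P = P - np.outer(K, H @ P)
        phase[t] = np.arctan2(x[1], x[0])
    return phase

rng = np.random.default_rng(0)
t = np.arange(2000) * 0.001
y = np.sin(2 * np.pi * 10.0 * t) + 0.2 * rng.standard_normal(t.size)
F, Q, H = oscillator_ssm(freq_hz=10.0, damping=0.995, dt=0.001, sigma_w=0.05)
print(filter_phase(y, F, Q, H, obs_var=0.04)[:5])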
NASA Astrophysics Data System (ADS)
Paiva, Rodrigo C. D.; Durand, Michael T.; Hossain, Faisal
2015-01-01
Recent efforts have sought to estimate river discharge and other surface water-related quantities using spaceborne sensors, with better spatial coverage but worse temporal sampling as compared with in situ measurements. The Surface Water and Ocean Topography (SWOT) mission will provide river discharge estimates globally from space. However, questions on how to optimally use the spatially distributed but asynchronous satellite observations to generate continuous fields still exist. This paper presents a statistical model (River Kriging-RK), for estimating discharge time series in a river network in the context of the SWOT mission. RK uses discharge estimates at different locations and times to produce a continuous field using spatiotemporal kriging. A key component of RK is the space-time river discharge covariance, which was derived analytically from the diffusive wave approximation of Saint Venant's equations. The RK covariance also accounts for the loss of correlation at confluences. The model performed well in a case study on Ganges-Brahmaputra-Meghna (GBM) River system in Bangladesh using synthetic SWOT observations. The correlation model reproduced empirically derived values. RK (R2=0.83) outperformed other kriging-based methods (R2=0.80), as well as a simple time series linear interpolation (R2=0.72). RK was used to combine discharge from SWOT and in situ observations, improving estimates when the latter is included (R2=0.91). The proposed statistical concepts may eventually provide a feasible framework to estimate continuous discharge time series across a river network based on SWOT data, other altimetry missions, and/or in situ data.
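A simple kriging sketch with a separable exponential space-time covariance standing in for the diffusive-wave-derived covariance described above; coordinates, observation values, and covariance parameters are hypothetical, and the confluence de-correlation is not modelled:

import numpy as np

def st_covariance(d_km, dt_days, sill=1.0, range_km=200.0, range_days=10.0):
    # Separable exponential space-time covariance (a simplification of the paper's model)
    return sill * np.exp(-d_km / range_km) * np.exp(-np.abs(dt_days) / range_days)

def simple_kriging(obs_xyt, obs_vals, tgt_xyt, nugget=0.05):
    def dist(a, b):
        return np.hypot(a[0] - b[0], a[1] - b[1])
    n = len(obs_vals)
    C = np.array([[st_covariance(dist(p, q), p[2] - q[2]) for q in obs_xyt] for p in obs_xyt])
    C += nugget * np.eye(n)
    c0 = np.array([st_covariance(dist(p, tgt_xyt), p[2] - tgt_xyt[2]) for p in obs_xyt])
    w = np.linalg.solve(C, c0)
    mean = obs_vals.mean()                      # simple kriging about the sample mean
    return mean + w @ (obs_vals - mean)

# Hypothetical asynchronous overpass samples: (x_km, y_km, t_days) and discharge values (m3/s)
obs = [(0, 0, 0.0), (50, 0, 3.0), (120, 10, 7.0)]
vals = np.array([4200.0, 3900.0, 4500.0])
print(simple_kriging(obs, vals, (60, 5, 5.0)))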
Fan, Ming; Kuwahara, Hiroyuki; Wang, Xiaolei; Wang, Suojin; Gao, Xin
2015-11-01
Parameter estimation is a challenging computational problem in the reverse engineering of biological systems. Because advances in biotechnology have facilitated wide availability of time-series gene expression data, systematic parameter estimation of gene circuit models from such time-series mRNA data has become an important method for quantitatively dissecting the regulation of gene expression. By focusing on the modeling of gene circuits, we examine here the performance of three types of state-of-the-art parameter estimation methods: population-based methods, online methods and model-decomposition-based methods. Our results show that certain population-based methods are able to generate high-quality parameter solutions. The performance of these methods, however, is heavily dependent on the size of the parameter search space, and their computational requirements substantially increase as the size of the search space increases. In comparison, online methods and model decomposition-based methods are computationally faster alternatives and are less dependent on the size of the search space. Among other things, our results show that a hybrid approach that augments computationally fast methods with local search as a subsequent refinement procedure can substantially increase the quality of their parameter estimates to the level on par with the best solution obtained from the population-based methods while maintaining high computational speed. These suggest that such hybrid methods can be a promising alternative to the more commonly used population-based methods for parameter estimation of gene circuit models when limited prior knowledge about the underlying regulatory mechanisms makes the size of the parameter search space vastly large. © The Author 2015. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
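The hybrid strategy described above (a population-based global search followed by local refinement) can be sketched with SciPy on a toy two-parameter gene-expression model; the model, data, and bounds are hypothetical:

import numpy as np
from scipy.optimize import differential_evolution, minimize

def sse(params, t, y):
    # Toy model m(t) = (k/g) * (1 - exp(-g*t)): synthesis rate k, degradation rate g
    k, g = params
    model = (k / g) * (1.0 - np.exp(-g * t))
    return float(np.sum((model - y) ** 2))

t = np.linspace(0.5, 10.0, 20)
rng = np.random.default_rng(0)
y = (3.0 / 0.8) * (1.0 - np.exp(-0.8 * t)) + 0.1 * rng.normal(size=t.size)

bounds = [(1e-3, 10.0), (1e-3, 5.0)]
coarse = differential_evolution(sse, bounds, args=(t, y), maxiter=50, tol=1e-3, seed=1)
refined = minimize(sse, coarse.x, args=(t, y), method="L-BFGS-B", bounds=bounds)
print(coarse.x, refined.x)   # global estimate, then locally refined estimate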
The radial speed-expansion speed relation for Earth-directed CMEs
NASA Astrophysics Data System (ADS)
Mäkelä, P.; Gopalswamy, N.; Yashiro, S.
2016-05-01
Earth-directed coronal mass ejections (CMEs) are the main drivers of major geomagnetic storms. Therefore, a good estimate of the disturbance arrival time at Earth is required for space weather predictions. The STEREO and SOHO spacecraft were viewing the Sun in near quadrature during January 2010 to September 2012, providing a unique opportunity to study the radial speed (Vrad)-expansion speed (Vexp) relationship of Earth-directed CMEs. This relationship is useful in estimating the Vrad of Earth-directed CMEs, when they are observed from Earth view only. We selected 19 Earth-directed CMEs observed by the Large Angle and Spectrometric Coronagraph (LASCO)/C3 coronagraph on SOHO and the Sun Earth Connection Coronal and Heliospheric Investigation (SECCHI)/COR2 coronagraph on STEREO during January 2010 to September 2012. We found that of the three tested geometric CME models the full ice-cream cone model of the CME describes best the Vrad-Vexp relationship, as suggested by earlier investigations. We also tested the prediction accuracy of the empirical shock arrival (ESA) model proposed by Gopalswamy et al. (2005a), while estimating the CME propagation speeds from the CME expansion speeds. If we use STEREO observations to estimate the CME width required to calculate the Vrad from the Vexp measurements, the mean absolute error (MAE) of the shock arrival times of the ESA model is 8.4 h. If the LASCO measurements are used to estimate the CME width, the MAE still remains below 17 h. Therefore, by using the simple Vrad-Vexp relationship to estimate the Vrad of the Earth-directed CMEs, the ESA model is able to predict the shock arrival times with accuracy comparable to most other more complex models.
Improved Uncertainty Quantification in Groundwater Flux Estimation Using GRACE
NASA Astrophysics Data System (ADS)
Reager, J. T., II; Rao, P.; Famiglietti, J. S.; Turmon, M.
2015-12-01
Groundwater change is difficult to monitor over large scales. One of the most successful approaches is in the remote sensing of time-variable gravity using NASA Gravity Recovery and Climate Experiment (GRACE) mission data, and successful case studies have created the opportunity to move towards a global groundwater monitoring framework for the world's largest aquifers. To achieve these estimates, several approximations are applied, including those in GRACE processing corrections, the formulation of the formal GRACE errors, destriping and signal recovery, and the numerical model estimation of snow water, surface water and soil moisture storage states used to isolate a groundwater component. A major weakness in these approaches is inconsistency: different studies have used different sources of primary and ancillary data, and may achieve different results based on alternative choices in these approximations. In this study, we present two cases of groundwater change estimation in California and the Colorado River basin, selected for their good data availability and varied climates. We achieve a robust numerical estimate of post-processing uncertainties resulting from land-surface model structural shortcomings and model resolution errors. Groundwater variations should demonstrate less variability than the overlying soil moisture state does, as groundwater has a longer memory of past events due to buffering by infiltration and drainage rate limits. We apply a model ensemble approach in a Bayesian framework constrained by the assumption of decreasing signal variability with depth in the soil column. We also discuss time variable errors vs. time constant errors, across-scale errors v. across-model errors, and error spectral content (across scales and across model). More robust uncertainty quantification for GRACE-based groundwater estimates would take all of these issues into account, allowing for more fair use in management applications and for better integration of GRACE-based measurements with observations from other sources.
Online Estimation of Model Parameters of Lithium-Ion Battery Using the Cubature Kalman Filter
NASA Astrophysics Data System (ADS)
Tian, Yong; Yan, Rusheng; Tian, Jindong; Zhou, Shijie; Hu, Chao
2017-11-01
Online estimation of state variables, including state-of-charge (SOC), state-of-energy (SOE), and state-of-health (SOH), is crucial for the safe operation of lithium-ion batteries. In order to improve the estimation accuracy of these state variables, a precise battery model needs to be established. As the lithium-ion battery is a nonlinear time-varying system, the model parameters vary significantly with many factors, such as ambient temperature, discharge rate, and depth of discharge. This paper presents an online estimation method of model parameters for lithium-ion batteries based on the cubature Kalman filter. The commonly used first-order resistor-capacitor equivalent circuit model is selected as the battery model, based on which the model parameters are estimated online. Experimental results show that the presented method can accurately track the parameter variations in different scenarios.
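The first-order resistor-capacitor (Thevenin) battery model named above can be simulated directly; parameter values are hypothetical, and the cubature Kalman filter used for the online estimation is not reproduced:

import numpy as np

def simulate_terminal_voltage(current, dt, ocv, r0, r1, c1):
    # First-order RC model: V_terminal = OCV - I*R0 - V_rc, with the RC branch
    # following a first-order lag driven by the current (exact discretization).
    a = np.exp(-dt / (r1 * c1))
    v_rc = 0.0
    v_out = np.empty_like(current, dtype=float)
    for k, i_k in enumerate(current):
        v_rc = a * v_rc + r1 * (1.0 - a) * i_k
        v_out[k] = ocv - i_k * r0 - v_rc
    return v_out

# Hypothetical 2 A discharge pulse at 1 s resolution
i_profile = np.r_[np.zeros(10), 2.0 * np.ones(60), np.zeros(30)]
print(simulate_terminal_voltage(i_profile, dt=1.0, ocv=3.7, r0=0.05, r1=0.03, c1=1500.0)[:5])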
Methods for determining time of death.
Madea, Burkhard
2016-12-01
Medicolegal death time estimation must estimate the time since death reliably. Reliability can only be provided empirically by statistical analysis of errors in field studies. Determining the time since death requires the calculation of measurable data along a time-dependent curve back to the starting point. Various methods are used to estimate the time since death. The current gold standard for death time estimation is a previously established nomogram method based on the two-exponential model of body cooling. Great experimental and practical achievements have been realized using this nomogram method. To reduce the margin of error of the nomogram method, a compound method was developed based on electrical and mechanical excitability of skeletal muscle, pharmacological excitability of the iris, rigor mortis, and postmortem lividity. Further increasing the accuracy of death time estimation involves the development of conditional probability distributions for death time estimation based on the compound method. Although many studies have evaluated chemical methods of death time estimation, such methods play a marginal role in daily forensic practice. However, increased precision of death time estimation has recently been achieved by considering various influencing factors (i.e., preexisting diseases, duration of terminal episode, and ambient temperature). Putrefactive changes may be used for death time estimation in water-immersed bodies. Furthermore, recently developed technologies, such as ¹H magnetic resonance spectroscopy, can be used to quantitatively study decompositional changes. This review addresses the gold standard method of death time estimation in forensic practice and promising technological and scientific developments in the field.
Adaptive optimal stochastic state feedback control of resistive wall modes in tokamaks
NASA Astrophysics Data System (ADS)
Sun, Z.; Sen, A. K.; Longman, R. W.
2006-01-01
An adaptive optimal stochastic state feedback control is developed to stabilize the resistive wall mode (RWM) instability in tokamaks. The extended least-square method with exponential forgetting factor and covariance resetting is used to identify (experimentally determine) the time-varying stochastic system model. A Kalman filter is used to estimate the system states. The estimated system states are passed on to an optimal state feedback controller to construct control inputs. The Kalman filter and the optimal state feedback controller are periodically redesigned online based on the identified system model. This adaptive controller can stabilize the time-dependent RWM in a slowly evolving tokamak discharge. This is accomplished within a time delay of roughly four times the inverse of the growth rate for the time-invariant model used.
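A sketch of recursive least squares with an exponential forgetting factor and covariance resetting, the identification ingredients named above, on synthetic data (the paper's full extended least-squares identifier, Kalman filter, and optimal state feedback loop are not reproduced):

import numpy as np

def rls_forgetting(phi, y, lam=0.98, p0=1e3, reset_threshold=None):
    # Recursive least squares with exponential forgetting; optional covariance resetting
    # when the trace of P collapses, to keep the estimator alert to parameter drift.
    n = phi.shape[1]
    theta = np.zeros(n)
    P = p0 * np.eye(n)
    for x, yk in zip(phi, y):
        k = P @ x / (lam + x @ P @ x)
        theta = theta + k * (yk - x @ theta)
        P = (P - np.outer(k, x @ P)) / lam
        if reset_threshold is not None and np.trace(P) < reset_threshold:
            P = p0 * np.eye(n)                      # covariance resetting
    return theta

rng = np.random.default_rng(2)
phi = rng.normal(size=(500, 2))
y = phi @ np.array([1.5, -0.7]) + 0.05 * rng.normal(size=500)
print(rls_forgetting(phi, y))   # should recover approximately [1.5, -0.7]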
Adaptive Optimal Stochastic State Feedback Control of Resistive Wall Modes in Tokamaks
NASA Astrophysics Data System (ADS)
Sun, Z.; Sen, A. K.; Longman, R. W.
2007-06-01
An adaptive optimal stochastic state feedback control is developed to stabilize the resistive wall mode (RWM) instability in tokamaks. The extended least square method with exponential forgetting factor and covariance resetting is used to identify the time-varying stochastic system model. A Kalman filter is used to estimate the system states. The estimated system states are passed on to an optimal state feedback controller to construct control inputs. The Kalman filter and the optimal state feedback controller are periodically redesigned online based on the identified system model. This adaptive controller can stabilize the time dependent RWM in a slowly evolving tokamak discharge. This is accomplished within a time delay of roughly four times the inverse of the growth rate for the time-invariant model used.
Stochastic modeling for time series InSAR: with emphasis on atmospheric effects
NASA Astrophysics Data System (ADS)
Cao, Yunmeng; Li, Zhiwei; Wei, Jianchao; Hu, Jun; Duan, Meng; Feng, Guangcai
2018-02-01
Despite the many applications of time series interferometric synthetic aperture radar (TS-InSAR) techniques in geophysical problems, error analysis and assessment have been largely overlooked. Tropospheric propagation error is still the dominant error source of InSAR observations. However, the spatiotemporal variation of atmospheric effects is seldom considered in the present standard TS-InSAR techniques, such as persistent scatterer interferometry and small baseline subset interferometry. The failure to consider the stochastic properties of atmospheric effects not only affects the accuracy of the estimators, but also makes it difficult to assess the uncertainty of the final geophysical results. To address this issue, this paper proposes a network-based variance-covariance estimation method to model the spatiotemporal variation of tropospheric signals, and to estimate the temporal variance-covariance matrix of TS-InSAR observations. The constructed stochastic model is then incorporated into the TS-InSAR estimators both for parameters (e.g., deformation velocity, topography residual) estimation and uncertainty assessment. It is an incremental and positive improvement to the traditional weighted least squares methods to solve the multitemporal InSAR time series. The performance of the proposed method is validated by using both simulated and real datasets.
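A sketch of the weighted (generalized) least-squares step with a temporal variance-covariance matrix, here a simple exponential covariance standing in for the network-estimated tropospheric covariance; data and parameters are synthetic:

import numpy as np

def gls_velocity(times_yr, phase_obs, corr_time_yr=0.5, sigma_tropo=5.0):
    # Linear model (offset + deformation velocity) solved by generalized least squares
    # with an exponential temporal covariance for the tropospheric noise (mm^2).
    A = np.column_stack([np.ones_like(times_yr), times_yr])
    dt = np.abs(times_yr[:, None] - times_yr[None, :])
    C = sigma_tropo ** 2 * np.exp(-dt / corr_time_yr)
    Ci = np.linalg.inv(C)
    cov = np.linalg.inv(A.T @ Ci @ A)
    x = cov @ (A.T @ Ci @ phase_obs)
    return x, np.sqrt(np.diag(cov))                  # estimates and their 1-sigma uncertainties

t = np.linspace(0.0, 3.0, 30)
rng = np.random.default_rng(3)
noise_cov = 25.0 * np.exp(-np.abs(t[:, None] - t[None, :]) / 0.5)
obs = -12.0 * t + rng.multivariate_normal(np.zeros(t.size), noise_cov)   # mm, subsidence signal
print(gls_velocity(t, obs))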
Røislien, Jo; Clausen, Thomas; Gran, Jon Michael; Bukten, Anne
2014-05-17
The reduction of crime is an important outcome of opioid maintenance treatment (OMT). Criminal intensity and treatment regimes vary among OMT patients, but this is rarely adjusted for in statistical analyses, which tend to focus on cohort incidence rates and rate ratios. The purpose of this work was to estimate the relationship between treatment and criminal convictions among OMT patients, adjusting for individual covariate information and timing of events, fitting time-to-event regression models of increasing complexity. National criminal records were cross-linked with treatment data on 3221 patients starting OMT in Norway 1997-2003. In addition to calculating cohort incidence rates, criminal convictions were modelled as a recurrent-event dependent variable, and treatment as a time-dependent covariate, in Cox proportional hazards, Aalen's additive hazards, and semi-parametric additive hazards regression models. Both fixed and dynamic covariates were included. During OMT, the number of days with criminal convictions for the cohort as a whole was 61% lower than when not in treatment. OMT was associated with a reduced number of days with criminal convictions in all time-to-event regression models, but the hazard ratio (95% CI) was strongly attenuated when adjusting for covariates; from 0.40 (0.35, 0.45) in a univariate model to 0.79 (0.72, 0.87) in a fully adjusted model. The hazard was lower for females and decreased with older age, while increasing with high numbers of criminal convictions prior to application to OMT (all p < 0.001). The strongest predictors were level of criminal activity prior to entering into OMT, and having a recent criminal conviction (both p < 0.001). The effect of several predictors was significantly time-varying, with their effects diminishing over time. Analyzing complex observational data with regard to fixed factors only overlooks important temporal information, and naïve cohort-level incidence rates might result in biased estimates of the effect of interventions. Applying time-to-event regression models, properly adjusting for individual covariate information and timing of various events, allows for more precise and reliable effect estimates, as well as painting a more nuanced picture that can aid health care professionals and policy makers.
Predicting Operator Execution Times Using CogTool
NASA Technical Reports Server (NTRS)
Santiago-Espada, Yamira; Latorella, Kara A.
2013-01-01
Researchers and developers of NextGen systems can use predictive human performance modeling tools as an initial approach to obtain skilled user performance times analytically, before system testing with users. This paper describes the CogTool models for a two-pilot crew executing two different types of datalink clearance acceptance tasks on two different simulation platforms. The CogTool time estimates for accepting and executing Required Time of Arrival and Interval Management clearances were compared to empirical data observed in video tapes and registered in simulation files. Results indicate no statistically significant difference between empirical data and the CogTool predictions. A population comparison test found no significant differences between the CogTool estimates and the empirical execution times for any of the four test conditions. We discuss modeling caveats and considerations for applying CogTool to crew performance modeling in advanced cockpit environments.
Karim, Mohammad Ehsanul; Petkau, John; Gustafson, Paul; Platt, Robert W; Tremlett, Helen
2018-06-01
In longitudinal studies, if the time-dependent covariates are affected by the past treatment, time-dependent confounding may be present. For a time-to-event response, marginal structural Cox models are frequently used to deal with such confounding. To avoid some of the problems of fitting marginal structural Cox model, the sequential Cox approach has been suggested as an alternative. Although the estimation mechanisms are different, both approaches claim to estimate the causal effect of treatment by appropriately adjusting for time-dependent confounding. We carry out simulation studies to assess the suitability of the sequential Cox approach for analyzing time-to-event data in the presence of a time-dependent covariate that may or may not be a time-dependent confounder. Results from these simulations revealed that the sequential Cox approach is not as effective as marginal structural Cox model in addressing the time-dependent confounding. The sequential Cox approach was also found to be inadequate in the presence of a time-dependent covariate. We propose a modified version of the sequential Cox approach that correctly estimates the treatment effect in both of the above scenarios. All approaches are applied to investigate the impact of beta-interferon treatment in delaying disability progression in the British Columbia Multiple Sclerosis cohort (1995-2008).
The relative effect of noise at different times of day: An analysis of existing survey data
NASA Technical Reports Server (NTRS)
Fields, J. M.
1986-01-01
This report examines survey evidence on the relative impact of noise at different times of day and assesses the survey methodology which produces that evidence. Analyses of the regression of overall (24-hour) annoyance on noise levels in different time periods can provide direct estimates of the value of the parameters in human reaction models which are used in environmental noise indices such as LDN and CNEL. In this report these analyses are based on the original computer tapes containing the responses of 22,000 respondents from ten studies of response to noise in residential areas. The estimates derived from these analyses are found to be so inaccurate that they do not provide useful information for policy or scientific purposes. The possibility that the type of questionnaire item could be biasing the estimates of the time-of-day weightings is considered but not supported by the data. Two alternatives to the conventional noise reaction model (adjusted energy model) are considered but not supported by the data.
Lin, Feng-Chang; Zhu, Jun
2012-01-01
We develop continuous-time models for the analysis of environmental or ecological monitoring data such that subjects are observed at multiple monitoring time points across space. Of particular interest are additive hazards regression models where the baseline hazard function can take on flexible forms. We consider time-varying covariates and take into account spatial dependence via autoregression in space and time. We develop statistical inference for the regression coefficients via partial likelihood. Asymptotic properties, including consistency and asymptotic normality, are established for parameter estimates under suitable regularity conditions. Feasible algorithms utilizing existing statistical software packages are developed for computation. We also consider a simpler additive hazards model with homogeneous baseline hazard and develop hypothesis testing for homogeneity. A simulation study demonstrates that the statistical inference using partial likelihood has sound finite-sample properties and offers a viable alternative to maximum likelihood estimation. For illustration, we analyze data from an ecological study that monitors bark beetle colonization of red pines in a plantation of Wisconsin.
The relative effect of noise at different times of day: An analysis of existing survey data
NASA Astrophysics Data System (ADS)
Fields, J. M.
1986-04-01
This report examines survey evidence on the relative impact of noise at different times of day and assesses the survey methodology which produces that evidence. Analyses of the regression of overall (24-hour) annoyance on noise levels in different time periods can provide direct estimates of the value of the parameters in human reaction models which are used in environmental noise indices such as LDN and CNEL. In this report these analyses are based on the original computer tapes containing the responses of 22,000 respondents from ten studies of response to noise in residential areas. The estimates derived from these analyses are found to be so inaccurate that they do not provide useful information for policy or scientific purposes. The possibility that the type of questionnaire item could be biasing the estimates of the time-of-day weightings is considered but not supported by the data. Two alternatives to the conventional noise reaction model (adjusted energy model) are considered but not supported by the data.
McGarry, Bryony L; Rogers, Harriet J; Knight, Michael J; Jokivarsi, Kimmo T; Sierra, Alejandra; Gröhn, Olli Hj; Kauppinen, Risto A
2016-08-01
Quantitative T2 relaxation magnetic resonance imaging allows estimation of stroke onset time. We aimed to examine the accuracy of quantitative T1 and quantitative T2 relaxation times alone and in combination to provide estimates of stroke onset time in a rat model of permanent focal cerebral ischemia and map the spatial distribution of elevated quantitative T1 and quantitative T2 to assess tissue status. Permanent middle cerebral artery occlusion was induced in Wistar rats. Animals were scanned at 9.4T for quantitative T1, quantitative T2, and Trace of Diffusion Tensor (Dav) up to 4 h post-middle cerebral artery occlusion. Time courses of differentials of quantitative T1 and quantitative T2 in ischemic and non-ischemic contralateral brain tissue (ΔT1, ΔT2) and volumes of tissue with elevated T1 and T2 relaxation times (f1, f2) were determined. TTC staining was used to highlight permanent ischemic damage. ΔT1, ΔT2, f1, f2, and the volume of tissue with both elevated quantitative T1 and quantitative T2 (V(Overlap)) increased with time post-middle cerebral artery occlusion, allowing stroke onset time to be estimated. V(Overlap) provided the most accurate estimate with an uncertainty of ±25 min. At all time points, regions with elevated relaxation times were smaller than areas with Dav-defined ischemia. Stroke onset time can be determined from quantitative T1 and quantitative T2 relaxation times and tissue volumes. Combining quantitative T1 and quantitative T2 provides the most accurate estimate and potentially identifies irreversibly damaged brain tissue. © 2016 World Stroke Organization.
Langtimm, Catherine A.
2008-01-01
Knowing the extent and magnitude of the potential bias can help in making decisions as to what time frame provides the best estimates or the most reliable opportunity to model and test hypotheses about factors affecting survival probability. To assess bias, truncating the capture histories to shorter time frames and reanalyzing the data to compare time-specific estimates may help identify spurious effects. Running simulations that mimic the parameter values and movement conditions in the real situation can provide estimates of standardized bias that can be used to identify those annual estimates that are biased to the point where the 95% confidence intervals are inadequate in describing the uncertainty of the estimates.
Analytical model for real time, noninvasive estimation of blood glucose level.
Adhyapak, Anoop; Sidley, Matthew; Venkataraman, Jayanti
2014-01-01
The paper presents an analytical model to estimate blood glucose level from measurements made non-invasively and in real time by an antenna strapped to a patient's wrist. The RIT ETA Lab research group has shown promising evidence that an antenna's resonant frequency can track, in real time, changes in glucose concentration. Based on an in-vitro study of blood samples of diabetic patients, the paper presents a modified Cole-Cole model that incorporates a factor to represent the change in glucose level. A calibration technique based on the input impedance is discussed, and the results show good estimation as compared to the glucose meter readings. An alternate calibration methodology has been developed that is based on the shift in the antenna resonant frequency, using an equivalent circuit model containing a shunt capacitor to represent the shift in resonant frequency with changing glucose levels. Work in progress includes optimization of the technique with a larger sample of patients.
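The standard Cole-Cole dispersion can be written directly; the multiplicative glucose factor below is a placeholder for the paper's modification, whose actual form is not reproduced, and all parameter values are hypothetical:

import numpy as np

def cole_cole_permittivity(freq_hz, eps_inf, d_eps, tau, alpha, sigma_s, glucose_factor=1.0):
    # Complex relative permittivity: eps_inf + d_eps / (1 + (j*w*tau)^(1-alpha)) + sigma/(j*w*eps0)
    # glucose_factor scales the dispersion amplitude as a stand-in for the glucose-dependent term.
    eps0 = 8.854e-12
    w = 2.0 * np.pi * np.asarray(freq_hz, dtype=float)
    return (eps_inf
            + glucose_factor * d_eps / (1.0 + (1j * w * tau) ** (1.0 - alpha))
            + sigma_s / (1j * w * eps0))

f = np.logspace(8, 10, 5)      # 100 MHz to 10 GHz
print(cole_cole_permittivity(f, eps_inf=4.0, d_eps=50.0, tau=8e-12, alpha=0.1, sigma_s=1.0))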
NASA Astrophysics Data System (ADS)
Zhang, Kai; Batterman, Stuart
2010-05-01
The contribution of vehicular traffic to air pollutant concentrations is often difficult to establish. This paper utilizes both time-series and simulation models to estimate vehicle contributions to pollutant levels near roadways. The time-series model used generalized additive models (GAMs) and fitted pollutant observations to traffic counts and meteorological variables. A one year period (2004) was analyzed on a seasonal basis using hourly measurements of carbon monoxide (CO) and particulate matter less than 2.5 μm in diameter (PM 2.5) monitored near a major highway in Detroit, Michigan, along with hourly traffic counts and local meteorological data. Traffic counts showed statistically significant and approximately linear relationships with CO concentrations in fall, and piecewise linear relationships in spring, summer and winter. The same period was simulated using emission and dispersion models (Motor Vehicle Emissions Factor Model/MOBILE6.2; California Line Source Dispersion Model/CALINE4). CO emissions derived from the GAM were similar, on average, to those estimated by MOBILE6.2. The same analyses for PM 2.5 showed that GAM emission estimates were much higher (by 4-5 times) than the dispersion model results, and that the traffic-PM 2.5 relationship varied seasonally. This analysis suggests that the simulation model performed reasonably well for CO, but it significantly underestimated PM 2.5 concentrations, a likely result of underestimating PM 2.5 emission factors. Comparisons between statistical and simulation models can help identify model deficiencies and improve estimates of vehicle emissions and near-road air quality.
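A minimal GAM sketch on synthetic traffic and meteorological data, using the pygam package as an illustrative stand-in for the GAM implementation used in the study:

import numpy as np
from pygam import LinearGAM, s

rng = np.random.default_rng(4)
n = 500
traffic = rng.uniform(500, 6000, n)          # hourly vehicle counts (hypothetical)
wind = rng.uniform(0.5, 8.0, n)              # wind speed, m/s
temp = rng.uniform(-5, 30, n)                # temperature, deg C
co = 0.2 + 2e-4 * traffic + 0.8 / wind + 0.01 * temp + 0.1 * rng.normal(size=n)   # ppm

X = np.column_stack([traffic, wind, temp])
gam = LinearGAM(s(0) + s(1) + s(2)).fit(X, co)   # smooth terms for traffic and meteorology
gam.summary()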
McShane, Ryan R.; Driscoll, Katelyn P.; Sando, Roy
2017-09-27
Many approaches have been developed for measuring or estimating actual evapotranspiration (ETa), and research over many years has led to the development of remote sensing methods that are reliably reproducible and effective in estimating ETa. Several remote sensing methods can be used to estimate ETa at the high spatial resolution of agricultural fields and the large extent of river basins. More complex remote sensing methods apply an analytical approach to ETa estimation using physically based models of varied complexity that require a combination of ground-based and remote sensing data, and are grounded in the theory behind the surface energy balance model. This report, funded through cooperation with the International Joint Commission, provides an overview of selected remote sensing methods used for estimating water consumed through ETa and focuses on Mapping Evapotranspiration at High Resolution with Internalized Calibration (METRIC) and Operational Simplified Surface Energy Balance (SSEBop), two energy balance models for estimating ETa that are currently applied successfully in the United States. The METRIC model can produce maps of ETa at high spatial resolution (30 meters using Landsat data) for specific areas smaller than several hundred square kilometers in extent, an improvement in practice over methods used more generally at larger scales. Many studies validating METRIC estimates of ETa against measurements from lysimeters have shown model accuracies on daily to seasonal time scales ranging from 85 to 95 percent. The METRIC model is accurate, but the greater complexity of METRIC results in greater data requirements, and the internalized calibration of METRIC leads to greater skill required for implementation. In contrast, SSEBop is a simpler model, having reduced data requirements and greater ease of implementation without a substantial loss of accuracy in estimating ETa. The SSEBop model has been used to produce maps of ETa over very large extents (the conterminous United States) using lower spatial resolution (1 kilometer) Moderate Resolution Imaging Spectroradiometer (MODIS) data. Model accuracies ranging from 80 to 95 percent on daily to annual time scales have been shown in numerous studies that validated ETa estimates from SSEBop against eddy covariance measurements. The METRIC and SSEBop models can incorporate low and high spatial resolution data from MODIS and Landsat, but the high spatiotemporal resolution of ETa estimates using Landsat data over large extents takes immense computing power. Cloud computing is providing an opportunity for processing an increasing amount of geospatial “big data” in a decreasing period of time. For example, Google Earth EngineTM has been used to implement METRIC with automated calibration for regional-scale estimates of ETa using Landsat data. The U.S. Geological Survey also is using Google Earth EngineTM to implement SSEBop for estimating ETa in the United States at a continental scale using Landsat data.
Montone, Verona O; Fraisse, Clyde W; Peres, Natalia A; Sentelhas, Paulo C; Gleason, Mark; Ellis, Michael; Schnabel, Guido
2016-11-01
Leaf wetness duration (LWD) plays a key role in disease development and is often used as an input in disease-warning systems. LWD is often estimated using mathematical models, since measurement by sensors is rarely available and/or reliable. A strawberry disease-warning system called "Strawberry Advisory System" (SAS) is used by growers in Florida, USA, in deciding when to spray their strawberry fields to control anthracnose and Botrytis fruit rot. Currently, SAS is implemented at six locations, where reliable LWD sensors are deployed. A robust LWD model would facilitate SAS expansion from Florida to other regions where reliable LW sensors are not available. The objective of this study was to evaluate the use of mathematical models to estimate LWD and time of spray recommendations in comparison to on site LWD measurements. Specific objectives were to (i) compare model estimated and observed LWD and resulting differences in timing and number of fungicide spray recommendations, (ii) evaluate the effects of weather station sensors precision on LWD models performance, and (iii) compare LWD models performance across four states in the USA. The LWD models evaluated were the classification and regression tree (CART), dew point depression (DPD), number of hours with relative humidity equal or greater than 90 % (NHRH ≥90 %), and Penman-Monteith (P-M). P-M model was expected to have the lowest errors, since it is a physically based and thus portable model. Indeed, the P-M model estimated LWD most accurately (MAE <2 h) at a weather station with high precision sensors but was the least accurate when lower precision sensors of relative humidity and estimated net radiation (based on solar radiation and temperature) were used (MAE = 3.7 h). The CART model was the most robust for estimating LWD and for advising growers on fungicide-spray timing for anthracnose and Botrytis fruit rot control and is therefore the model we recommend for expanding the strawberry disease warning beyond Florida, to other locations where weather stations may be deployed with lower precision sensors, and net radiation observations are not available.
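The NHRH ≥ 90% model named above is the simplest of the four and can be sketched with pandas; the hourly relative-humidity series below is hypothetical, and the CART and Penman-Monteith models are more involved and not shown:

import pandas as pd

def nhrh_lwd(hourly_rh, threshold=90.0):
    # Daily leaf wetness duration (h) estimated as the number of hours with RH >= threshold
    wet = (hourly_rh >= threshold).astype(int)
    return wet.resample("D").sum()

idx = pd.date_range("2015-01-01", periods=48, freq="h")
rh = pd.Series([85 + (i % 24 < 8) * 10 for i in range(48)], index=idx)   # hypothetical hourly RH (%)
print(nhrh_lwd(rh))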
NASA Astrophysics Data System (ADS)
Montone, Verona O.; Fraisse, Clyde W.; Peres, Natalia A.; Sentelhas, Paulo C.; Gleason, Mark; Ellis, Michael; Schnabel, Guido
2016-11-01
Leaf wetness duration (LWD) plays a key role in disease development and is often used as an input in disease-warning systems. LWD is often estimated using mathematical models, since measurement by sensors is rarely available and/or reliable. A strawberry disease-warning system called "Strawberry Advisory System" (SAS) is used by growers in Florida, USA, in deciding when to spray their strawberry fields to control anthracnose and Botrytis fruit rot. Currently, SAS is implemented at six locations, where reliable LWD sensors are deployed. A robust LWD model would facilitate SAS expansion from Florida to other regions where reliable LW sensors are not available. The objective of this study was to evaluate the use of mathematical models to estimate LWD and time of spray recommendations in comparison to on site LWD measurements. Specific objectives were to (i) compare model estimated and observed LWD and resulting differences in timing and number of fungicide spray recommendations, (ii) evaluate the effects of weather station sensors precision on LWD models performance, and (iii) compare LWD models performance across four states in the USA. The LWD models evaluated were the classification and regression tree (CART), dew point depression (DPD), number of hours with relative humidity equal or greater than 90 % (NHRH ≥90 %), and Penman-Monteith (P-M). P-M model was expected to have the lowest errors, since it is a physically based and thus portable model. Indeed, the P-M model estimated LWD most accurately (MAE <2 h) at a weather station with high precision sensors but was the least accurate when lower precision sensors of relative humidity and estimated net radiation (based on solar radiation and temperature) were used (MAE = 3.7 h). The CART model was the most robust for estimating LWD and for advising growers on fungicide-spray timing for anthracnose and Botrytis fruit rot control and is therefore the model we recommend for expanding the strawberry disease warning beyond Florida, to other locations where weather stations may be deployed with lower precision sensors, and net radiation observations are not available.
NASA Technical Reports Server (NTRS)
Ratnayake, Nalin A.; Koshimoto, Ed T.; Taylor, Brian R.
2011-01-01
The problem of parameter estimation on hybrid-wing-body type aircraft is complicated by the fact that many design candidates for such aircraft involve a large number of aerodynamic control effectors that act in coplanar motion. This fact adds to the complexity already present in the parameter estimation problem for any aircraft with a closed-loop control system. Decorrelation of system inputs must be performed in order to ascertain individual surface derivatives with any sort of mathematical confidence. Non-standard control surface configurations, such as clamshell surfaces and drag-rudder modes, further complicate the modeling task. In this paper, asymmetric, single-surface maneuvers are used to excite multiple axes of aircraft motion simultaneously. Time history reconstructions of the moment coefficients computed by the solved regression models are then compared to each other in order to assess relative model accuracy. The reduced flight-test time required for inner-surface parameter estimation using multi-axis methods was found to come at the cost of slightly reduced accuracy and statistical confidence for linear regression methods. Since the multi-axis maneuvers captured parameter estimates similar to both longitudinal and lateral-directional maneuvers combined, the number of test points required for the inner, aileron-like surfaces could in theory have been reduced by 50%. While trends were similar, however, individual parameters estimated by a multi-axis model typically differed from those estimated by a single-axis model by an average absolute difference of roughly 15-20%, with decreased statistical significance. The multi-axis model exhibited an increase in overall fit error of roughly 1-5% for the linear regression estimates with respect to the single-axis model, when each was applied to the flight data designed for it.
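As a hedged illustration of the kind of equation-error linear regression used to recover surface derivatives from flight data, the sketch below fits a pitching-moment model to hypothetical arrays. The regressor set, variable names, and data handling are assumptions, not the paper's actual formulation.

```python
import numpy as np

# Hypothetical flight-data arrays (one sample per time step):
# alpha            - angle of attack
# delta_l, delta_r - left/right surface deflections
# Cm               - pitching-moment coefficient reconstructed from measured responses
def fit_moment_derivatives(alpha, delta_l, delta_r, Cm):
    """Equation-error least squares: Cm ~ Cm0 + Cm_alpha*alpha + Cm_dl*delta_l + Cm_dr*delta_r."""
    X = np.column_stack([np.ones_like(alpha), alpha, delta_l, delta_r])
    coeffs, residuals, *_ = np.linalg.lstsq(X, Cm, rcond=None)
    fit_rms = np.sqrt(residuals[0] / len(Cm)) if residuals.size else 0.0
    return coeffs, fit_rms  # [Cm0, Cm_alpha, Cm_delta_left, Cm_delta_right], RMS fit error
```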
Li, Hui-lin; Han, Yong; Cai, Zu-cong
2008-04-01
Ammonia volatilization from a Typic Gleyi-stagnic Anthrosol following application of common urea and controlled-release urea (LP-S100) fertilizers during the rice season in paddy soil of the Taihu region of China was modeled with the Jayaweera-Mikkelsen model. Lysimeter experiments detected a large difference in ammonia volatilization between the two fertilizer types during the rice season. Nitrogen loss via ammonia volatilization after conventional application of common urea was 29%-35%, while only 5% of the controlled-release urea-N was volatilized. The Jayaweera-Mikkelsen model overestimated the total amount of ammonia volatilization over the whole season, and the deviation from the measured data was largest for the higher volatilization from the common urea fertilizer. The estimates were 2.95-4.19 times the measured values for the common urea treatments and 1.19-1.40 times the measured values for the LP-S100 treatments. The order-of-magnitude quotient, one of the indicators used to evaluate the model estimation, was 0.8, indicating that the model estimates needed improvement. After a sensitivity analysis of the five model parameters, the NH4+-N concentration parameter was amended by introducing a limiting term into the model. The amended model produced better results, with the ratio of estimated to measured data reduced to 1.12-1.28. Algal activity in the paddy field influenced ammonia volatilization and may explain the failure of the original model's estimates.
Modeling qRT-PCR dynamics with application to cancer biomarker quantification.
Chervoneva, Inna; Freydin, Boris; Hyslop, Terry; Waldman, Scott A
2017-01-01
Quantitative reverse transcription polymerase chain reaction (qRT-PCR) is widely used for molecular diagnostics and evaluating prognosis in cancer. The utility of mRNA expression biomarkers relies heavily on the accuracy and precision of quantification, which is still challenging for low abundance transcripts. The critical step for quantification is accurate estimation of efficiency needed for computing a relative qRT-PCR expression. We propose a new approach to estimating qRT-PCR efficiency based on modeling dynamics of polymerase chain reaction amplification. In contrast, only models for fluorescence intensity as a function of polymerase chain reaction cycle have been used so far for quantification. The dynamics of qRT-PCR efficiency is modeled using an ordinary differential equation model, and the fitted ordinary differential equation model is used to obtain effective polymerase chain reaction efficiency estimates needed for efficiency-adjusted quantification. The proposed new qRT-PCR efficiency estimates were used to quantify GUCY2C (Guanylate Cyclase 2C) mRNA expression in the blood of colorectal cancer patients. Time to recurrence and GUCY2C expression ratios were analyzed in a joint model for survival and longitudinal outcomes. The joint model with GUCY2C quantified using the proposed polymerase chain reaction efficiency estimates provided clinically meaningful results for association between time to recurrence and longitudinal trends in GUCY2C expression.
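The ODE-based efficiency estimates described above ultimately feed an efficiency-adjusted expression ratio. The block below states the standard efficiency-adjusted relative-quantification formula as one plausible form of that adjustment; it is not the authors' ODE model, and the symbols are generic.

```latex
% Standard efficiency-adjusted relative quantification (not the ODE model itself):
% E is the per-cycle amplification efficiency (E = 1 for perfect doubling) and
% Cq the quantification-cycle number for the target and reference transcripts.
\[
  R \;=\; \frac{(1 + E_{\mathrm{target}})^{\,\Delta Cq_{\mathrm{target}}}}
               {(1 + E_{\mathrm{ref}})^{\,\Delta Cq_{\mathrm{ref}}}},
  \qquad
  \Delta Cq \;=\; Cq_{\mathrm{control}} - Cq_{\mathrm{sample}} .
\]
```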
Temporal analysis of the October 1989 proton flare using computerized anatomical models
NASA Technical Reports Server (NTRS)
Simonsen, L. C.; Cucinotta, F. A.; Atwell, W.; Nealy, J. E.
1993-01-01
The GOES-7 time history data of hourly averaged integral proton fluxes at various particle kinetic energies are analyzed for the solar proton event that occurred between October 19 and 29, 1989. By analyzing the time history data, the dose rates, which may vary over many orders of magnitude in the early phases of the flare, can be estimated, as well as the cumulative dose as a function of time. Basic transport calculations are coupled with detailed body organ thickness distributions from computerized anatomical models to estimate dose rates and cumulative doses to 20 critical body organs. For a 5-cm-thick water shield, cumulative skin, eye, and blood-forming-organ dose equivalents of 1.27, 1.23, and 0.41 Sv, respectively, are estimated. These results are approximately 40-50 percent less than the widely used 0- and 5-cm slab dose estimates. The risks of cancer incidence and mortality are also estimated for astronauts protected by various water shield thicknesses.
Baum, Rex L.; Godt, Jonathan W.; Savage, William Z.
2010-01-01
Shallow rainfall-induced landslides commonly occur under conditions of transient infiltration into initially unsaturated soils. In an effort to predict the timing and location of such landslides, we developed a model of the infiltration process using a two-layer system that consists of an unsaturated zone above a saturated zone and implemented this model in a geographic information system (GIS) framework. The model links analytical solutions for transient, unsaturated, vertical infiltration above the water table to pressure-diffusion solutions for pressure changes below the water table. The solutions are coupled through a transient water table that rises as water accumulates at the base of the unsaturated zone. This scheme, though limited to simplified soil-water characteristics and moist initial conditions, greatly improves computational efficiency over numerical models in spatially distributed modeling applications. Pore pressures computed by these coupled models are subsequently used in one-dimensional slope-stability computations to estimate the timing and locations of slope failures. Applied over a digital landscape near Seattle, Washington, for an hourly rainfall history known to trigger shallow landslides, the model computes a factor of safety for each grid cell at any time during a rainstorm. The unsaturated layer attenuates and delays the rainfall-induced pore-pressure response of the model at depth, consistent with observations at an instrumented hillside near Edmonds, Washington. This attenuation results in realistic estimates of timing for the onset of slope instability (7 h earlier than observed landslides, on average). By considering the spatial distribution of physical properties, the model predicts the primary source areas of landslides.
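A common one-dimensional infinite-slope factor-of-safety expression of the type used in such distributed stability computations is sketched below; the notation is generic and the exact form used by the authors may differ.

```latex
% Infinite-slope factor of safety with transient pore pressure (generic symbols:
% c' cohesion, phi' friction angle, alpha slope angle, gamma_s soil unit weight,
% gamma_w water unit weight, Z depth, psi(Z,t) pressure head from the infiltration model):
\[
  F_S(Z,t) \;=\; \frac{\tan\varphi'}{\tan\alpha}
  \;+\; \frac{c' - \psi(Z,t)\,\gamma_w \tan\varphi'}{\gamma_s\, Z \sin\alpha \cos\alpha},
  \qquad F_S < 1 \;\Rightarrow\; \text{predicted instability}.
\]
```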
Small Sample Properties of Bayesian Multivariate Autoregressive Time Series Models
ERIC Educational Resources Information Center
Price, Larry R.
2012-01-01
The aim of this study was to compare the small sample (N = 1, 3, 5, 10, 15) performance of a Bayesian multivariate vector autoregressive (BVAR-SEM) time series model relative to frequentist power and parameter estimation bias. A multivariate autoregressive model was developed based on correlated autoregressive time series vectors of varying…
Time Domain Estimation of Arterial Parameters using the Windkessel Model and the Monte Carlo Method
NASA Astrophysics Data System (ADS)
Gostuski, Vladimir; Pastore, Ignacio; Rodriguez Palacios, Gaspar; Vaca Diez, Gustavo; Moscoso-Vasquez, H. Marcela; Risk, Marcelo
2016-04-01
Numerous parameter estimation techniques exist for characterizing the arterial system using electrical circuit analogs. However, they are often limited by their requirements and usually high computational burden. Therefore, a new method for estimating arterial parameters based on Monte Carlo simulation is proposed. A three-element Windkessel model was used to represent the arterial system. The approach was to reduce the error between the calculated and physiological aortic pressure by randomly generating arterial parameter values, while keeping the arterial resistance constant. This last value was obtained for each subject using the arterial flow, and was a necessary consideration in order to obtain a unique set of values for the arterial compliance and peripheral resistance. The estimation technique was applied to in vivo data containing steady beats in mongrel dogs, and it reliably estimated Windkessel arterial parameters. Further, this method appears to be computationally efficient for on-line time-domain estimation of these parameters.
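A minimal sketch of the Monte Carlo approach described above is given below, assuming a forward-Euler simulation of the three-element Windkessel and uniform sampling ranges for compliance and peripheral resistance. The parameter ranges, units, and function names are illustrative assumptions, not values from the study.

```python
import numpy as np

def simulate_wk3(q, dt, Rc, Rp, C, p0):
    """Forward-Euler simulation of the three-element Windkessel:
       dP/dt = Rc*dQ/dt + (1 + Rc/Rp)*Q/C - P/(Rp*C)."""
    p = np.empty_like(q)
    p[0] = p0
    dq = np.gradient(q, dt)
    for k in range(len(q) - 1):
        dp = Rc * dq[k] + (1.0 + Rc / Rp) * q[k] / C - p[k] / (Rp * C)
        p[k + 1] = p[k] + dt * dp
    return p

def estimate_wk3_monte_carlo(q, p_meas, dt, Rc, n_draws=5000, seed=0):
    """Randomly draw (Rp, C), keep the pair minimizing RMS pressure error;
       Rc is held fixed (e.g., derived from the measured flow), as in the study."""
    rng = np.random.default_rng(seed)
    best = (np.inf, None, None)
    for Rp, C in zip(rng.uniform(0.5, 3.0, n_draws),    # mmHg*s/mL (assumed range)
                     rng.uniform(0.2, 3.0, n_draws)):   # mL/mmHg   (assumed range)
        p_sim = simulate_wk3(q, dt, Rc, Rp, C, p_meas[0])
        err = np.sqrt(np.mean((p_sim - p_meas) ** 2))
        if err < best[0]:
            best = (err, Rp, C)
    return best  # (rms_error, Rp_hat, C_hat)
```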
NASA Astrophysics Data System (ADS)
Ma, Hongliang; Xu, Shijie
2014-09-01
This paper presents an improved real-time sequential filter (IRTSF) for magnetometer-only attitude and angular velocity estimation of a spacecraft during attitude changes (including fast, large-angle attitude maneuvers, rapid spin, or uncontrolled tumbling). In this new magnetometer-only attitude determination technique, both the attitude dynamics equation and the first time derivative of the measured magnetic field vector are incorporated directly into the filtering equations, building on the traditional gyroless single-vector attitude determination method and the real-time sequential filter (RTSF) for magnetometer-only attitude estimation. The process noise model of the IRTSF includes the attitude kinematics and dynamics equations, and its measurement model consists of the magnetic field vector and its first time derivative. The observability of the IRTSF for spacecraft with small or large angular velocity changes is evaluated by an improved Lie differentiation, and the degrees of observability of the IRTSF for different initial estimation errors are analyzed by the condition number and a solved covariance matrix. Numerical simulation results indicate that: (1) the attitude and angular velocity of the spacecraft can be estimated with sufficient accuracy using the IRTSF from magnetometer-only data; (2) compared with the RTSF, the estimation accuracies and observability degrees of attitude and angular velocity using the IRTSF from magnetometer-only data are both improved; and (3) the IRTSF remains observable for any initial state estimation error vector, demonstrating its universality.
Space-Time Error Representation and Estimation in Navier-Stokes Calculations
NASA Technical Reports Server (NTRS)
Barth, Timothy J.
2006-01-01
The mathematical framework for a posteriori error estimation of functionals elucidated by Eriksson et al. [7] and Becker and Rannacher [3] is revisited in a space-time context. Using these theories, a hierarchy of exact and approximate error representation formulas is presented for use in error estimation and mesh adaptivity. Numerical space-time results for simple model problems as well as compressible Navier-Stokes flow at Re = 300 over a 2D circular cylinder are then presented to demonstrate elements of the error representation theory for time-dependent problems.
A removal model for estimating detection probabilities from point-count surveys
Farnsworth, G.L.; Pollock, K.H.; Nichols, J.D.; Simons, T.R.; Hines, J.E.; Sauer, J.R.
2000-01-01
We adapted a removal model to estimate detection probability during point-count surveys. The model assumes that one factor influencing detection during point counts is the singing frequency of birds. This may be true for surveys recording forest songbirds when most detections are by sound. The model requires counts to be divided into several time intervals. We used time intervals of 2, 5, and 10 min to develop a maximum-likelihood estimator for the detectability of birds during such surveys. We applied this technique to data from bird surveys conducted in Great Smoky Mountains National Park. We used model selection criteria to identify whether detection probabilities varied among species, throughout the morning, throughout the season, and among different observers. The overall detection probability for all birds was 75%. We found differences in detection probability among species. Species that sing frequently, such as Winter Wren and Acadian Flycatcher, had high detection probabilities (about 90%), and species that call infrequently, such as Pileated Woodpecker, had low detection probabilities (36%). We also found that detection probabilities varied with the time of day for some species (e.g., thrushes) and between observers for other species. This method of estimating detectability during point-count surveys offers a promising new approach to using count data to address questions of bird abundance, density, and population trends.
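A simplified version of such a removal-model estimator, assuming a constant per-minute detection rate and the cumulative interval endpoints mentioned above, might look like the sketch below. The likelihood form and the example counts are illustrative assumptions, not the paper's full model-selection framework.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def removal_detection_probability(counts, breaks):
    """Removal-model MLE for point counts split into time intervals.
    counts: first detections per interval, e.g. [n_0_2, n_2_5, n_5_10]
    breaks: cumulative interval endpoints in minutes, e.g. [2, 5, 10]
    Assumes a constant per-minute detection rate c, so a bird present is
    first detected by time t with probability 1 - exp(-c*t)."""
    t = np.concatenate(([0.0], np.asarray(breaks, dtype=float)))
    n = np.asarray(counts, dtype=float)

    def neg_log_lik(c):
        cdf = 1.0 - np.exp(-c * t)          # P(first detection by time t)
        cell = np.diff(cdf) / cdf[-1]       # conditional on being detected at all
        return -np.sum(n * np.log(cell))

    c_hat = minimize_scalar(neg_log_lik, bounds=(1e-4, 5.0), method="bounded").x
    return 1.0 - np.exp(-c_hat * t[-1])     # detection probability over the full count

# Hypothetical example: 40, 12, and 8 first detections in the 0-2, 2-5, 5-10 min intervals
# p_hat = removal_detection_probability([40, 12, 8], [2, 5, 10])
```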
NASA Astrophysics Data System (ADS)
Scharnagl, B.; Vrugt, J. A.; Vereecken, H.; Herbst, M.
2010-02-01
A major drawback of current soil organic carbon (SOC) models is that their conceptually defined pools do not necessarily correspond to measurable SOC fractions in real practice. This not only impairs our ability to rigorously evaluate SOC models but also makes it difficult to derive accurate initial states of the individual carbon pools. In this study, we tested the feasibility of inverse modelling for estimating pools in the Rothamsted carbon model (ROTHC) using mineralization rates observed during incubation experiments. This inverse approach may provide an alternative to existing SOC fractionation methods. To illustrate our approach, we used a time series of synthetically generated mineralization rates using the ROTHC model. We adopted a Bayesian approach using the recently developed DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm to infer probability density functions of the various carbon pools at the start of incubation. The Kullback-Leibler divergence was used to quantify the information content of the mineralization rate data. Our results indicate that measured mineralization rates generally provided sufficient information to reliably estimate all carbon pools in the ROTHC model. The incubation time necessary to appropriately constrain all pools was about 900 days. The use of prior information on microbial biomass carbon significantly reduced the uncertainty of the initial carbon pools, decreasing the required incubation time to about 600 days. Simultaneous estimation of initial carbon pools and decomposition rate constants significantly increased the uncertainty of the carbon pools. This effect was most pronounced for the intermediate and slow pools. Altogether, our results demonstrate that it is particularly difficult to derive reasonable estimates of the humified organic matter pool and the inert organic matter pool from inverse modelling of mineralization rates observed during incubation experiments.
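A generic first-order multi-pool expression of the kind being inverted here is sketched below; it is illustrative only, since the actual ROTHC pool structure and rate modifiers are richer than this.

```latex
% First-order multi-pool decomposition of the kind inverted from incubation data:
\[
  R(t) \;=\; \sum_{i=1}^{n} k_i\, C_i(0)\, e^{-k_i t},
  \qquad
  C_i(t) \;=\; C_i(0)\, e^{-k_i t},
\]
% where R(t) is the observed mineralization rate, C_i(0) the initial size of pool i,
% and k_i its decomposition rate constant; the inverse problem infers C_i(0)
% (and possibly the k_i) from measurements of R(t).
```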
FISM 2.0: Improved Spectral Range, Resolution, and Accuracy
NASA Technical Reports Server (NTRS)
Chamberlin, Phillip C.
2012-01-01
The Flare Irradiance Spectral Model (FISM) was first released in 2005 to provide accurate estimates of the solar VUV (0.1-190 nm) irradiance to the space weather community. This model was based on TIMED SEE as well as UARS and SORCE SOLSTICE measurements, and was the first model to include a 60-second temporal variation to estimate the variations due to solar flares. Along with flares, FISM also estimates the traditional solar cycle and solar rotational variations over months and decades back to 1947. This model has been highly successful in providing driving inputs to study the effect of solar irradiance variations on the Earth's ionosphere and thermosphere, lunar dust charging, as well as the Martian ionosphere. The second version of FISM, FISM2, is currently being updated to be based on the more accurate SDO/EVE data, which will provide much more accurate estimates in the 0.1-105 nm range, as well as extending the 'daily' model variation up to 300 nm based on the SOLSTICE measurements. With the spectral resolution of SDO/EVE, along with SOLSTICE and the TIMED and SORCE XPS 'model' products, the entire range from 0.1 to 300 nm will also be available at 0.1 nm resolution, allowing FISM2 to be improved to similar 0.1 nm spectral bins. FISM will also have a TSI component that will estimate the total radiated energy during flares based on the few TSI flares observed to date. Presented here will be initial results of the FISM2 modeling efforts, as well as some challenges that will need to be overcome in order for FISM2 to accurately model the solar variations on time scales of seconds to decades.
Wiederholt, Ruscena; Bagstad, Kenneth J.; McCracken, Gary F.; Diffendorfer, Jay E.; Loomis, John B.; Semmens, Darius J.; Russell, Amy L.; Sansone, Chris; LaSharr, Kelsie; Cryan, Paul; Reynoso, Claudia; Medellin, Rodrigo A.; Lopez-Hoffman, Laura
2017-01-01
Given rapid changes in agricultural practice, it is critical to understand how alterations in ecological, technological, and economic conditions over time and space impact ecosystem services in agroecosystems. Here, we present a benefit transfer approach to quantify cotton pest-control services provided by a generalist predator, the Mexican free-tailed bat (Tadarida brasiliensis mexicana), in the southwestern United States. We show that pest-control estimates derived using (1) a compound spatial–temporal model – which incorporates spatial and temporal variability in crop pest-control service values – are likely to exhibit less error than those derived using (2) a simple-spatial model (i.e., a model that extrapolates values derived for one area directly, without adjustment, to other areas) or (3) a simple-temporal model (i.e., a model that extrapolates data from a few points in time over longer time periods). Using our compound spatial–temporal approach, the annualized pest-control value was $12.2 million, in contrast to an estimate of $70.1 million (5.7 times greater), obtained from the simple-spatial approach. Using estimates from one year (simple-temporal approach) revealed large value differences (0.4 times smaller to 2 times greater). Finally, we present a detailed protocol for valuing pest-control services, which can be used to develop robust pest-control transfer functions for generalist predators in agroecosystems.
GPS constraints on M 7-8 earthquake recurrence times for the New Madrid seismic zone
Stuart, W.D.
2001-01-01
Newman et al. (1999) estimate the time interval between the 1811-1812 earthquake sequence near New Madrid, Missouri and a future similar sequence to be at least 2,500 years, an interval significantly longer than other recently published estimates. To calculate the recurrence time, they assume that slip on a vertical half-plane at depth contributes to the current interseismic motion of GPS benchmarks. Compared to other plausible fault models, the half-plane model gives nearly the maximum rate of ground motion for the same interseismic slip rate. Alternative models with smaller interseismic fault slip area can satisfy the present GPS data by having higher slip rate and thus can have earthquake recurrence times much less than 2,500 years.
Blind identification of nonlinear models with non-Gaussian inputs
NASA Astrophysics Data System (ADS)
Prakriya, Shankar; Pasupathy, Subbarayan; Hatzinakos, Dimitrios
1995-12-01
Some methods are proposed for the blind identification of finite-order discrete-time nonlinear models with non-Gaussian circular inputs. The nonlinear models consist of two finite memory linear time invariant (LTI) filters separated by a zero-memory nonlinearity (ZMNL) of the polynomial type (the LTI-ZMNL-LTI models). The linear subsystems are allowed to be of non-minimum phase (NMP). The methods base their estimates of the impulse responses on slices of the (N+1)th-order polyspectra of the output sequence. It is shown that the identification of LTI-ZMNL systems requires only a 1-D moment or polyspectral slice. The coefficients of the ZMNL are not estimated, and need not be known. The order of the nonlinearity can, in theory, be estimated from the received signal. These methods possess several noise and interference suppression characteristics, and have applications in modeling nonlinearly amplified QAM/QPSK signals in digital satellite and microwave communications.
Palazoğlu, T K; Gökmen, V
2008-04-01
In this study, a numerical model was developed to simulate frying of potato strips and estimate acrylamide levels in French fries. Heat and mass transfer parameters determined during frying of potato strips and the formation and degradation kinetic parameters of acrylamide obtained with a sugar-asparagine model system were incorporated within the model. The effect of reducing sugar content (0.3 to 2.15 g/100 g dry matter), strip thickness (8.5 × 8.5 mm and 10 × 10 mm), and frying time (3, 4, 5, and 6 min) and temperature (150, 170, and 190 °C) on resultant acrylamide level in French fries was investigated both numerically and experimentally. The model appeared to closely estimate the acrylamide contents, and thereby may potentially save considerable time, money, and effort during the stages of process design and optimization.
Modeling Sea-Level Change using Errors-in-Variables Integrated Gaussian Processes
NASA Astrophysics Data System (ADS)
Cahill, Niamh; Parnell, Andrew; Kemp, Andrew; Horton, Benjamin
2014-05-01
We perform Bayesian inference on historical and late Holocene (last 2000 years) rates of sea-level change. The data that form the input to our model are tide-gauge measurements and proxy reconstructions from cores of coastal sediment. To accurately estimate rates of sea-level change and reliably compare tide-gauge compilations with proxy reconstructions it is necessary to account for the uncertainties that characterize each dataset. Many previous studies used simple linear regression models (most commonly polynomial regression) resulting in overly precise rate estimates. The model we propose uses an integrated Gaussian process approach, where a Gaussian process prior is placed on the rate of sea-level change and the data itself is modeled as the integral of this rate process. The non-parametric Gaussian process model is known to be well suited to modeling time series data. The advantage of using an integrated Gaussian process is that it allows for the direct estimation of the derivative of a one dimensional curve. The derivative at a particular time point will be representative of the rate of sea level change at that time point. The tide gauge and proxy data are complicated by multiple sources of uncertainty, some of which arise as part of the data collection exercise. Most notably, the proxy reconstructions include temporal uncertainty from dating of the sediment core using techniques such as radiocarbon. As a result of this, the integrated Gaussian process model is set in an errors-in-variables (EIV) framework so as to take account of this temporal uncertainty. The data must be corrected for land-level change known as glacio-isostatic adjustment (GIA) as it is important to isolate the climate-related sea-level signal. The correction for GIA introduces covariance between individual age and sea level observations into the model. The proposed integrated Gaussian process model allows for the estimation of instantaneous rates of sea-level change and accounts for all available sources of uncertainty in tide-gauge and proxy-reconstruction data. Our response variable is sea level after correction for GIA. By embedding the integrated process in an errors-in-variables (EIV) framework, and removing the estimate of GIA, we can quantify rates with better estimates of uncertainty than previously possible. The model provides a flexible fit and enables us to estimate rates of change at any given time point, thus observing how rates have been evolving from the past to present day.
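A compact, generic statement of the integrated-GP/EIV structure described above is sketched below; the notation is ours and omits the GIA-induced covariance terms.

```latex
% Integrated Gaussian process with errors in variables (generic notation):
\[
  r(t) \sim \mathcal{GP}\!\left(0,\, k(t,t')\right), \qquad
  f(t) \;=\; f(t_0) + \int_{t_0}^{t} r(s)\, \mathrm{d}s,
\]
\[
  y_j \;=\; f(\tilde{t}_j) + \varepsilon_j, \qquad
  \tilde{t}_j \;=\; t_j + \delta_j,
\]
% where y_j is a GIA-corrected sea-level observation, epsilon_j its vertical error,
% and delta_j the dating (temporal) error handled by the EIV formulation; the
% posterior of r(t) gives instantaneous rates of sea-level change.
```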
Williams, K.A.; Frederick, P.C.; Nichols, J.D.
2011-01-01
Many populations of animals are fluid in both space and time, making estimation of numbers difficult. Much attention has been devoted to estimation of bias in detection of animals that are present at the time of survey. However, an equally important problem is estimation of population size when all animals are not present on all survey occasions. Here, we showcase use of the superpopulation approach to capture-recapture modeling for estimating populations where group membership is asynchronous, and where considerable overlap in group membership among sampling occasions may occur. We estimate total population size of long-legged wading bird (Great Egret and White Ibis) breeding colonies from aerial observations of individually identifiable nests at various times in the nesting season. Initiation and termination of nests were analogous to entry and departure from a population. Estimates using the superpopulation approach were 47-382% larger than peak aerial counts of the same colonies. Our results indicate that the use of the superpopulation approach to model nesting asynchrony provides a considerably less biased and more efficient estimate of nesting activity than traditional methods. We suggest that this approach may also be used to derive population estimates in a variety of situations where group membership is fluid. © 2011 by the Ecological Society of America.
Syamlal, Madhava; Celik, Ismail B.; Benyahia, Sofiane
2017-07-12
The two-fluid model (TFM) has become a tool for the design and troubleshooting of industrial fluidized bed reactors. To use TFM for scale up with confidence, the uncertainty in its predictions must be quantified. Here, we study two sources of uncertainty: discretization and time-averaging. First, we show that successive grid refinement may not yield grid-independent transient quantities, including cross-section–averaged quantities. Successive grid refinement would yield grid-independent time-averaged quantities on sufficiently fine grids. A Richardson extrapolation can then be used to estimate the discretization error, and the grid convergence index gives an estimate of the uncertainty. Richardson extrapolation may not work for industrial-scale simulations that use coarse grids. We present an alternative method for coarse grids and assess its ability to estimate the discretization error. Second, we assess two methods (autocorrelation and binning) and find that the autocorrelation method is more reliable for estimating the uncertainty introduced by time-averaging TFM data.
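The standard three-grid Richardson extrapolation and grid convergence index (GCI) mentioned above can be sketched as follows; the safety factor and example numbers are conventional textbook choices, not values from this study.

```python
import numpy as np

def richardson_gci(f_fine, f_med, f_coarse, r, safety_factor=1.25):
    """Three-grid Richardson extrapolation with constant refinement ratio r.
    Returns the observed order p, the extrapolated value, and the fine-grid
    grid convergence index (GCI) as a fractional uncertainty estimate."""
    p = np.log(abs(f_coarse - f_med) / abs(f_med - f_fine)) / np.log(r)  # observed order
    f_exact = f_fine + (f_fine - f_med) / (r**p - 1.0)                   # extrapolated value
    rel_err = abs((f_fine - f_med) / f_fine)                             # fine-medium relative change
    gci = safety_factor * rel_err / (r**p - 1.0)                         # fine-grid GCI
    return p, f_exact, gci

# Hypothetical time-averaged quantity from three grids refined by r = 2:
# p, f_ex, gci = richardson_gci(0.412, 0.405, 0.390, r=2.0)
```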
Estimating and Comparing Dam Deformation Using Classical and GNSS Techniques.
Barzaghi, Riccardo; Cazzaniga, Noemi Emanuela; De Gaetani, Carlo Iapige; Pinto, Livio; Tornatore, Vincenza
2018-03-02
Global Navigation Satellite Systems (GNSS) receivers are nowadays commonly used in monitoring applications, e.g., in estimating crustal and infrastructure displacements. This is basically due to the recent improvements in GNSS instruments and methodologies that allow high-precision positioning, 24 h availability, and semiautomatic data processing. In this paper, GNSS-estimated displacements on a dam structure have been analyzed and compared with pendulum data. This study has been carried out for the Eleonora D'Arborea (Cantoniera) dam, which is in Sardinia. Time series of pendulum and GNSS data over a time span of 2.5 years have been aligned so as to be comparable. Analytical models fitting these time series have been estimated and compared. Those models were able to properly fit pendulum data and GNSS data, with standard deviations of residuals smaller than one millimeter. These encouraging results led to the conclusion that the GNSS technique can be profitably applied to dam monitoring, allowing a denser description, both in space and time, of the dam displacements than the one based on pendulum observations.
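One simple form the fitted analytical models might take is a linear trend plus an annual harmonic; the sketch below, with hypothetical displacement arrays, is illustrative only and is not the authors' actual model.

```python
import numpy as np

def fit_trend_seasonal(t_years, displacement_mm):
    """Least-squares fit of a linear trend plus an annual harmonic:
       d(t) = a + b*t + c*sin(2*pi*t) + d*cos(2*pi*t)."""
    X = np.column_stack([np.ones_like(t_years), t_years,
                         np.sin(2 * np.pi * t_years), np.cos(2 * np.pi * t_years)])
    coeffs, *_ = np.linalg.lstsq(X, displacement_mm, rcond=None)
    residuals = displacement_mm - X @ coeffs
    return coeffs, residuals.std()   # model parameters and residual standard deviation (mm)
```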
Safety analytics for integrating crash frequency and real-time risk modeling for expressways.
Wang, Ling; Abdel-Aty, Mohamed; Lee, Jaeyoung
2017-07-01
To find crash contributing factors, there have been numerous crash frequency and real-time safety studies, but such studies have been conducted independently. Until this point, no researcher has simultaneously analyzed crash frequency and real-time crash risk to test whether integrating them could better explain crash occurrence. Therefore, this study aims at integrating crash frequency and real-time safety analyses using expressway data. A Bayesian integrated model and a non-integrated model were built: the integrated model linked the crash frequency and the real-time models by adding the logarithm of the estimated expected crash frequency to the real-time model; the non-integrated model independently estimated the crash frequency and the real-time crash risk. The results showed that the integrated model outperformed the non-integrated model, as it provided much better model results for both the crash frequency and the real-time models. This result indicated that the added component, the logarithm of the expected crash frequency, successfully linked and provided useful information to the two models. This study uncovered a few variables that are not typically included in crash frequency analysis. For example, the average daily standard deviation of speed, which was aggregated based on speed at 1-min intervals, had a positive effect on crash frequency. In conclusion, this study suggested a methodology to improve the crash frequency and real-time models by integrating them, and it might inspire future researchers to understand crash mechanisms better. Copyright © 2017 Elsevier Ltd. All rights reserved.
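A schematic of how the two modeling levels can be linked, with the log of the expected crash frequency entering the real-time model as an extra covariate, is sketched below in generic notation; it is not the exact Bayesian specification used in the study.

```latex
% Schematic linkage between the crash-frequency and real-time levels
% (generic notation, not the exact model specification):
\[
  N_i \sim \mathrm{Poisson}(\mu_i), \qquad
  \log \mu_i \;=\; \mathbf{x}_i^{\top}\boldsymbol{\beta},
\]
\[
  \mathrm{logit}\, P(\text{crash}_{it}) \;=\; \mathbf{z}_{it}^{\top}\boldsymbol{\gamma}
  \;+\; \theta \log \hat{\mu}_i ,
\]
% where N_i is the crash count for segment i, z_it the real-time traffic covariates,
% and the log of the expected crash frequency enters the real-time model as an
% additional covariate linking the two levels.
```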
Iwata, Michio; Miyawaki-Kuwakado, Atsuko; Yoshida, Erika; Komori, Soichiro; Shiraishi, Fumihide
2018-02-02
In a mathematical model, estimation of parameters from time-series data of metabolic concentrations in cells is a challenging task. However, it seems that a promising approach for such estimation has not yet been established. Biochemical Systems Theory (BST) is a powerful methodology to construct a power-law type model for a given metabolic reaction system and to then characterize it efficiently. In this paper, we discuss the use of an S-system root-finding method (S-system method) to estimate parameters from time-series data of metabolite concentrations. We demonstrate that the S-system method is superior to the Newton-Raphson method in terms of the convergence region and iteration number. We also investigate the usefulness of a translocation technique and a complex-step differentiation method toward the practical application of the S-system method. The results indicate that the S-system method is useful to construct mathematical models for a variety of metabolic reaction networks. Copyright © 2018 Elsevier Inc. All rights reserved.
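For context, the generic S-system (power-law) form on which such root finding operates is sketched below; the notation is the standard BST form rather than anything specific to this paper.

```latex
% Generic S-system (power-law) form used in Biochemical Systems Theory:
\[
  \frac{\mathrm{d}X_i}{\mathrm{d}t}
  \;=\; \alpha_i \prod_{j=1}^{n} X_j^{\,g_{ij}}
  \;-\; \beta_i \prod_{j=1}^{n} X_j^{\,h_{ij}},
  \qquad i = 1,\dots,n .
\]
% Setting the right-hand side to zero and taking logarithms (Y_j = ln X_j) yields a
% linear system in Y, which is what makes S-system root finding attractive compared
% with Newton-Raphson iteration on the original nonlinear equations.
```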