NASA Technical Reports Server (NTRS)
Choi, Sung R.; Salem, Jonathan A.; Holland, Frederic A.
1997-01-01
The two estimation methods, the individual-data and arithmetic-mean methods, were used to determine the slow crack growth (SCG) parameters (n and D) of advanced ceramics and glass from a large number of room- and elevated-temperature constant stress-rate ('dynamic fatigue') test data. For ceramic materials with a Weibull modulus greater than 10, the difference in the SCG parameters between the two estimation methods was negligible, whereas for glass specimens exhibiting a Weibull modulus of about 3 the difference was amplified, resulting in maximum differences of 16% and 13% in n and D, respectively. Of the two SCG parameters, n was more sensitive to the estimation method than D. The coefficient of variation in n was found to be somewhat greater in the individual-data method than in the arithmetic-mean method.
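Constant stress-rate testing rests on the standard power-law relation sigma_f = D * (stress rate)^(1/(n+1)), so both estimation methods reduce to a log-log linear fit. A minimal sketch of the two methods on synthetic data (the stress rates, specimen counts, noise level, and parameter values below are illustrative assumptions, not the paper's data):

```python
import numpy as np

def scg_parameters(stress_rates, strengths):
    """Fit the dynamic-fatigue relation sigma_f = D * rate**(1/(n+1)):
    a log-log linear fit gives slope = 1/(n+1) and intercept = log10(D)."""
    slope, intercept = np.polyfit(np.log10(stress_rates), np.log10(strengths), 1)
    return 1.0 / slope - 1.0, 10.0 ** intercept

rng = np.random.default_rng(0)
true_n, true_D = 20.0, 300.0                      # assumed values, arbitrary units
rates = np.repeat([0.1, 1.0, 10.0, 100.0], 10)    # four stress rates, 10 specimens each
strengths = true_D * rates ** (1.0 / (true_n + 1.0))
strengths = strengths * rng.lognormal(0.0, 0.05, rates.size)  # specimen scatter

# Individual-data method: regress all specimens at once
n_ind, D_ind = scg_parameters(rates, strengths)

# Arithmetic-mean method: regress the mean strength at each rate
unique_rates = np.unique(rates)
mean_strengths = np.array([strengths[rates == r].mean() for r in unique_rates])
n_mean, D_mean = scg_parameters(unique_rates, mean_strengths)
```

With the small scatter used here the two methods nearly coincide, in line with the high-Weibull-modulus result above; increasing the lognormal sigma widens the gap between them.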
Charles E. Rose; Thomas B. Lynch
2001-01-01
A method was developed for estimating parameters in an individual tree basal area growth model using a system of equations based on dbh rank classes. The estimation method developed is a compromise between an individual tree and a stand level basal area growth model that accounts for the correlation between trees within a plot by using seemingly unrelated regression (...
Is Bayesian Estimation Proper for Estimating the Individual's Ability? Research Report 80-3.
ERIC Educational Resources Information Center
Samejima, Fumiko
The effect of prior information in Bayesian estimation is considered, mainly from the standpoint of objective testing. In the estimation of a parameter belonging to an individual, the prior information is, in most cases, the density function of the population to which the individual belongs. Bayesian estimation was compared with maximum likelihood…
Experimental design and efficient parameter estimation in preclinical pharmacokinetic studies.
Ette, E I; Howie, C A; Kelman, A W; Whiting, B
1995-05-01
A Monte Carlo simulation technique used to evaluate the effect of the arrangement of concentrations on the efficiency of estimation of population pharmacokinetic parameters in the preclinical setting is described. Although the simulations were restricted to the one-compartment model with intravenous bolus input, they provide the basis for discussing some structural aspects involved in designing a destructive ("quantic") preclinical population pharmacokinetic study with a fixed sample size, as is usually the case in such studies. The efficiency of parameter estimation obtained with sampling strategies based on the three- and four-time-point designs was evaluated in terms of the percent prediction error, design number, individual and joint confidence-interval coverage for parameter estimates, and correlation analysis. The data sets contained random terms for both inter-animal and residual intra-animal variability. The results showed that the typical population parameter estimates for clearance and volume were efficiently (accurately and precisely) estimated for both designs, while interanimal variability (the only random-effect parameter that could be estimated) was inefficiently (inaccurately and imprecisely) estimated with most sampling schedules of the two designs. The exact location of the third and fourth time points for the three- and four-time-point designs, respectively, was not critical to the efficiency of overall estimation of all population parameters of the model. However, some individual population pharmacokinetic parameters were sensitive to the location of these times.
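For the one-compartment intravenous-bolus model the concentration is C(t) = (Dose/V) * exp(-(CL/V) * t), so a log-linear fit to pooled data recovers the typical clearance and volume. A minimal sketch of one Monte Carlo replicate of a destructive three-time-point design (all parameter values, variabilities, and sample sizes below are illustrative assumptions; the paper's analysis used population methods, not this naive pooled fit):

```python
import numpy as np

rng = np.random.default_rng(1)
dose = 100.0                 # mg, IV bolus
CL_pop, V_pop = 2.0, 10.0    # population clearance (L/h) and volume (L)
times = np.array([0.5, 2.0, 8.0])   # a three-time-point design

# Destructive ("quantic") sampling: each animal contributes one sample
n_per_time = 20
t_all, c_all = [], []
for t in times:
    # inter-animal variability (lognormal, ~20% CV) on CL and V
    CL = CL_pop * rng.lognormal(0.0, 0.2, n_per_time)
    V = V_pop * rng.lognormal(0.0, 0.2, n_per_time)
    conc = dose / V * np.exp(-CL / V * t)
    conc *= rng.lognormal(0.0, 0.1, n_per_time)   # residual error
    t_all.append(np.full(n_per_time, t))
    c_all.append(conc)
t_all, c_all = np.concatenate(t_all), np.concatenate(c_all)

# Naive pooled fit: ln C = ln(dose/V) - (CL/V) * t
slope, intercept = np.polyfit(t_all, np.log(c_all), 1)
V_hat = dose / np.exp(intercept)
CL_hat = -slope * V_hat
```

Repeating this over many replicates and designs, and comparing the spread of the estimates, is the essence of the simulation study described above.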
Estimating demographic parameters using a combination of known-fate and open N-mixture models
Schmidt, Joshua H.; Johnson, Devin S.; Lindberg, Mark S.; Adams, Layne G.
2015-01-01
Accurate estimates of demographic parameters are required to infer appropriate ecological relationships and inform management actions. Known-fate data from marked individuals are commonly used to estimate survival rates, whereas N-mixture models use count data from unmarked individuals to estimate multiple demographic parameters. However, a joint approach combining the strengths of both analytical tools has not been developed. Here we develop an integrated model combining known-fate and open N-mixture models, allowing the estimation of detection probability, recruitment, and the joint estimation of survival. We demonstrate our approach through both simulations and an applied example using four years of known-fate and pack count data for wolves (Canis lupus). Simulation results indicated that the integrated model reliably recovered parameters with no evidence of bias, and survival estimates were more precise under the joint model. Results from the applied example indicated that the marked sample of wolves was biased toward individuals with higher apparent survival rates than the unmarked pack mates, suggesting that joint estimates may be more representative of the overall population. Our integrated model is a practical approach for reducing bias while increasing precision and the amount of information gained from mark–resight data sets. We provide implementations in both the BUGS language and an R package.
Kinnamon, Daniel D; Lipsitz, Stuart R; Ludwig, David A; Lipshultz, Steven E; Miller, Tracie L
2010-04-01
The hydration of fat-free mass, or hydration fraction (HF), is often defined as a constant body composition parameter in a two-compartment model and then estimated from in vivo measurements. We showed that the widely used estimator for the HF parameter in this model, the mean of the ratios of measured total body water (TBW) to fat-free mass (FFM) in individual subjects, can be inaccurate in the presence of additive technical errors. We then proposed a new instrumental variables estimator that accurately estimates the HF parameter in the presence of such errors. In Monte Carlo simulations, the mean of the ratios of TBW to FFM was an inaccurate estimator of the HF parameter, and inferences based on it had actual type I error rates more than 13 times the nominal 0.05 level under certain conditions. The instrumental variables estimator was accurate and maintained an actual type I error rate close to the nominal level in all simulations. When estimating and performing inference on the HF parameter, the proposed instrumental variables estimator should yield accurate estimates and correct inferences in the presence of additive technical errors, but the mean of the ratios of TBW to FFM in individual subjects may not.
Borchers, D L; Langrock, R
2015-12-01
We develop maximum likelihood methods for line transect surveys in which animals go undetected at distance zero, either because they are stochastically unavailable while within view or because they are missed when they are available. These incorporate a Markov-modulated Poisson process model for animal availability, allowing more clustered availability events than is possible with Poisson availability models. They include a mark-recapture component arising from the independent-observer survey, leading to more accurate estimation of detection probability given availability. We develop models for situations in which (a) multiple detections of the same individual are possible and (b) some or all of the availability process parameters are estimated from the line transect survey itself, rather than from independent data. We investigate estimator performance by simulation, and compare the multiple-detection estimators with estimators that use only initial detections of individuals, and with a single-observer estimator. Simultaneous estimation of detection function parameters and availability model parameters is shown to be feasible from the line transect survey alone with multiple detections and double-observer data but not with single-observer data. Recording multiple detections of individuals improves estimator precision substantially when estimating the availability model parameters from survey data, and we recommend that these data be gathered. We apply the methods to estimate detection probability from a double-observer survey of North Atlantic minke whales, and find that double-observer data greatly improve estimator precision here too. © 2015 The Authors Biometrics published by Wiley Periodicals, Inc. on behalf of International Biometric Society.
NASA Astrophysics Data System (ADS)
Post, Hanna; Vrugt, Jasper A.; Fox, Andrew; Vereecken, Harry; Hendricks Franssen, Harrie-Jan
2017-03-01
The Community Land Model (CLM) contains many parameters whose values are uncertain and thus require careful estimation for model application at individual sites. Here we used Bayesian inference with the DiffeRential Evolution Adaptive Metropolis (DREAM(zs)) algorithm to estimate eight CLM v.4.5 ecosystem parameters using 1-year records of half-hourly net ecosystem CO2 exchange (NEE) observations at four central European sites with different plant functional types (PFTs). The posterior CLM parameter distributions of each site were estimated per individual season and on a yearly basis. These estimates were then evaluated using NEE data from an independent evaluation period and data from "nearby" FLUXNET sites at 600 km distance from the original sites. Latent variables (multipliers) were used to explicitly treat uncertainty in the initial carbon-nitrogen pools. The posterior parameter estimates were superior to their default values in their ability to track and explain the measured NEE data of each site. The seasonal parameter values reduced the bias in the simulated NEE values by more than 50% (averaged over all sites). The most consistent performance of CLM during the evaluation period was found for the posterior parameter values of the forest PFTs, and contrary to the C3-grass and C3-crop sites, the latent variables of the initial pools further enhanced the quality-of-fit. The carbon sink function of the forest PFTs significantly increased with the posterior parameter estimates. We thus conclude that land surface model predictions of carbon stocks and fluxes require careful consideration of uncertain ecological parameters and initial states.
Knopman, Debra S.; Voss, Clifford I.
1988-01-01
Sensitivities of solute concentration to parameters associated with first-order chemical decay, boundary conditions, initial conditions, and multilayer transport are examined in one-dimensional analytical models of transient solute transport in porous media. A sensitivity is a change in solute concentration resulting from a change in a model parameter. Sensitivity analysis is important because the minimum information required for the estimation of model parameters by regression on chemical data is expressed in terms of sensitivities. Nonlinear regression models of solute transport were tested on sets of noiseless observations from known models that exceeded the minimum sensitivity information requirements. Results demonstrate that the regression models consistently converged to the correct parameters even when the initial sets of parameter values deviated substantially from the correct parameters. On the basis of the sensitivity analysis, several statements may be made about the design of sampling for parameter estimation for the models examined: (1) estimation of parameters associated with solute transport in the individual layers of a multilayer system is possible even when solute concentrations in the individual layers are mixed in an observation well; (2) when estimating parameters in a decaying upstream boundary condition, observations are best made late in the passage of the front, near a time chosen by adding the inverse of a hypothesized value of the source decay parameter to the estimated mean travel time at a given downstream location; (3) estimation of a first-order chemical decay parameter requires observations to be made late in the passage of the front, preferably near a location corresponding to a travel time of √2 times the half-life of the solute; and (4) estimation of a parameter relating to spatial variability in an initial condition requires observations to be made early in time relative to passage of the solute front.
Effects of control inputs on the estimation of stability and control parameters of a light airplane
NASA Technical Reports Server (NTRS)
Cannaday, R. L.; Suit, W. T.
1977-01-01
The maximum likelihood parameter estimation technique was used to determine the values of stability and control derivatives from flight test data for a low-wing, single-engine, light airplane. Several input forms were used during the tests to investigate the consistency of parameter estimates as it relates to inputs. These consistencies were compared by using the ensemble variance and estimated Cramer-Rao lower bound. In addition, the relationship between inputs and parameter correlations was investigated. Results from the stabilator inputs are inconclusive but the sequence of rudder input followed by aileron input or aileron followed by rudder gave more consistent estimates than did rudder or ailerons individually. Also, square-wave inputs appeared to provide slightly improved consistency in the parameter estimates when compared to sine-wave inputs.
Rüdt, Matthias; Gillet, Florian; Heege, Stefanie; Hitzler, Julian; Kalbfuss, Bernd; Guélat, Bertrand
2015-09-25
Application of model-based design is appealing to support the development of protein chromatography in the biopharmaceutical industry. However, the required efforts for parameter estimation are frequently perceived as time-consuming and expensive. In order to speed up this work, a new parameter estimation approach for modelling ion-exchange chromatography in linear conditions was developed. It aims at reducing the time and protein demand for the model calibration. The method combines the estimation of kinetic and thermodynamic parameters based on the simultaneous variation of the gradient slope and the residence time in a set of five linear gradient elutions. The parameters are estimated from a Yamamoto plot and a gradient-adjusted Van Deemter plot. The combined approach increases the information extracted per experiment compared to the individual methods. As a proof of concept, the combined approach was successfully applied for a monoclonal antibody on a cation-exchanger and for a Fc-fusion protein on an anion-exchange resin. The individual parameter estimations for the mAb confirmed that the new approach maintained the accuracy of the usual Yamamoto and Van Deemter plots. In the second case, offline size-exclusion chromatography was performed in order to estimate the thermodynamic parameters of an impurity (high molecular weight species) simultaneously with the main product. Finally, the parameters obtained from the combined approach were used in a lumped kinetic model to simulate the chromatography runs. The simulated chromatograms obtained for a wide range of gradient lengths and residence times showed only small deviations compared to the experimental data. Copyright © 2015 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Tong, M.; Xue, M.
2006-12-01
An important source of model error for convective-scale data assimilation and prediction is microphysical parameterization. This study investigates the possibility of estimating up to five fundamental microphysical parameters, which are closely involved in the definition of drop size distribution of microphysical species in a commonly used single-moment ice microphysics scheme, using radar observations and the ensemble Kalman filter method. The five parameters include the intercept parameters for rain, snow and hail/graupel, and the bulk densities of hail/graupel and snow. Parameter sensitivity and identifiability are first examined. The ensemble square-root Kalman filter (EnSRF) is employed for simultaneous state and parameter estimation. Observing system simulation (OSS) experiments are performed for a model-simulated supercell storm, in which the five microphysical parameters are estimated individually or in different combinations starting from different initial guesses. When error exists in only one of the microphysical parameters, the parameter can be successfully estimated without exception. The estimation of multiple parameters is found to be less robust, with end results of estimation being sensitive to the realization of the initial parameter perturbation. This is believed to be because of the reduced parameter identifiability and the existence of non-unique solutions. The results of state estimation are, however, always improved when simultaneous parameter estimation is performed, even when the estimated parameter values are not accurate.
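The core mechanism, augmenting the state vector with uncertain parameters so that ensemble covariances spread observation information onto them, can be illustrated on a scalar toy model. The forced linear model, parameter values, and ensemble settings below are illustrative assumptions, not the storm-scale setup, and a stochastic EnKF with perturbed observations stands in for the EnSRF:

```python
import numpy as np

rng = np.random.default_rng(2)
a_true, obs_err = 0.9, 0.1
n_ens, n_steps = 50, 200

# Truth run of a forced scalar model x_{t+1} = a*x_t + sin(0.1*t),
# observed with Gaussian noise
x, obs = 0.0, []
for t in range(n_steps):
    x = a_true * x + np.sin(0.1 * t)
    obs.append(x + rng.normal(0.0, obs_err))

# Augmented ensemble: each member carries (state, parameter)
x_ens = np.zeros(n_ens)
a_ens = rng.normal(0.7, 0.1, n_ens)          # biased initial parameter guess

for t in range(n_steps):
    # Forecast step; small additive jitter keeps the parameter spread alive
    x_ens = a_ens * x_ens + np.sin(0.1 * t)
    a_ens = a_ens + rng.normal(0.0, 0.01, n_ens)

    # Stochastic EnKF analysis with perturbed observations
    d = obs[t] + rng.normal(0.0, obs_err, n_ens)
    Pxx = np.var(x_ens)
    Pax = np.cov(a_ens, x_ens)[0, 1]
    Kx = Pxx / (Pxx + obs_err**2)
    Ka = Pax / (Pxx + obs_err**2)
    innov = d - x_ens
    x_ens = x_ens + Kx * innov
    a_ens = a_ens + Ka * innov

a_est = a_ens.mean()     # ensemble-mean parameter estimate, drawn toward a_true
```

The parameter update works only through its sampled correlation with the observed state, which is why identifiability degrades when several parameters must be estimated at once, as reported above.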
Relative effects of survival and reproduction on the population dynamics of emperor geese
Schmutz, Joel A.; Rockwell, Robert F.; Petersen, Margaret R.
1997-01-01
Populations of emperor geese (Chen canagica) in Alaska declined sometime between the mid-1960s and the mid-1980s and have increased little since. To promote recovery of this species to former levels, managers need to know how much their perturbations of survival and/or reproduction would affect population growth rate (λ). We constructed an individual-based population model to evaluate the relative effect of altering mean values of various survival and reproductive parameters on λ and fall age structure (AS, defined as the proportion of juveniles), assuming additive rather than compensatory relations among parameters. Altering survival of adults had markedly greater relative effects on λ than did equally proportionate changes in either juvenile survival or reproductive parameters. We found the opposite pattern for relative effects on AS. Due to concerns about bias in the initial parameter estimates used in our model, we used 5 additional sets of parameter estimates with this model structure. We found that estimates of survival based on aerial survey data gathered each fall resulted in models that corresponded more closely to independent estimates of λ than did models that used mark-recapture estimates of survival. This disparity suggests that mark-recapture estimates of survival are biased low. To further explore how parameter estimates affected estimates of λ, we used values of survival and reproduction found in other goose species, and we examined the effect of a hypothesized correlation between an individual's clutch size and the subsequent survival of her young. The rank order of parameters in their relative effects on λ was consistent for all 6 parameter sets we examined. The observed variation in relative effects on λ among the 6 parameter sets is indicative of how relative effects on λ may vary among goose populations.
With this knowledge of the relative effects of survival and reproductive parameters on λ, managers can make more informed decisions about which parameters to influence through management or to target for future study.
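The relative-effect comparison described above can be reproduced in miniature with a two-stage (juvenile/adult) projection matrix, where the population growth rate λ is the dominant eigenvalue. The parameter values below are hypothetical goose-like numbers, not the paper's estimates:

```python
import numpy as np

def growth_rate(F, s_juv, s_ad):
    """Dominant eigenvalue (lambda) of a two-stage projection matrix:
    adults produce F juveniles; juveniles survive to adulthood with
    probability s_juv; adults survive with probability s_ad."""
    A = np.array([[0.0,   F],
                  [s_juv, s_ad]])
    return max(np.real(np.linalg.eigvals(A)))

# Hypothetical parameter values (not the paper's estimates)
F, s_juv, s_ad = 0.6, 0.5, 0.9
base = growth_rate(F, s_juv, s_ad)

# Relative effect on lambda of the same 10% proportional increase
d_ad = growth_rate(F, s_juv, 1.1 * s_ad) - base
d_juv = growth_rate(F, 1.1 * s_juv, s_ad) - base
d_F = growth_rate(1.1 * F, s_juv, s_ad) - base
```

The same 10% proportional increase moves λ far more when applied to adult survival than to juvenile survival or fecundity, matching the qualitative pattern reported above.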
Empirical Bayes estimation of proportions with application to cowbird parasitism rates
Link, W.A.; Hahn, D.C.
1996-01-01
Bayesian models provide a structure for studying collections of parameters such as are considered in the investigation of communities, ecosystems, and landscapes. This structure allows for improved estimation of individual parameters, by considering them in the context of a group of related parameters. Individual estimates are differentially adjusted toward an overall mean, with the magnitude of their adjustment based on their precision. Consequently, Bayesian estimation allows for a more credible identification of extreme values in a collection of estimates. Bayesian models regard individual parameters as values sampled from a specified probability distribution, called a prior. The requirement that the prior be known is often regarded as an unattractive feature of Bayesian analysis and may be the reason why Bayesian analyses are not frequently applied in ecological studies. Empirical Bayes methods provide an alternative approach that incorporates the structural advantages of Bayesian models while requiring a less stringent specification of prior knowledge. Rather than requiring that the prior distribution be known, empirical Bayes methods require only that it be in a certain family of distributions, indexed by hyperparameters that can be estimated from the available data. This structure is of interest per se, in addition to its value in allowing for improved estimation of individual parameters; for example, hypotheses regarding the existence of distinct subgroups in a collection of parameters can be considered under the empirical Bayes framework by allowing the hyperparameters to vary among subgroups. Though empirical Bayes methods have been applied in a variety of contexts, they have received little attention in the ecological literature. We describe the empirical Bayes approach in application to estimation of proportions, using data obtained in a community-wide study of cowbird parasitism rates for illustration. 
Since observed proportions based on small sample sizes are heavily adjusted toward the mean, extreme values among empirical Bayes estimates identify those species for which there is the greatest evidence of extreme parasitism rates. Applying a subgroup analysis to our data on cowbird parasitism rates, we conclude that parasitism rates for Neotropical Migrants as a group are no greater than those of Resident/Short-distance Migrant species in this forest community. Our data and analyses demonstrate that the parasitism rates for certain Neotropical Migrant species are remarkably low (Wood Thrush and Rose-breasted Grosbeak) while those for others are remarkably high (Ovenbird and Red-eyed Vireo).
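The shrinkage mechanism described above can be sketched with a beta-binomial empirical Bayes estimator: fit Beta(α, β) hyperparameters to the observed proportions by the method of moments (a crude fit that ignores binomial sampling error), then replace each raw rate with its posterior mean. The counts below are hypothetical, not the study's data:

```python
import numpy as np

def eb_beta_binomial(k, n):
    """Empirical Bayes shrinkage of binomial proportions.
    A Beta(alpha, beta) prior is fit to the observed proportions by the
    method of moments; each rate is then replaced by its posterior mean
    (k + alpha) / (n + alpha + beta), which pulls small-sample estimates
    strongly toward the overall mean."""
    k, n = np.asarray(k, float), np.asarray(n, float)
    p = k / n
    m, v = p.mean(), p.var(ddof=1)
    common = m * (1.0 - m) / v - 1.0   # assumes v < m*(1 - m)
    alpha, beta = m * common, (1.0 - m) * common
    return (k + alpha) / (n + alpha + beta), alpha, beta

# Hypothetical (parasitized nests, total nests) per species
k = [0, 2, 5, 9, 3, 1]
n = [4, 10, 12, 11, 25, 30]
shrunk, alpha, beta = eb_beta_binomial(k, n)
```

The species with 0 of 4 nests parasitized no longer gets an implausible estimate of exactly zero; its posterior mean is pulled toward the community-wide rate, while well-sampled species move little.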
Liang, Hua; Miao, Hongyu; Wu, Hulin
2010-03-01
Modeling viral dynamics in HIV/AIDS studies has resulted in deep understanding of pathogenesis of HIV infection from which novel antiviral treatment guidance and strategies have been derived. Viral dynamics models based on nonlinear differential equations have been proposed and well developed over the past few decades. However, it is quite challenging to use experimental or clinical data to estimate the unknown parameters (both constant and time-varying parameters) in complex nonlinear differential equation models. Therefore, investigators usually fix some parameter values, from the literature or by experience, to obtain only parameter estimates of interest from clinical or experimental data. However, when such prior information is not available, it is desirable to determine all the parameter estimates from data. In this paper, we intend to combine the newly developed approaches, a multi-stage smoothing-based (MSSB) method and the spline-enhanced nonlinear least squares (SNLS) approach, to estimate all HIV viral dynamic parameters in a nonlinear differential equation model. In particular, to the best of our knowledge, this is the first attempt to propose a comparatively thorough procedure, accounting for both efficiency and accuracy, to rigorously estimate all key kinetic parameters in a nonlinear differential equation model of HIV dynamics from clinical data. These parameters include the proliferation rate and death rate of uninfected HIV-targeted cells, the average number of virions produced by an infected cell, and the infection rate which is related to the antiviral treatment effect and is time-varying. To validate the estimation methods, we verified the identifiability of the HIV viral dynamic model and performed simulation studies. We applied the proposed techniques to estimate the key HIV viral dynamic parameters for two individual AIDS patients treated with antiretroviral therapies. 
We demonstrate that HIV viral dynamics can be well characterized and quantified for individual patients. As a result, personalized treatment decision based on viral dynamic models is possible.
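A standard target-cell-limited model of the kind referred to above couples uninfected cells T, infected cells I, and virions V through nonlinear ODEs. The sketch below integrates such a system with forward Euler under a constant treatment efficacy; the paper estimates a time-varying infection rate, and all parameter values here are illustrative assumptions, not patient estimates:

```python
# Hypothetical parameter values for illustration (not patient estimates)
lam, d = 10.0, 0.005            # target-cell production and death rates
beta = 1e-5                     # infection rate (time-varying in the paper; constant here)
delta, p, c = 0.5, 100.0, 3.0   # infected-cell death, virion production, clearance
eps = 0.9                       # treatment efficacy reducing new infections

# Start at the pre-treatment steady state of this parameterization
T, I, V = 1500.0, 5.0, 500.0 / 3.0
V0 = V

# dT/dt = lam - d*T - (1-eps)*beta*T*V
# dI/dt = (1-eps)*beta*T*V - delta*I
# dV/dt = p*I - c*V
dt, days = 0.01, 30.0
for _ in range(int(days / dt)):
    dT = lam - d * T - (1.0 - eps) * beta * T * V
    dI = (1.0 - eps) * beta * T * V - delta * I
    dV = p * I - c * V
    T, I, V = T + dt * dT, I + dt * dI, V + dt * dV
```

Under potent therapy the simulated viral load decays while the target-cell count recovers; fitting such trajectories to an individual patient's measurements is the estimation problem the paper addresses.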
Sudakov, S K; Nazarova, G A; Alekseeva, E V; Bashkatova, V G
2013-07-01
We compared individual anxiety assessed by three standard tests, the open-field test, the elevated plus-maze test, and the Vogel conflict drinking test, in the same animals. No significant correlations between the main anxiety parameters were found across these three experimental models. Groups of high- and low-anxiety rats were formed on the basis of a single parameter followed by selection of the two extreme groups (10%). It was found that none of the tests alone could be used for reliable estimation of individual anxiety in rats. The individual anxiety level was determined with a high degree of confidence in high-anxiety and low-anxiety rats demonstrating behavioral parameters above and below the mean values in all tests used. Therefore, several tests should be used for evaluation of individual anxiety or sensitivity to emotional stress.
Estimating transition probabilities in unmarked populations --entropy revisited
Cooch, E.G.; Link, W.A.
1999-01-01
The probability of surviving and moving between 'states' is of great interest to biologists. Robust estimation of these transitions using multiple observations of individually identifiable marked individuals has received considerable attention in recent years. However, in some situations, individuals are not identifiable (or have a very low recapture rate), although all individuals in a sample can be assigned to a particular state (e.g. breeding or non-breeding) without error. In such cases, only aggregate data (number of individuals in a given state at each occasion) are available. If the underlying matrix of transition probabilities does not vary through time and aggregate data are available for several time periods, then it is possible to estimate these parameters using least-squares methods. Even when such data are available, this assumption of stationarity will usually be deemed overly restrictive and, frequently, data will only be available for two time periods. In these cases, the problem reduces to estimating the most likely matrix (or matrices) leading to the observed frequency distribution of individuals in each state. An entropy maximization approach has been previously suggested. In this paper, we show that the entropy approach rests on a particular limiting assumption, and does not provide estimates of latent population parameters (the transition probabilities), but rather predictions of realized rates.
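With a stationary transition matrix and aggregate counts over several occasions, the least-squares estimation mentioned above reduces to a linear regression. A minimal sketch with two states and noiseless counts (the matrix and initial counts are illustrative assumptions):

```python
import numpy as np

# True transition matrix between two states (e.g. breeder / non-breeder);
# rows sum to one (closed population, no mortality, for simplicity)
P = np.array([[0.8, 0.2],
              [0.4, 0.6]])

# Aggregate data: only the number of individuals per state at each occasion
counts = [np.array([600.0, 400.0])]
for _ in range(8):
    counts.append(counts[-1] @ P)
counts = np.array(counts)

# Least-squares recovery of P assuming stationarity: with row sums fixed
# to one, the first column [p11, p21] determines the whole matrix
X = counts[:-1]                 # predictors: counts at time t
y = counts[1:, 0]               # response: state-1 count at time t+1
col1, *_ = np.linalg.lstsq(X, y, rcond=None)
P_hat = np.column_stack([col1, 1.0 - col1])
```

With only two occasions this regression is underdetermined, which is exactly the situation where the entropy-maximization approach discussed above has been proposed, and where its output should be read as a prediction of realized rates rather than an estimate of the latent transition probabilities.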
Estimating Convection Parameters in the GFDL CM2.1 Model Using Ensemble Data Assimilation
NASA Astrophysics Data System (ADS)
Li, Shan; Zhang, Shaoqing; Liu, Zhengyu; Lu, Lv; Zhu, Jiang; Zhang, Xuefeng; Wu, Xinrong; Zhao, Ming; Vecchi, Gabriel A.; Zhang, Rong-Hua; Lin, Xiaopei
2018-04-01
Parametric uncertainty in convection parameterization is one major source of model errors that cause model climate drift. Convection parameter tuning has been widely studied in atmospheric models to help mitigate the problem. However, in a fully coupled general circulation model (CGCM), convection parameters which impact the ocean as well as the climate simulation may have different optimal values. This study explores the possibility of estimating convection parameters with an ensemble coupled data assimilation method in a CGCM. Impacts of the convection parameter estimation on climate analysis and forecast are analyzed. In a twin experiment framework, five convection parameters in the GFDL coupled model CM2.1 are estimated individually and simultaneously under both perfect and imperfect model regimes. Results show that the ensemble data assimilation method can help reduce the bias in convection parameters. With estimated convection parameters, the analyses and forecasts for both the atmosphere and the ocean are generally improved. It is also found that information in low latitudes is relatively more important for estimating convection parameters. This study further suggests that when important parameters in appropriate physical parameterizations are identified, incorporating their estimation into traditional ensemble data assimilation procedure could improve the final analysis and climate prediction.
NASA Astrophysics Data System (ADS)
Luo, Ning; Illman, Walter A.
2016-09-01
Analyses are presented of long-term hydrographs perturbed by variable pumping/injection events in a confined aquifer at a municipal water-supply well field in the Region of Waterloo, Ontario (Canada). Such records are typically not considered for aquifer test analysis. Here, the water-level variations are fingerprinted to pumping/injection rate changes using the Theis model implemented in the WELLS code coupled with PEST. Analyses of these records yield a set of transmissivity (T) and storativity (S) estimates between each monitoring and production borehole. These individual estimates are found to poorly predict water-level variations at nearby monitoring boreholes not used in the calibration effort. On the other hand, the geometric means of the individual T and S estimates are similar to those obtained from previous pumping tests conducted at the same site and adequately predict water-level variations in other boreholes. The analyses reveal that long-term municipal water-level records are amenable to analyses using a simple analytical solution to estimate aquifer parameters. However, uniform parameters estimated with analytical solutions should be considered as first rough estimates. More accurate hydraulic parameters should be obtained by calibrating a three-dimensional numerical model that rigorously captures the complexities of the site with these data.
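As a concrete sketch of estimating T and S from drawdown records, the code below uses the Cooper–Jacob straight-line approximation of the Theis solution on synthetic data. The pumping rate, distance, and aquifer values are illustrative assumptions; the WELLS/PEST calibration described above handles far more general variable-rate records:

```python
import numpy as np

# Synthetic drawdown from the Cooper-Jacob approximation of the Theis
# solution, s = (2.3*Q / (4*pi*T)) * log10(2.25*T*t / (r^2*S)),
# valid for small u = r^2*S / (4*T*t); illustrative values only
Q = 0.01                        # pumping rate, m^3/s
T_true, S_true = 5e-3, 2e-4     # transmissivity (m^2/s) and storativity (-)
r = 50.0                        # distance to the monitoring borehole, m

t = np.logspace(3, 5, 20)       # observation times, s (u < 0.03 throughout)
s = (2.3 * Q / (4.0 * np.pi * T_true)) * np.log10(2.25 * T_true * t / (r**2 * S_true))

# Straight-line fit of drawdown against log10(time):
# the slope per log cycle gives T, the zero-drawdown intercept gives S
slope, intercept = np.polyfit(np.log10(t), s, 1)
T_hat = 2.3 * Q / (4.0 * np.pi * slope)
t0 = 10.0 ** (-intercept / slope)          # time at zero drawdown
S_hat = 2.25 * T_hat * t0 / r**2
```

On noiseless synthetic data the fit recovers T and S exactly; with field records, scatter in the semilog line is what drives the spread of the individual estimates noted above.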
A Note on Recurring Misconceptions When Fitting Nonlinear Mixed Models.
Harring, Jeffrey R; Blozis, Shelley A
2016-01-01
Nonlinear mixed-effects (NLME) models are used when analyzing continuous repeated measures data taken on each of a number of individuals where the focus is on characteristics of complex, nonlinear individual change. Challenges with fitting NLME models and interpreting analytic results have been well documented in the statistical literature. However, parameter estimates as well as fitted functions from NLME analyses in recent articles have been misinterpreted, suggesting the need for clarification of these issues before these misconceptions become fact. These misconceptions arise from the choice of popular estimation algorithms, namely, the first-order linearization method (FO) and Gaussian-Hermite quadrature (GHQ) methods, and how these choices necessarily lead to population-average (PA) or subject-specific (SS) interpretations of model parameters, respectively. These estimation approaches also affect the fitted function for the typical individual, the lack-of-fit of individuals' predicted trajectories, and vice versa.
Outlier Detection in Infrared Signatures
1992-01-01
for model identification. Gnanadesikan (1977) pointed out that Hampel's influence function (Hampel (1974)) can be used to estimate the effect...individual outliers have on sample estimates of parameters. Chernick noted that the influence function for parameters of interest to the users of a data...important outliers, while those with small estimated influence are not). In this way the influence function provides a "distance" measure for multi
Wicke, Jason; Dumas, Genevieve A; Costigan, Patrick A
2009-01-05
Modeling of the body segments to estimate segment inertial parameters is required in the kinetic analysis of human motion. A new geometric model for the trunk has been developed that uses various cross-sectional shapes to estimate segment volume and adopts a non-uniform density function that is gender-specific. The goal of this study was to test the accuracy of the new model for estimating the trunk's inertial parameters by comparing it to the more current models used in biomechanical research. Trunk inertial parameters estimated from dual X-ray absorptiometry (DXA) were used as the standard. Twenty-five female and 24 male college-aged participants were recruited for the study. Comparisons of the new model to the accepted models were accomplished by determining the error between the models' trunk inertial estimates and that from DXA. Results showed that the new model was more accurate across all inertial estimates than the other models. The new model had errors within 6.0% for both genders, whereas the other models had higher average errors ranging from 10% to over 50% and were much more inconsistent between the genders. In addition, there was little consistency in the level of accuracy for the other models when estimating the different inertial parameters. These results suggest that the new model provides more accurate and consistent trunk inertial estimates than the other models for both female and male college-aged individuals. However, similar studies need to be performed using other populations, such as elderly or individuals from a distinct morphology (e.g. obese). In addition, the effect of using different models on the outcome of kinetic parameters, such as joint moments and forces needs to be assessed.
Trap configuration and spacing influences parameter estimates in spatial capture-recapture models
Sun, Catherine C.; Fuller, Angela K.; Royle, J. Andrew
2014-01-01
An increasing number of studies employ spatial capture-recapture models to estimate population size, but there has been limited research on how different spatial sampling designs and trap configurations influence parameter estimators. Spatial capture-recapture models provide an advantage over non-spatial models by explicitly accounting for heterogeneous detection probabilities among individuals that arise due to the spatial organization of individuals relative to sampling devices. We simulated black bear (Ursus americanus) populations and spatial capture-recapture data to evaluate the influence of trap configuration and trap spacing on estimates of population size and a spatial scale parameter, sigma, that relates to home range size. We varied detection probability and home range size, and considered three trap configurations common to large-mammal mark-recapture studies: regular spacing, clustered, and a temporal sequence of different cluster configurations (i.e., trap relocation). We explored trap spacing and number of traps per cluster by varying the number of traps. The clustered arrangement performed well when detection rates were low, and provides for easier field implementation than the sequential trap arrangement. However, performance differences between trap configurations diminished as home range size increased. Our simulations suggest it is important to consider trap spacing relative to home range sizes, with traps ideally spaced no more than twice the spatial scale parameter. While spatial capture-recapture models can accommodate different sampling designs and still estimate parameters with accuracy and precision, our simulations demonstrate that aspects of sampling design, namely trap configuration and spacing, must consider study area size, ranges of individual movement, and home range sizes in the study population.
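The spatial scale parameter sigma enters through the standard half-normal encounter model, which the trap-spacing guideline above refers to. A minimal sketch (illustrative parameter values):

```python
import math

def detection_prob(p0, sigma, d):
    """Half-normal detection model used in spatial capture-recapture:
    p(d) = p0 * exp(-d^2 / (2 * sigma^2)),
    where d is the distance from a trap to an individual's activity centre."""
    return p0 * math.exp(-d ** 2 / (2.0 * sigma ** 2))
```

At the recommended maximum spacing of 2*sigma, detection has already fallen to exp(-2), about 13.5% of its at-centre value, which is why wider spacing risks leaving individuals with near-zero exposure to any trap.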
Estimating parameters of hidden Markov models based on marked individuals: use of robust design data
Kendall, William L.; White, Gary C.; Hines, James E.; Langtimm, Catherine A.; Yoshizaki, Jun
2012-01-01
Development and use of multistate mark-recapture models, which provide estimates of parameters of Markov processes in the face of imperfect detection, have become common over the last twenty years. Recently, estimating parameters of hidden Markov models, where the state of an individual can be uncertain even when it is detected, has received attention. Previous work has shown that ignoring state uncertainty biases estimates of survival and state transition probabilities, thereby reducing the power to detect effects. Efforts to adjust for state uncertainty have included special cases and a general framework for a single sample per period of interest. We provide a flexible framework for adjusting for state uncertainty in multistate models, while utilizing multiple sampling occasions per period of interest to increase precision and remove parameter redundancy. These models also produce direct estimates of state structure for each primary period, even for the case where there is just one sampling occasion. We apply our model to expected value data, and to data from a study of Florida manatees, to provide examples of the improvement in precision due to secondary capture occasions. We also provide user-friendly software to implement these models. This general framework could also be used by practitioners to consider constrained models of particular interest, or model the relationship between within-primary period parameters (e.g., state structure) and between-primary period parameters (e.g., state transition probabilities).
A seasonal Bartlett-Lewis Rectangular Pulse model
NASA Astrophysics Data System (ADS)
Ritschel, Christoph; Agbéko Kpogo-Nuwoklo, Komlan; Rust, Henning; Ulbrich, Uwe; Névir, Peter
2016-04-01
Precipitation time series with a high temporal resolution are needed as input for several hydrological applications, e.g. river runoff or sewer system models. As adequate observational data sets are often not available, simulated precipitation series are used instead. Poisson-cluster models are commonly applied to generate these series. It has been shown that this class of stochastic precipitation models reproduces important characteristics of observed rainfall well. For the gauge-based case study presented here, the Bartlett-Lewis rectangular pulse model (BLRPM) has been chosen. As certain model parameters have been shown to vary with season in a midlatitude moderate climate, due to different rainfall mechanisms dominating in winter and summer, model parameters are typically estimated separately for individual seasons or individual months. Here, we suggest a simultaneous parameter estimation for the whole year under the assumption that the seasonal variation of parameters can be described with harmonic functions. We use an observational precipitation series from Berlin with a high temporal resolution to exemplify the approach. We estimate BLRPM parameters with and without this seasonal extension and compare the results in terms of model performance and robustness of the estimation.
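The seasonal extension described above replaces per-month parameter sets with smooth annual cycles. A sketch of a first-order harmonic (the coefficients are placeholders, not fitted BLRPM values):

```python
import math

def seasonal_parameter(day_of_year, a0=1.0, a1=0.3, b1=0.1):
    """First-order harmonic: a0 + a1*cos(w*t) + b1*sin(w*t), w = 2*pi/365.25."""
    w = 2.0 * math.pi / 365.25
    return a0 + a1 * math.cos(w * day_of_year) + b1 * math.sin(w * day_of_year)
```

Fitting then estimates a handful of harmonic coefficients for the whole year at once, instead of a separate parameter set per month.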
Sensitivity of estimated muscle force in forward simulation of normal walking
Xiao, Ming; Higginson, Jill
2009-01-01
Generic muscle parameters are often used in muscle-driven simulations of human movement to estimate individual muscle forces and function. The results may not be valid since muscle properties vary from subject to subject. This study investigated the effect of using generic parameters in a muscle-driven forward simulation on muscle force estimation. We generated a normal walking simulation in OpenSim and examined the sensitivity of individual muscle forces to perturbations in muscle parameters, including the number of muscles, maximum isometric force, optimal fiber length and tendon slack length. We found that when changing the number of muscles included in the model, only the magnitude of the estimated muscle forces was affected. Our results also suggest it is especially important to use accurate values of tendon slack length and optimal fiber length for ankle plantarflexors and knee extensors. Changes in force production by one muscle were typically compensated for by changes in force production by muscles in the same functional muscle group, or the antagonistic muscle group. Conclusions regarding muscle function based on simulations with generic musculoskeletal parameters should be interpreted with caution. PMID:20498485
NASA Astrophysics Data System (ADS)
Pombo, Maíra; Denadai, Márcia Regina; Turra, Alexander
2013-05-01
Knowledge of population parameters and the ability to predict their responses to environmental changes are useful tools to aid in the appropriate management and conservation of natural resources. Samples of the sciaenid fish Stellifer rastrifer were taken from August 2003 through October 2004 in shallow areas of Caraguatatuba Bight, southeastern Brazil. The results showed a consistent presence of length-frequency classes throughout the year and low values of the gonadosomatic index (GSI) of this species, indicating that the area is not used for spawning or residence of adults, but rather shelters individuals in late stages of development. The results may serve as a caveat for assessments of transitional areas such as the present one, whose nursery function is neglected compared to estuaries and mangroves. The danger of mismanaging these areas by not considering their peculiarities is emphasized by using these data as a case study for the development of some broadly used population-parameter analyses. The individuals' body growth parameters from the von Bertalanffy model were estimated based on the most common approaches, and the best values obtained from traditional quantification methods of selection were very prone to bias. The low GSI estimated during the period was an important factor prompting the selection of more reliable body growth parameters (L∞ = 20.9, K = 0.37 and Z = 2.81), which were estimated by assuming the existence of spatial segregation by size. The data obtained suggest that the estimated mortality rate included a high rate of migration of older individuals to deeper areas, where we assume that they completed their development.
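The reported growth parameters plug directly into the von Bertalanffy equation. A sketch using the study's L∞ = 20.9 and K = 0.37 (t0 is assumed zero here, and t is assumed to be in years):

```python
import math

def von_bertalanffy(t, L_inf=20.9, K=0.37, t0=0.0):
    """Length at age t: L(t) = L_inf * (1 - exp(-K * (t - t0)))."""
    return L_inf * (1.0 - math.exp(-K * (t - t0)))
```

Length rises steeply at first and saturates toward L∞, which is why size-structured sampling (the spatial segregation by size assumed in the study) matters for recovering K and L∞ reliably.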
NASA Technical Reports Server (NTRS)
Ratnayake, Nalin A.; Koshimoto, Ed T.; Taylor, Brian R.
2011-01-01
The problem of parameter estimation on hybrid-wing-body type aircraft is complicated by the fact that many design candidates for such aircraft involve a large number of aerodynamic control effectors that act in coplanar motion. This fact adds to the complexity already present in the parameter estimation problem for any aircraft with a closed-loop control system. Decorrelation of system inputs must be performed in order to ascertain individual surface derivatives with any sort of mathematical confidence. Non-standard control surface configurations, such as clamshell surfaces and drag-rudder modes, further complicate the modeling task. In this paper, asymmetric, single-surface maneuvers are used to excite multiple axes of aircraft motion simultaneously. Time history reconstructions of the moment coefficients computed by the solved regression models are then compared to each other in order to assess relative model accuracy. The reduced flight-test time required for inner surface parameter estimation using multi-axis methods was found to come at the cost of slightly reduced accuracy and statistical confidence for linear regression methods. Since the multi-axis maneuvers captured parameter estimates similar to both longitudinal and lateral-directional maneuvers combined, the number of test points required for the inner, aileron-like surfaces could in theory have been reduced by 50%. While trends were similar, however, individual parameters estimated by a multi-axis model typically differed from those estimated by a single-axis model by an average absolute difference of roughly 15-20%, with decreased statistical significance. The multi-axis model exhibited an increase in overall fit error of roughly 1-5% for the linear regression estimates with respect to the single-axis model, when applied to flight data designed for each, respectively.
NASA Technical Reports Server (NTRS)
Deshpande, Manohar D.; Dudley, Kenneth
2003-01-01
A simple method is presented to estimate the complex dielectric constants of individual layers of a multilayer composite material. Using the MatLab Optimization Tools simple MatLab scripts are written to search for electric properties of individual layers so as to match the measured and calculated S-parameters. A single layer composite material formed by using materials such as Bakelite, Nomex Felt, Fiber Glass, Woven Composite B and G, Nano Material #0, Cork, Garlock, of different thicknesses are tested using the present approach. Assuming the thicknesses of samples unknown, the present approach is shown to work well in estimating the dielectric constants and the thicknesses. A number of two layer composite materials formed by various combinations of above individual materials are tested using the present approach. However, the present approach could not provide estimate values close to their true values when the thicknesses of individual layers were assumed to be unknown. This is attributed to the difficulty in modelling the presence of airgaps between the layers while doing the measurement of S-parameters. A few examples of three layer composites are also presented.
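The match-the-S-parameters idea can be illustrated with a much simpler forward model. The sketch below inverts the normal-incidence reflection from a lossless dielectric half-space by minimising the mismatch, a stand-in for the paper's layered S-parameter model and MatLab optimization scripts:

```python
import math
from scipy.optimize import minimize_scalar

def reflection(eps_r):
    """Normal-incidence reflection coefficient of a lossless dielectric half-space."""
    n = math.sqrt(eps_r)
    return (1.0 - n) / (1.0 + n)

# Synthetic "measurement" generated from a known permittivity, then recovered
# by searching for the permittivity that best matches it.
measured = reflection(4.3)
fit = minimize_scalar(lambda e: (reflection(e) - measured) ** 2,
                      bounds=(1.0, 20.0), method="bounded")
```

In the multilayer case the forward model becomes the full transfer-matrix S-parameter calculation, and as the abstract notes, unmodelled airgaps between layers can make the inverse problem ill-posed.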
Heinonen, Johannes P M; Palmer, Stephen C F; Redpath, Steve M; Travis, Justin M J
2014-01-01
Individual-based models have gained popularity in ecology, and enable simultaneous incorporation of spatial explicitness and population dynamic processes to understand spatio-temporal patterns of populations. We introduce an individual-based model for understanding and predicting spatial hen harrier (Circus cyaneus) population dynamics in Great Britain. The model uses a landscape with habitat, prey and game management indices. The hen harrier population was initialised according to empirical census estimates for 1988/89 and simulated until 2030, and predictions for 1998, 2004 and 2010 were compared to empirical census estimates for respective years. The model produced a good qualitative match to overall trends between 1989 and 2010. Parameter explorations revealed relatively high elasticity in particular to demographic parameters such as juvenile male mortality. This highlights the need for robust parameter estimates from empirical research. There are clearly challenges for replication of real-world population trends, but this model provides a useful tool for increasing understanding of drivers of hen harrier dynamics and focusing research efforts in order to inform conflict management decisions.
Clare, John; McKinney, Shawn T.; DePue, John E.; Loftin, Cynthia S.
2017-01-01
It is common to use multiple field sampling methods when implementing wildlife surveys to compare method efficacy or cost efficiency, integrate distinct pieces of information provided by separate methods, or evaluate method-specific biases and misclassification error. Existing models that combine information from multiple field methods or sampling devices permit rigorous comparison of method-specific detection parameters, enable estimation of additional parameters such as false-positive detection probability, and improve occurrence or abundance estimates, but with the assumption that the separate sampling methods produce detections independently of one another. This assumption is tenuous if methods are paired or deployed in close proximity simultaneously, a common practice that reduces the additional effort required to implement multiple methods and reduces the risk that differences between method-specific detection parameters are confounded by other environmental factors. We develop occupancy and spatial capture–recapture models that permit covariance between the detections produced by different methods, use simulation to compare estimator performance of the new models to models assuming independence, and provide an empirical application based on American marten (Martes americana) surveys using paired remote cameras, hair catches, and snow tracking. Simulation results indicate existing models that assume that methods independently detect organisms produce biased parameter estimates and substantially understate estimate uncertainty when this assumption is violated, while our reformulated models are robust to either methodological independence or covariance. Empirical results suggested that remote cameras and snow tracking had comparable probability of detecting present martens, but that snow tracking also produced false-positive marten detections that could potentially substantially bias distribution estimates if not corrected for. 
Remote cameras detected marten individuals more readily than passive hair catches. Inability to photographically distinguish individual sex did not appear to induce negative bias in camera density estimates; instead, hair catches appeared to produce detection competition between individuals that may have been a source of negative bias. Our model reformulations broaden the range of circumstances in which analyses incorporating multiple sources of information can be robustly used, and our empirical results demonstrate that using multiple field-methods can enhance inferences regarding ecological parameters of interest and improve understanding of how reliably survey methods sample these parameters.
White, L J; Evans, N D; Lam, T J G M; Schukken, Y H; Medley, G F; Godfrey, K R; Chappell, M J
2002-01-01
A mathematical model for the transmission of two interacting classes of mastitis causing bacterial pathogens in a herd of dairy cows is presented and applied to a specific data set. The data were derived from a field trial of a specific measure used in the control of these pathogens, where half the individuals were subjected to the control and in the others the treatment was discontinued. The resultant mathematical model (eight non-linear simultaneous ordinary differential equations) therefore incorporates heterogeneity in the host as well as the infectious agent and consequently the effects of control are intrinsic in the model structure. A structural identifiability analysis of the model is presented demonstrating that the scope of the novel method used allows application to high order non-linear systems. The results of a simultaneous estimation of six unknown system parameters are presented. Previous work has only estimated a subset of these either simultaneously or individually. Therefore not only are new estimates provided for the parameters relating to the transmission and control of the classes of pathogens under study, but also information about the relationships between them. We exploit the close link between mathematical modelling, structural identifiability analysis, and parameter estimation to obtain biological insights into the system modelled.
Urban air quality estimation study, phase 1
NASA Technical Reports Server (NTRS)
Diamante, J. M.; Englar, T. S., Jr.; Jazwinski, A. H.
1976-01-01
Possibilities are explored for applying estimation theory to the analysis, interpretation, and use of air quality measurements in conjunction with simulation models to provide a cost effective method of obtaining reliable air quality estimates for wide urban areas. The physical phenomenology of real atmospheric plumes from elevated localized sources is discussed. A fluctuating plume dispersion model is derived. Individual plume parameter formulations are developed along with associated a priori information. Individual measurement models are developed.
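For context, the steady-state Gaussian plume from an elevated point source, of which the paper's fluctuating-plume model is a generalization, takes the textbook form below (illustrative only, not the paper's derived model):

```python
import math

def gaussian_plume(Q, u, sig_y, sig_z, y, z, H):
    """Concentration at crosswind offset y and height z for source strength Q,
    wind speed u, effective stack height H, with ground reflection included."""
    lateral = math.exp(-y ** 2 / (2.0 * sig_y ** 2))
    vertical = (math.exp(-(z - H) ** 2 / (2.0 * sig_z ** 2))
                + math.exp(-(z + H) ** 2 / (2.0 * sig_z ** 2)))
    return Q / (2.0 * math.pi * u * sig_y * sig_z) * lateral * vertical
```

Estimation-theory approaches of the kind explored in the study treat parameters such as sig_y, sig_z and H as states to be estimated from sparse air-quality measurements.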
Preference heterogeneity in a count data model of demand for off-highway vehicle recreation
Thomas P Holmes; Jeffrey E Englin
2010-01-01
This paper examines heterogeneity in the preferences for OHV recreation by applying the random parameters Poisson model to a data set of off-highway vehicle (OHV) users at four National Forest sites in North Carolina. The analysis develops estimates of individual consumer surplus and finds that estimates are systematically affected by the random parameter specification...
Di Nardo, Francesco; Mengoni, Michele; Morettini, Micaela
2013-05-01
The present study provides a novel MATLAB-based parameter estimation procedure for individual assessment of the hepatic insulin degradation (HID) process from standard frequently-sampled intravenous glucose tolerance test (FSIGTT) data. Direct access to the source code, offered by MATLAB, enabled us to design an optimization procedure based on the alternating use of Gauss-Newton's and Levenberg-Marquardt's algorithms, which ensures full convergence of the process while containing computational time. Reliability was tested by direct comparison with the application, in eighteen non-diabetic subjects, of the well-known kinetic analysis software package SAAM II, and by application to different data. Agreement between MATLAB and SAAM II was supported by intraclass correlation coefficients ≥0.73; no significant differences between corresponding mean parameter estimates and predictions of HID rate; and consistent residual analysis. Moreover, the MATLAB optimization procedure resulted in a significant 51% reduction of CV% for the parameter worst-estimated by SAAM II and maintained all model-parameter CV% <20%. In conclusion, our MATLAB-based procedure is suggested as a suitable tool for the individual assessment of the HID process. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
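The Gauss-Newton/Levenberg-Marquardt machinery the authors alternate between is available off the shelf in other environments too. A sketch fitting a hypothetical mono-exponential kinetic curve (not the actual HID model from the study) with SciPy's Levenberg-Marquardt implementation:

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic noisy data from y(t) = A * exp(-k * t) with A = 2.0, k = 0.5.
t = np.linspace(0.0, 10.0, 40)
rng = np.random.default_rng(1)
y = 2.0 * np.exp(-0.5 * t) + 0.01 * rng.standard_normal(t.size)

def residuals(p):
    """Model-minus-data residuals for parameter vector p = (A, k)."""
    A, k = p
    return A * np.exp(-k * t) - y

fit = least_squares(residuals, x0=[1.0, 1.0], method="lm")  # Levenberg-Marquardt
```

Per-parameter CV% values like those reported can then be derived from the Jacobian at the solution (`fit.jac`), which approximates the parameter covariance.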
Cognitive models of risky choice: parameter stability and predictive accuracy of prospect theory.
Glöckner, Andreas; Pachur, Thorsten
2012-04-01
In the behavioral sciences, a popular approach to describe and predict behavior is cognitive modeling with adjustable parameters (i.e., which can be fitted to data). Modeling with adjustable parameters allows, among other things, measuring differences between people. At the same time, parameter estimation also bears the risk of overfitting. Are individual differences as measured by model parameters stable enough to improve the ability to predict behavior as compared to modeling without adjustable parameters? We examined this issue in cumulative prospect theory (CPT), arguably the most widely used framework to model decisions under risk. Specifically, we examined (a) the temporal stability of CPT's parameters; and (b) how well different implementations of CPT, varying in the number of adjustable parameters, predict individual choice relative to models with no adjustable parameters (such as CPT with fixed parameters, expected value theory, and various heuristics). We presented participants with risky choice problems and fitted CPT to each individual's choices in two separate sessions (which were 1 week apart). All parameters were correlated across time, in particular when using a simple implementation of CPT. CPT allowing for individual variability in parameter values predicted individual choice better than CPT with fixed parameters, expected value theory, and the heuristics. CPT's parameters thus seem to pick up stable individual differences that need to be considered when predicting risky choice. Copyright © 2011 Elsevier B.V. All rights reserved.
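For reference, the CPT components whose parameters are being estimated are the value and probability-weighting functions. A sketch using the conventional Tversky-Kahneman (1992) parameter values rather than any estimates from this study:

```python
def cpt_value(x, alpha=0.88, lam=2.25):
    """Power value function with loss-aversion coefficient lam."""
    return x ** alpha if x >= 0 else -lam * ((-x) ** alpha)

def cpt_weight(p, gamma=0.61):
    """Inverse-S probability weighting: overweights small p, underweights large p."""
    return p ** gamma / (p ** gamma + (1.0 - p) ** gamma) ** (1.0 / gamma)

def simple_gamble_value(x, p):
    """CPT value of a gamble paying x with probability p (and 0 otherwise)."""
    return cpt_weight(p) * cpt_value(x)
```

Fitting CPT to an individual means choosing alpha, lam and gamma (and possibly more parameters) to maximize agreement with that person's observed choices, which is exactly where the stability-versus-overfitting trade-off examined in the article arises.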
Accuracy Assessment of Crown Delineation Methods for the Individual Trees Using LIDAR Data
NASA Astrophysics Data System (ADS)
Chang, K. T.; Lin, C.; Lin, Y. C.; Liu, J. K.
2016-06-01
Forest canopy density and height are used as variables in a number of environmental applications, including the estimation of biomass, forest extent and condition, and biodiversity. Airborne Light Detection and Ranging (LiDAR) is very useful for estimating forest canopy parameters from the generated canopy height models (CHMs). The purpose of this work is to introduce an algorithm to delineate crown parameters, e.g. tree height and crown radii, based on the generated rasterized CHMs. Accuracy assessment for the extraction of volumetric parameters of a single tree is also performed via manual measurement using corresponding aerial photo pairs. A LiDAR dataset of a golf course acquired by Leica ALS70-HP is used in this study. Two algorithms, i.e. a traditional one based on the subtraction of a digital elevation model (DEM) from a digital surface model (DSM), and a pit-free approach, are first used to generate the CHMs. Then two algorithms, a multilevel morphological active-contour (MMAC) and a variable window filter (VWF), are implemented and used in this study for individual tree delineation. Finally, experimental results of the two automatic estimation methods for individual trees are evaluated against manually measured stand-level parameters, i.e. tree height and crown diameter. The CHM generated by simple subtraction is full of empty pixels (called "pits") that strongly affect the subsequent analysis for individual tree delineation. The experimental results indicated that after the pit-free process more individual trees can be extracted and tree crown shapes become more complete in the CHM data.
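The pit problem described above can be shown on a toy grid: direct DSM-DEM subtraction leaves no-return "pits" in the canopy, which a crude neighbourhood filter can patch. This is a simplification for illustration, not the full pit-free algorithm used in the paper:

```python
import numpy as np
from scipy.ndimage import median_filter

def simple_chm(dsm, dem):
    """Canopy height model by direct subtraction (pit-prone)."""
    return np.maximum(dsm - dem, 0.0)

def fill_pits(chm, threshold=2.0):
    """Replace cells that fall more than `threshold` below the 3x3 median."""
    med = median_filter(chm, size=3)
    return np.where(chm < med - threshold, med, chm)
```

Unfilled pits masquerade as gaps between crowns, which is why they degrade individual tree delineation downstream.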
Latorre, Jorge; Llorens, Roberto; Colomer, Carolina; Alcañiz, Mariano
2018-04-27
Different studies have analyzed the potential of the off-the-shelf Microsoft Kinect, in its different versions, to estimate spatiotemporal gait parameters as a portable markerless low-cost alternative to laboratory grade systems. However, variability in populations, measures, and methodologies prevents accurate comparison of the results. The objective of this study was to determine and compare the reliability of the existing Kinect-based methods to estimate spatiotemporal gait parameters in healthy and post-stroke adults. Forty-five healthy individuals and thirty-eight stroke survivors participated in this study. Participants walked five meters at a comfortable speed and their spatiotemporal gait parameters were estimated from the data retrieved by a Kinect v2, using the most common methods in the literature, and by visual inspection of the videotaped performance. Errors between both estimations were computed. For both healthy and post-stroke participants, highest accuracy was obtained when using the speed of the ankles to estimate gait speed (3.6-5.5 cm/s), stride length (2.5-5.5 cm), and stride time (about 45 ms), and when using the distance between the sacrum and the ankles and toes to estimate double support time (about 65 ms) and swing time (60-90 ms). Although the accuracy of these methods is limited, these measures could occasionally complement traditional tools. Copyright © 2018 Elsevier Ltd. All rights reserved.
Horton, G.E.; Letcher, B.H.
2008-01-01
The inability to account for the availability of individuals in the study area during capture-mark-recapture (CMR) studies and the resultant confounding of parameter estimates can make correct interpretation of CMR model parameter estimates difficult. Although important advances based on the Cormack-Jolly-Seber (CJS) model have resulted in estimators of true survival that work by unconfounding either death or recapture probability from availability for capture in the study area, these methods rely on the researcher's ability to select a method that is correctly matched to emigration patterns in the population. If incorrect assumptions regarding site fidelity (non-movement) are made, it may be difficult or impossible as well as costly to change the study design once the incorrect assumption is discovered. Subtleties in characteristics of movement (e.g. life history-dependent emigration, nomads vs territory holders) can lead to mixtures in the probability of being available for capture among members of the same population. The result of these mixtures may be only a partial unconfounding of emigration from other CMR model parameters. Biologically-based differences in individual movement can combine with constraints on study design to further complicate the problem. Because of the intricacies of movement and its interaction with other parameters in CMR models, quantification of and solutions to these problems are needed. Based on our work with stream-dwelling populations of Atlantic salmon Salmo salar, we used a simulation approach to evaluate existing CMR models under various mixtures of movement probabilities. The Barker joint data model provided unbiased estimates of true survival under all conditions tested. The CJS and robust design models provided similarly unbiased estimates of true survival but only when emigration information could be incorporated directly into individual encounter histories. 
For the robust design model, Markovian emigration (future availability for capture depends on an individual's current location) was a difficult emigration pattern to detect unless survival and especially recapture probability were high. Additionally, when local movement was high relative to study area boundaries and movement became more diffuse (e.g. a random walk), local movement and permanent emigration were difficult to distinguish and had consequences for correctly interpreting the survival parameter being estimated (apparent survival vs true survival). © 2008 The Authors.
Beda, Alessandro; Simpson, David M; Faes, Luca
2017-01-01
The growing interest in personalized medicine requires making inferences from descriptive indexes estimated from individual recordings of physiological signals, with statistical analyses focused on individual differences between/within subjects, rather than comparing supposedly homogeneous cohorts. To this end, methods to compute confidence limits of individual estimates of descriptive indexes are needed. This study introduces numerical methods to compute such confidence limits and perform statistical comparisons between indexes derived from autoregressive (AR) modeling of individual time series. Analytical approaches are generally not viable, because the indexes are usually nonlinear functions of the AR parameters. We exploit Monte Carlo (MC) and Bootstrap (BS) methods to reproduce the sampling distribution of the AR parameters and indexes computed from them. Here, these methods are implemented for spectral and information-theoretic indexes of heart-rate variability (HRV) estimated from AR models of heart-period time series. First, the MC and BS methods are tested in a wide range of synthetic HRV time series, showing good agreement with a gold-standard approach (i.e. multiple realizations of the "true" process driving the simulation). Then, real HRV time series measured from volunteers performing cognitive tasks are considered, documenting (i) the strong variability of confidence limits' width across recordings, (ii) the diversity of individual responses to the same task, and (iii) frequent disagreement between the cohort-average response and that of many individuals. We conclude that MC and BS methods are robust in estimating confidence limits of these AR-based indexes and are thus recommended for short-term HRV analysis. Moreover, the strong inter-individual differences in the response to tasks shown by AR-based indexes evidence the need of individual-by-individual assessments of HRV features. 
Given their generality, MC and BS methods are promising for applications in biomedical signal processing and beyond, providing a powerful new tool for assessing the confidence limits of indexes estimated from individual recordings.
PMID:28968394
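The residual-bootstrap idea in the abstract above can be sketched in a few lines. This is a minimal illustration assuming an AR(1) model and a synthetic series, not the authors' implementation; for simplicity the "index" here is just the AR coefficient itself rather than a spectral or information-theoretic HRV index:

```python
import random

def fit_ar1(x):
    # Least-squares estimate of a in x[t] = a * x[t-1] + e[t]
    num = sum(x[t] * x[t - 1] for t in range(1, len(x)))
    den = sum(v * v for v in x[:-1])
    return num / den

def bootstrap_ci(x, n_boot=500, alpha=0.05, seed=1):
    # Residual-bootstrap percentile confidence interval for the AR(1) coefficient
    rng = random.Random(seed)
    a = fit_ar1(x)
    resid = [x[t] - a * x[t - 1] for t in range(1, len(x))]
    boot = []
    for _ in range(n_boot):
        xb = [x[0]]
        for _ in range(1, len(x)):
            xb.append(a * xb[-1] + rng.choice(resid))
        boot.append(fit_ar1(xb))
    boot.sort()
    return a, (boot[int(alpha / 2 * n_boot)], boot[int((1 - alpha / 2) * n_boot) - 1])

# One synthetic "individual recording" from a true AR(1) process with a = 0.7
rng = random.Random(0)
x = [0.0]
for _ in range(400):
    x.append(0.7 * x[-1] + rng.gauss(0, 1))
a_hat, (lo, hi) = bootstrap_ci(x)
print(a_hat, lo, hi)
```

The same resampling loop applies unchanged when the index computed from the refit parameters is a nonlinear function of them, which is the case the paper targets.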
Jonsen, Ian
2016-02-08
State-space models provide a powerful way to scale up inference of movement behaviours from individuals to populations when the inference is made across multiple individuals. Here, I show how a joint estimation approach that assumes individuals share identical movement parameters can lead to improved inference of behavioural states associated with different movement processes. I use simulated movement paths with known behavioural states to compare estimation error between nonhierarchical and joint estimation formulations of an otherwise identical state-space model. Behavioural state estimation error was strongly affected by the degree of similarity between movement patterns characterising the behavioural states, with less error when movements were strongly dissimilar between states. The joint estimation model improved behavioural state estimation relative to the nonhierarchical model for simulated data with heavy-tailed Argos location errors. When applied to Argos telemetry datasets from 10 Weddell seals, the nonhierarchical model estimated highly uncertain behavioural state switching probabilities for most individuals whereas the joint estimation model yielded substantially less uncertainty. The joint estimation model better resolved the behavioural state sequences across all seals. Hierarchical or joint estimation models should be the preferred choice for estimating behavioural states from animal movement data, especially when location data are error-prone.
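A toy numerical illustration of why joint estimation helps: assuming, purely for illustration, that the shared movement parameter is a Gaussian mean common to all individuals, pooling tracks reduces estimation error relative to per-individual estimates. This is a stand-in for the state-space model, not a reimplementation of it:

```python
import random, statistics

rng = random.Random(42)
true_mu = 2.0           # movement parameter assumed shared by all individuals
n_ind, n_obs = 10, 20   # individuals and observations per track

tracks = [[rng.gauss(true_mu, 1.0) for _ in range(n_obs)] for _ in range(n_ind)]

# Non-hierarchical: estimate the parameter separately for each individual
indiv = [statistics.mean(t) for t in tracks]
mse_indiv = statistics.mean((m - true_mu) ** 2 for m in indiv)

# Joint estimation: pool all individuals under the shared-parameter assumption
pooled = statistics.mean(v for t in tracks for v in t)
mse_joint = (pooled - true_mu) ** 2

print(mse_indiv, mse_joint)
```

The pooled estimator uses n_ind times as many observations, so its sampling variance shrinks accordingly; the same mechanism underlies the reduced uncertainty in state-switching probabilities reported for the Weddell seal data.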
Michael J. Firko; Jane Leslie Hayes
1990-01-01
Quantitative genetic studies of resistance can provide estimates of genetic parameters not available with other types of genetic analyses. Three methods are discussed for estimating the amount of additive genetic variation in resistance to individual insecticides and subsequent estimation of heritability (h2) of resistance. Sibling analysis and...
Parameter Estimation for Thurstone Choice Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vojnovic, Milan; Yun, Seyoung
We consider the estimation accuracy of individual strength parameters of a Thurstone choice model when each input observation consists of a choice of one item from a set of two or more items (so called top-1 lists). This model accommodates the well-known choice models such as the Luce choice model for comparison sets of two or more items and the Bradley-Terry model for pair comparisons. We provide a tight characterization of the mean squared error of the maximum likelihood parameter estimator. We also provide similar characterizations for parameter estimators defined by a rank-breaking method, which amounts to deducing one or more pair comparisons from a comparison of two or more items, assuming independence of these pair comparisons, and maximizing a likelihood function derived under these assumptions. We also consider a related binary classification problem where each individual parameter takes value from a set of two possible values and the goal is to correctly classify all items within a prescribed classification error. The results of this paper shed light on how the parameter estimation accuracy depends on the given Thurstone choice model and the structure of comparison sets. In particular, we found that for unbiased input comparison sets, where in expectation each comparison set of a given cardinality occurs the same number of times, for a broad class of Thurstone choice models, the mean squared error decreases with the cardinality of comparison sets, but only marginally according to a diminishing returns relation. On the other hand, we found that there exist Thurstone choice models for which the mean squared error of the maximum likelihood parameter estimator can decrease much faster with the cardinality of comparison sets. We report empirical evaluation of some claims and key parameters revealed by theory using both synthetic and real-world input data from some popular sport competitions and online labor platforms.
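For the pair-comparison special case (the Bradley-Terry model named above), the maximum likelihood strengths can be computed with the standard MM (minorize-maximize) iteration. A minimal sketch on made-up toy data; the item indices and win counts are purely illustrative:

```python
def bradley_terry_mle(n_items, comparisons, iters=500):
    # comparisons: list of (winner, loser) index pairs
    wins = [0.0] * n_items
    n_ij = [[0] * n_items for _ in range(n_items)]  # comparisons between i and j
    for w, l in comparisons:
        wins[w] += 1
        n_ij[w][l] += 1
        n_ij[l][w] += 1
    p = [1.0] * n_items
    for _ in range(iters):
        # MM update: p_i <- W_i / sum_j n_ij / (p_i + p_j)
        q = []
        for i in range(n_items):
            denom = sum(n_ij[i][j] / (p[i] + p[j]) for j in range(n_items) if j != i)
            q.append(wins[i] / denom if denom > 0 else p[i])
        s = sum(q)
        p = [v * n_items / s for v in q]  # normalise to fix the (arbitrary) scale
    return p

# Toy data: item 0 mostly beats 1 and 2; item 1 mostly beats 2
comps = ([(0, 1)] * 6 + [(1, 0)] * 2 + [(0, 2)] * 5 + [(2, 0)] * 1
         + [(1, 2)] * 4 + [(2, 1)] * 2)
p = bradley_terry_mle(3, comps)
print(p)
```

The iteration converges whenever the comparison graph is strongly connected, which holds here because every pair has wins in both directions.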
Eaton, Mitchell J.; Link, William A.
2011-01-01
Estimating the age of individuals in wild populations can be of fundamental importance for answering ecological questions, modeling population demographics, and managing exploited or threatened species. Significant effort has been devoted to determining age through the use of growth annuli, secondary physical characteristics related to age, and growth models. Many species, however, either do not exhibit physical characteristics useful for independent age validation or are too rare to justify sacrificing a large number of individuals to establish the relationship between size and age. Length-at-age models are well represented in the fisheries and other wildlife management literature. Many of these models overlook variation in growth rates of individuals and consider growth parameters as population parameters. More recent models have taken advantage of hierarchical structuring of parameters and Bayesian inference methods to allow for variation among individuals as functions of environmental covariates or individual-specific random effects. Here, we describe hierarchical models in which growth curves vary as individual-specific stochastic processes, and we show how these models can be fit using capture–recapture data for animals of unknown age along with data for animals of known age. We combine these independent data sources in a Bayesian analysis, distinguishing natural variation (among and within individuals) from measurement error. We illustrate using data for African dwarf crocodiles, comparing von Bertalanffy and logistic growth models. The analysis provides the means of predicting crocodile age, given a single measurement of head length. The von Bertalanffy model was much better supported than the logistic growth model and predicted that dwarf crocodiles grow from 19.4 cm total length at birth to 32.9 cm in the first year and 45.3 cm by the end of their second year.
Based on the minimum size of females observed with hatchlings, reproductive maturity was estimated to be at nine years. These size benchmarks are believed to represent thresholds for important demographic parameters; improved estimates of age, therefore, will increase the precision of population projection models. The modeling approach that we present can be applied to other species and offers significant advantages when multiple sources of data are available and traditional aging techniques are not practical.
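The length-at-age prediction step can be illustrated with the deterministic von Bertalanffy curve: once a curve is fitted, a single length measurement maps back to a predicted age by inverting the growth equation. The parameters below are hypothetical, not the paper's fitted values:

```python
import math

def vb_length(t, L_inf, k, t0):
    # von Bertalanffy length-at-age curve: L(t) = L_inf * (1 - exp(-k * (t - t0)))
    return L_inf * (1.0 - math.exp(-k * (t - t0)))

def vb_age(L, L_inf, k, t0):
    # Invert the curve: predicted age for a single length measurement L < L_inf
    return t0 - math.log(1.0 - L / L_inf) / k

# Illustrative parameters (hypothetical, not the posterior estimates in the paper)
L_inf, k, t0 = 120.0, 0.15, -1.2
L5 = vb_length(5.0, L_inf, k, t0)   # length of a 5-year-old under these parameters
age = vb_age(L5, L_inf, k, t0)      # round-trip: recover the age from the length
print(L5, age)
```

In the hierarchical setting of the paper, the inversion is applied with individual-specific parameters, so the predicted age carries uncertainty from both the growth-curve posterior and measurement error.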
Quantifying Transmission Heterogeneity Using Both Pathogen Phylogenies and Incidence Time Series
Li, Lucy M.; Grassly, Nicholas C.; Fraser, Christophe
2017-01-01
Heterogeneity in individual-level transmissibility can be quantified by the dispersion parameter k of the offspring distribution. Quantifying heterogeneity is important as it affects other parameter estimates, it modulates the degree of unpredictability of an epidemic, and it needs to be accounted for in models of infection control. Aggregated data such as incidence time series are often not sufficiently informative to estimate k. Incorporating phylogenetic analysis can help to estimate k concurrently with other epidemiological parameters. We have developed an inference framework that uses particle Markov Chain Monte Carlo to estimate k and other epidemiological parameters using both incidence time series and the pathogen phylogeny. Using the framework to fit a modified compartmental transmission model that includes the parameter k to simulated data, we found that more accurate and less biased estimates of the reproductive number were obtained by combining epidemiological and phylogenetic analyses. However, k was most accurately estimated using pathogen phylogeny alone. Accurately estimating k was necessary for unbiased estimates of the reproductive number, but it did not affect the accuracy of reporting probability and epidemic start date estimates. We further demonstrated that inference was possible in the presence of phylogenetic uncertainty by sampling from the posterior distribution of phylogenies. Finally, we used the inference framework to estimate transmission parameters from epidemiological and genetic data collected during a poliovirus outbreak. Despite the large degree of phylogenetic uncertainty, we demonstrated that incorporating phylogenetic data in parameter inference improved the accuracy and precision of estimates. PMID:28981709
Motor unit size estimation: confrontation of surface EMG with macro EMG.
Roeleveld, K; Stegeman, D F; Falck, B; Stålberg, E V
1997-06-01
Surface EMG (SEMG) is little used for diagnostic purposes in clinical neurophysiology, mainly because it provides little direct information on individual motor units (MUs). One of the techniques to estimate the MU size is intra-muscular Macro EMG. The present study compares SEMG with Macro EMG. Fifty-eight channel SEMG was recorded simultaneously with Macro EMG. Individual MUPs were obtained by single fiber triggered averaging. All recordings were made from the biceps brachii of healthy subjects during voluntary contraction at low force. High positive correlations were found between all Macro and Surface motor unit potential (MUP) parameters: area, peak-to-peak amplitude, negative peak amplitude and positive peak amplitude. The MUPs recorded with SEMG were dependent on the distance between the MU and the skin surface. Normalizing the SEMG parameters for MU location did not improve the correlation coefficient between the parameters of both techniques. The two measurement techniques had almost the same relative range in MUP parameters in any individual subject compared to the others, especially after normalizing the surface MUP parameters for MU location. MUPs recorded with this type of SEMG provide useful information about the MU size.
Nagelkerke, Nico; Fidler, Vaclav
2015-01-01
The problem of discrimination and classification is central to much of epidemiology. Here we consider the estimation of a logistic regression/discrimination function from training samples, when one of the training samples is subject to misclassification or mislabeling, e.g. diseased individuals are incorrectly classified/labeled as healthy controls. We show that this leads to a zero-inflated binomial model with a defective logistic regression or discrimination function, whose parameters can be estimated using standard statistical methods such as maximum likelihood. These parameters can be used to estimate the probability of true group membership among those, possibly erroneously, classified as controls. Two examples are analyzed and discussed. A simulation study explores properties of the maximum likelihood parameter estimates and the estimates of the number of mislabeled observations.
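The key quantity above, the probability of true group membership among individuals labeled as controls, follows directly from the mislabeling model. A sketch with illustrative parameters (a, b, and the mislabeling rate m are assumptions, not values from the paper), checking the closed-form probability against simulation:

```python
import math, random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def p_true_case_given_control(x, a, b, m):
    # P(truly a case | labeled control) when each true case is mislabeled
    # as a control with probability m, and P(case | x) = sigmoid(a + b*x)
    p = sigmoid(a + b * x)
    return m * p / (1.0 - (1.0 - m) * p)

# Check the formula by simulation at a single covariate value
a, b, m, x = -1.0, 2.0, 0.3, 0.5
rng = random.Random(0)
n = n_case = 0
for _ in range(200000):
    is_case = rng.random() < sigmoid(a + b * x)
    labeled_control = (not is_case) or (rng.random() < m)
    if labeled_control:
        n += 1
        n_case += is_case
print(n_case / n, p_true_case_given_control(x, a, b, m))
```

With sigmoid(a + b*x) = 0.5 and m = 0.3, the formula gives 0.15/0.65 ≈ 0.231, and the simulated fraction of true cases among labeled controls agrees closely.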
Stochastic Individual-Based Modeling of Bacterial Growth and Division Using Flow Cytometry.
García, Míriam R; Vázquez, José A; Teixeira, Isabel G; Alonso, Antonio A
2017-01-01
A realistic description of the variability in bacterial growth and division is critical to produce reliable predictions of safety risks along the food chain. Individual-based modeling of bacteria provides the theoretical framework to deal with this variability, but it requires information about the individual behavior of bacteria inside populations. In this work, we overcome this problem by estimating the individual behavior of bacteria from population statistics obtained with flow cytometry. For this objective, a stochastic individual-based modeling framework is defined based on standard assumptions during division and exponential growth. The unknown single-cell parameters required for running the individual-based modeling simulations, such as cell size growth rate, are estimated from the flow cytometry data. Instead of directly using the individual-based model, we make use of a modified Fokker-Planck equation. This single equation simulates the population statistics as a function of the unknown single-cell parameters. We test the validity of the approach by modeling the growth and division of Pediococcus acidilactici within the exponential phase. Estimations reveal the statistics of cell growth and division using only data from flow cytometry at a given time. From the relationship between the mother and daughter volumes, we also predict that P. acidilactici divides along two successive parallel planes.
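A minimal individual-based sketch of the growth-and-division assumptions described above: cells grow exponentially in volume and split into two daughters at a critical volume with a noisy division fraction. All parameters are illustrative, and this toy loop stands in for the paper's Fokker-Planck treatment:

```python
import random

def simulate(n0=50, t_end=3.0, dt=0.01, mu=1.0, v_div=2.0, seed=1):
    # Cells grow exponentially in volume at rate mu (Euler step v <- v*(1+mu*dt))
    # and divide on crossing v_div into two daughters with a noisy split fraction.
    rng = random.Random(seed)
    cells = [rng.uniform(1.0, 2.0) for _ in range(n0)]
    t = 0.0
    while t < t_end:
        grown = [v * (1.0 + mu * dt) for v in cells]
        cells = []
        for v in grown:
            if v >= v_div:
                f = min(max(rng.gauss(0.5, 0.05), 0.3), 0.7)  # division fraction
                cells.extend([f * v, (1.0 - f) * v])
            else:
                cells.append(v)
        t += dt
    return cells

cells = simulate()
print(len(cells), sum(cells) / len(cells))  # population size and mean volume
```

Repeated runs of such a simulator yield the population statistics (size distribution, mother-daughter volume relation) that the estimation procedure matches against flow cytometry data.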
Using genetic data to estimate diffusion rates in heterogeneous landscapes.
Roques, L; Walker, E; Franck, P; Soubeyrand, S; Klein, E K
2016-08-01
Having precise knowledge of the dispersal ability of a population in a heterogeneous environment is of critical importance in agroecology and conservation biology as it can provide management tools to limit the effects of pests or to increase the survival of endangered species. In this paper, we propose a mechanistic-statistical method to estimate space-dependent diffusion parameters of spatially-explicit models based on stochastic differential equations, using genetic data. Dividing the total population into subpopulations corresponding to different habitat patches with known allele frequencies, the expected proportions of individuals from each subpopulation at each position are computed by solving a system of reaction-diffusion equations. Modelling the capture and genotyping of the individuals with a statistical approach, we derive a numerically tractable formula for the likelihood function associated with the diffusion parameters. In a simulated environment made of three types of regions, each associated with a different diffusion coefficient, we successfully estimate the diffusion parameters with a maximum-likelihood approach. Although higher genetic differentiation among subpopulations leads to more accurate estimations, once a certain level of differentiation has been reached, the finite size of the genotyped population becomes the limiting factor for accurate estimation.
Lopes, Antonio Augusto; dos Anjos Miranda, Rogério; Gonçalves, Rilvani Cavalcante; Thomaz, Ana Maria
2009-01-01
BACKGROUND: In patients with congenital heart disease undergoing cardiac catheterization for hemodynamic purposes, parameter estimation by the indirect Fick method using a single predicted value of oxygen consumption has been a matter of criticism. OBJECTIVE: We developed a computer-based routine for rapid estimation of replicate hemodynamic parameters using multiple predicted values of oxygen consumption. MATERIALS AND METHODS: Using Microsoft® Excel facilities, we constructed a matrix containing 5 models (equations) for prediction of oxygen consumption, and all additional formulas needed to obtain replicate estimates of hemodynamic parameters. RESULTS: By entering data from 65 patients with ventricular septal defects, aged 1 month to 8 years, it was possible to obtain multiple predictions for oxygen consumption, with clear differences between age groups (P < .001) and between methods (P < .001). Using these predictions in the individual patient, it was possible to obtain the upper and lower limits of a likely range for any given parameter, which made estimation more realistic. CONCLUSION: The organized matrix allows replicate parameter estimates to be obtained rapidly, without the errors associated with exhaustive manual calculation. PMID:19641642
Ackleh, A.S.; Carter, J.; Deng, K.; Huang, Q.; Pal, N.; Yang, X.
2012-01-01
We derive point and interval estimates for an urban population of green tree frogs (Hyla cinerea) from capture-mark-recapture field data obtained during the years 2006-2009. We present an infinite-dimensional least-squares approach which compares a mathematical population model to the statistical population estimates obtained from the field data. The model is composed of nonlinear first-order hyperbolic equations describing the dynamics of the amphibian population where individuals are divided into juveniles (tadpoles) and adults (frogs). To solve the least-squares problem, an explicit finite difference approximation is developed. Convergence results for the computed parameters are presented. Parameter estimates for the vital rates of juveniles and adults are obtained, and standard deviations for these estimates are computed. Numerical results for the model sensitivity with respect to these parameters are given. Finally, the above-mentioned parameter estimates are used to illustrate the long-time behavior of the population under investigation. © 2011 Society for Mathematical Biology.
Genetic parameter estimation for long endurance trials in the Uruguayan Criollo horse.
López-Correa, R D; Peñagaricano, F; Rovere, G; Urioste, J I
2018-06-01
The aim of this study was to estimate the genetic parameters of performance in a 750-km, 15-day ride in Criollo horses. Heritability (h²) and maternal lineage effects (mt²) were obtained for rank, a relative placing measure of performance. Additive genetic and maternal lineage (rmt) correlations among five medium-to-high intensity phase ranks (pRK) and final rank (RK) were also estimated. Individual records from 1,236 Criollo horses from 1979 to 2012 were used. A multivariate threshold animal model was applied to the pRK and RK. Heritability was moderate to low (0.156-0.275). Estimates of mt² were consistently low (0.04-0.06). Additive genetic correlations between individual pRK and RK were high (0.801-0.924), and the genetic correlations between individual pRKs ranged from 0.763 to 0.847. The pRK heritabilities revealed that some phases were explained by a greater additive component, whereas others showed stronger genetic relationships with RK. Thus, not all pRK may be considered as similar measures of performance in competition. © 2018 Blackwell Verlag GmbH.
Riley, Richard D; Ensor, Joie; Jackson, Dan; Burke, Danielle L
2017-01-01
Many meta-analysis models contain multiple parameters, for example due to multiple outcomes, multiple treatments or multiple regression coefficients. In particular, meta-regression models may contain multiple study-level covariates, and one-stage individual participant data meta-analysis models may contain multiple patient-level covariates and interactions. Here, we propose how to derive percentage study weights for such situations, in order to reveal the (otherwise hidden) contribution of each study toward the parameter estimates of interest. We assume that studies are independent, and utilise a decomposition of Fisher's information matrix to decompose the total variance matrix of parameter estimates into study-specific contributions, from which percentage weights are derived. This approach generalises how percentage weights are calculated in a traditional, single parameter meta-analysis model. Application is made to one- and two-stage individual participant data meta-analyses, meta-regression and network (multivariate) meta-analysis of multiple treatments. These reveal percentage study weights toward clinically important estimates, such as summary treatment effects and treatment-covariate interactions, and are especially useful when some studies are potential outliers or at high risk of bias. We also derive percentage study weights toward methodologically interesting measures, such as the magnitude of ecological bias (difference between within-study and across-study associations) and the amount of inconsistency (difference between direct and indirect evidence in a network meta-analysis).
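In the traditional single-parameter case that this approach generalizes, the Fisher-information decomposition reduces to the familiar inverse-variance percentage weights. A minimal sketch for a fixed-effect meta-analysis with illustrative study variances:

```python
def percent_weights(variances):
    # Each independent study contributes 1/var_i to the total Fisher information,
    # so its percentage weight is (1/var_i) / sum_j (1/var_j) * 100.
    info = [1.0 / v for v in variances]
    total = sum(info)
    return [100.0 * i / total for i in info]

# Three hypothetical studies with increasing variance of the effect estimate
w = percent_weights([0.04, 0.10, 0.25])
print(w)
```

The multi-parameter version in the paper replaces the scalars 1/var_i with study-specific information matrices, but the logic — decompose the total information into per-study contributions, then normalize — is the same.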
Bansal, Ravi; Staib, Lawrence H.; Laine, Andrew F.; Xu, Dongrong; Liu, Jun; Posecion, Lainie F.; Peterson, Bradley S.
2010-01-01
Images from different individuals typically cannot be registered precisely because anatomical features within the images differ across the people imaged and because the current methods for image registration have inherent technological limitations that interfere with perfect registration. Quantifying the inevitable error in image registration is therefore of crucial importance in assessing the effects that image misregistration may have on subsequent analyses in an imaging study. We have developed a mathematical framework for quantifying errors in registration by computing the confidence intervals of the estimated parameters (3 translations, 3 rotations, and 1 global scale) for the similarity transformation. The presence of noise in images and the variability in anatomy across individuals ensures that estimated registration parameters are always random variables. We assume a functional relation among intensities across voxels in the images, and we use the theory of nonlinear, least-squares estimation to show that the parameters are multivariate Gaussian distributed. We then use the covariance matrix of this distribution to compute the confidence intervals of the transformation parameters. These confidence intervals provide a quantitative assessment of the registration error across the images. Because transformation parameters are nonlinearly related to the coordinates of landmark points in the brain, we subsequently show that the coordinates of those landmark points are also multivariate Gaussian distributed. Using these distributions, we then compute the confidence intervals of the coordinates for landmark points in the image. Each of these confidence intervals in turn provides a quantitative assessment of the registration error at a particular landmark point. Because our method is computationally intensive, however, its current implementation is limited to assessing the error of the parameters in the similarity transformation across images. 
We assessed the performance of our method in computing the error in estimated similarity parameters by applying that method to a real-world dataset. Our results showed that the size of the confidence intervals computed using our method decreased (i.e., our confidence in the registration of images from different individuals increased) for increasing amounts of blur in the images. Moreover, the size of the confidence intervals increased for increasing amounts of noise, misregistration, and differing anatomy. Thus, our method precisely quantified confidence in the registration of images that contain varying amounts of misregistration and varying anatomy across individuals. PMID:19138877
Clare, John; McKinney, Shawn T; DePue, John E; Loftin, Cynthia S
2017-10-01
It is common to use multiple field sampling methods when implementing wildlife surveys to compare method efficacy or cost efficiency, integrate distinct pieces of information provided by separate methods, or evaluate method-specific biases and misclassification error. Existing models that combine information from multiple field methods or sampling devices permit rigorous comparison of method-specific detection parameters, enable estimation of additional parameters such as false-positive detection probability, and improve occurrence or abundance estimates, but with the assumption that the separate sampling methods produce detections independently of one another. This assumption is tenuous if methods are paired or deployed in close proximity simultaneously, a common practice that reduces the additional effort required to implement multiple methods and reduces the risk that differences between method-specific detection parameters are confounded by other environmental factors. We develop occupancy and spatial capture-recapture models that permit covariance between the detections produced by different methods, use simulation to compare estimator performance of the new models to models assuming independence, and provide an empirical application based on American marten (Martes americana) surveys using paired remote cameras, hair catches, and snow tracking. Simulation results indicate existing models that assume that methods independently detect organisms produce biased parameter estimates and substantially understate estimate uncertainty when this assumption is violated, while our reformulated models are robust to either methodological independence or covariance. Empirical results suggested that remote cameras and snow tracking had comparable probability of detecting present martens, but that snow tracking also produced false-positive marten detections that could potentially substantially bias distribution estimates if not corrected for. 
Remote cameras detected marten individuals more readily than passive hair catches. Inability to photographically distinguish individual sex did not appear to induce negative bias in camera density estimates; instead, hair catches appeared to produce detection competition between individuals that may have been a source of negative bias. Our model reformulations broaden the range of circumstances in which analyses incorporating multiple sources of information can be robustly used, and our empirical results demonstrate that using multiple field-methods can enhance inferences regarding ecological parameters of interest and improve understanding of how reliably survey methods sample these parameters. © 2017 by the Ecological Society of America.
A framework for scalable parameter estimation of gene circuit models using structural information.
Kuwahara, Hiroyuki; Fan, Ming; Wang, Suojin; Gao, Xin
2013-07-01
Systematic and scalable parameter estimation is a key to construct complex gene regulatory models and to ultimately facilitate an integrative systems biology approach to quantitatively understand the molecular mechanisms underpinning gene regulation. Here, we report a novel framework for efficient and scalable parameter estimation that focuses specifically on modeling of gene circuits. Exploiting the structure commonly found in gene circuit models, this framework decomposes a system of coupled rate equations into individual ones and efficiently integrates them separately to reconstruct the mean time evolution of the gene products. The accuracy of the parameter estimates is refined by iteratively increasing the accuracy of numerical integration using the model structure. As a case study, we applied our framework to four gene circuit models with complex dynamics based on three synthetic datasets and one time series microarray data set. We compared our framework to three state-of-the-art parameter estimation methods and found that our approach consistently generated higher quality parameter solutions efficiently. Although many general-purpose parameter estimation methods have been applied for modeling of gene circuits, our results suggest that more tailored approaches that exploit domain-specific information may be key to reverse engineering of complex biological systems. The software is available at http://sfb.kaust.edu.sa/Pages/Software.aspx. Supplementary data are available at Bioinformatics online.
Parameter estimation and prediction for the course of a single epidemic outbreak of a plant disease.
Kleczkowski, A; Gilligan, C A
2007-10-22
Many epidemics of plant diseases are characterized by large variability among individual outbreaks. However, individual epidemics often follow a well-defined trajectory which is much more predictable in the short term than the ensemble (collection) of potential epidemics. In this paper, we introduce a modelling framework that allows us to deal with individual replicated outbreaks, based upon a Bayesian hierarchical analysis. Information about 'similar' replicate epidemics can be incorporated into a hierarchical model, allowing both ensemble and individual parameters to be estimated. The model is used to analyse the data from a replicated experiment involving spread of Rhizoctonia solani on radish in the presence or absence of a biocontrol agent, Trichoderma viride. The rate of primary (soil-to-plant) infection is found to be the most variable factor determining the final size of epidemics. Breakdown of biological control in some replicates results in high levels of primary infection and increased variability. The model can be used to predict new outbreaks of disease based upon knowledge from a 'library' of previous epidemics and partial information about the current outbreak. We show that forecasting improves significantly with knowledge about the history of a particular epidemic, whereas the precision of hindcasting to identify the past course of the epidemic is largely independent of detailed knowledge of the epidemic trajectory. The results have important consequences for parameter estimation, inference and prediction for emerging epidemic outbreaks.
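The hierarchical idea above, individual replicate epidemics borrowing strength from the ensemble, can be caricatured with a simple partial-pooling (empirical-Bayes style) shrinkage step. This is a toy stand-in for the full Bayesian hierarchical analysis; the estimates and variance components are illustrative:

```python
def shrink(estimates, within_var, between_var):
    # Partial pooling: each replicate's estimate is pulled toward the ensemble
    # mean, more strongly when within-replicate noise (within_var) dominates
    # between-replicate variability (between_var).
    grand = sum(estimates) / len(estimates)
    w = between_var / (between_var + within_var)
    return [grand + w * (e - grand) for e in estimates]

# Hypothetical per-replicate primary infection rates; the outlier (3.0) might
# correspond to a replicate where biological control broke down
pooled = shrink([0.8, 1.2, 3.0], within_var=0.5, between_var=0.25)
print(pooled)
```

Each shrunken value lies between the replicate's own estimate and the ensemble mean, which is the mechanism by which knowledge of "similar" past epidemics sharpens forecasts for a partially observed outbreak.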
Multiparameter Estimation in Networked Quantum Sensors
NASA Astrophysics Data System (ADS)
Proctor, Timothy J.; Knott, Paul A.; Dunningham, Jacob A.
2018-02-01
We introduce a general model for a network of quantum sensors, and we use this model to consider the following question: When can entanglement between the sensors, and/or global measurements, enhance the precision with which the network can measure a set of unknown parameters? We rigorously answer this question by presenting precise theorems proving that for a broad class of problems there is, at most, a very limited intrinsic advantage to using entangled states or global measurements. Moreover, for many estimation problems separable states and local measurements are optimal, and can achieve the ultimate quantum limit on the estimation uncertainty. This immediately implies that there are broad conditions under which simultaneous estimation of multiple parameters cannot outperform individual, independent estimations. Our results apply to any situation in which spatially localized sensors are unitarily encoded with independent parameters, such as when estimating multiple linear or nonlinear optical phase shifts in quantum imaging, or when mapping out the spatial profile of an unknown magnetic field. We conclude by showing that entangling the sensors can enhance the estimation precision when the parameters of interest are global properties of the entire network.
NASA Astrophysics Data System (ADS)
Sykes, J. F.; Kang, M.; Thomson, N. R.
2007-12-01
The TCE release from The Lockformer Company in Lisle, Illinois resulted in a plume in a confined aquifer that is more than 4 km long and impacted more than 300 residential wells. Many of the wells are on the fringe of the plume and have concentrations that did not exceed 5 ppb. The settlement for the Chapter 11 bankruptcy protection of Lockformer involved the establishment of a trust fund that compensates individuals with cancers, with payments based on cancer type, estimated TCE concentration in the well, and the duration of exposure to TCE. The estimation of early arrival times, and hence low-likelihood events, is critical in determining the eligibility of an individual for compensation. Thus, an emphasis must be placed on the accuracy of the leading tail region in the likelihood distribution of possible arrival times at a well. The estimation of TCE arrival time, using a three-dimensional analytical solution, involved parameter estimation and uncertainty analysis. Parameters in the model included TCE source parameters, groundwater velocities, dispersivities and the TCE decay coefficient for both the confining layer and the bedrock aquifer. Numerous objective functions, including the well-known L2-estimator, robust estimators (L1-estimators and M-estimators), penalty functions, and dead zones, were incorporated in the parameter estimation process to treat insufficiencies in both the model and the observational data due to errors, biases, and limitations. The concept of equifinality was adopted, and multiple maximum likelihood parameter sets were accepted if predefined physical criteria were met. The criteria ensured that a valid solution predicted TCE concentrations for all TCE-impacted areas. Monte Carlo sampling was found to be inadequate for the uncertainty analysis of this case study because of its inability to find parameter sets that met the predefined physical criteria.
Successful results are achieved using a Dynamically-Dimensioned Search sampling methodology that inherently accounts for parameter correlations and does not require assumptions regarding parameter distributions. For uncertainty analysis, multiple parameter sets were obtained using a modified Cauchy's M-estimator. Penalty functions had to be incorporated into the objective function definitions to generate a sufficient number of acceptable parameter sets. The combined effect of optimization and the application of the physical criteria perform the function of behavioral thresholds by reducing anomalies and by removing parameter sets with high objective function values. The factors that are important to the creation of an uncertainty envelope for TCE arrival at wells are outlined in the work. In general, greater uncertainty appears to be present at the tails of the distribution. For a refinement of the uncertainty envelopes, the application of additional physical criteria or behavioral thresholds is recommended.
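The Dynamically-Dimensioned Search strategy mentioned above can be sketched compactly. The following is a minimal illustration in the spirit of Tolson and Shoemaker's published algorithm: a shrinking random subset of dimensions is perturbed each iteration and improvements are kept greedily. A toy quadratic objective stands in for the TCE model's calibration misfit, and the bound handling is simplified to clipping:

```python
import numpy as np

def dds(obj, lo, hi, m=300, r=0.2, seed=0):
    """Dynamically Dimensioned Search sketch: perturb a shrinking random
    subset of dimensions, greedily keep improvements. No assumption about
    parameter distributions is required."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    x = rng.uniform(lo, hi)
    fx = obj(x)
    for i in range(1, m + 1):
        p = 1.0 - np.log(i) / np.log(m)        # selection probability decays
        mask = rng.random(x.size) < p
        if not mask.any():
            mask[rng.integers(x.size)] = True  # always perturb at least one
        cand = x.copy()
        cand[mask] += r * (hi - lo)[mask] * rng.standard_normal(mask.sum())
        cand = np.clip(cand, lo, hi)           # simplified bound handling
        fc = obj(cand)
        if fc <= fx:
            x, fx = cand, fc                   # greedy acceptance
    return x, fx

# toy misfit with minimum at (1, 1), standing in for the calibration problem
best_x, best_f = dds(lambda v: float(np.sum((v - 1.0) ** 2)), [-5, -5], [5, 5])
```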
Zijlstra, Agnes; Zijlstra, Wiebren
2013-09-01
Inverted pendulum (IP) models of human walking allow for wearable motion-sensor based estimations of spatio-temporal gait parameters during unconstrained walking in daily-life conditions. At present it is unclear to what extent different IP based estimations yield different results, and reliability and validity have not been investigated in older persons without a specific medical condition. The aim of this study was to compare reliability and validity of four different IP based estimations of mean step length in independent-living older persons. Participants were assessed twice and walked at different speeds while wearing a tri-axial accelerometer at the lower back. For all step-length estimators, test-retest intra-class correlations approached or were above 0.90. Intra-class correlations with reference step length were above 0.92 with a mean error of 0.0 cm when (1) multiplying the estimated center-of-mass displacement during a step by an individual correction factor in a simple IP model, or (2) adding an individual constant for bipedal stance displacement to the estimated displacement during single stance in a 2-phase IP model. When applying generic corrections or constants in all subjects (i.e. multiplication by 1.25, or adding 75% of foot length), correlations were above 0.75 with a mean error of respectively 2.0 and 1.2 cm. Although the results indicate that an individual adjustment of the IP models provides better estimations of mean step length, the ease of a generic adjustment can be favored when merely evaluating intra-individual differences. Further studies should determine the validity of these IP based estimations for assessing gait in daily life. Copyright © 2013 Elsevier B.V. All rights reserved.
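The simple IP estimator discussed above can be sketched as follows. A common form computes step length from leg length l and the vertical center-of-mass excursion h via the chord 2*sqrt(2*l*h - h^2), then applies a correction factor; the generic multiplier 1.25 is taken from the abstract, while the specific input values in the example are assumed for illustration:

```python
import math

def step_length_ip(com_drop_h, leg_length_l, correction=1.25):
    """Simple inverted-pendulum step-length estimate: the chord length
    2*sqrt(2*l*h - h^2) scaled by a correction factor. An individually
    calibrated factor is preferred per the study's findings; 1.25 is the
    generic multiplier."""
    chord = 2.0 * math.sqrt(2.0 * leg_length_l * com_drop_h - com_drop_h ** 2)
    return correction * chord

# e.g. a 3 cm vertical CoM excursion with a 0.90 m leg length (assumed values)
est = step_length_ip(0.03, 0.90)  # about 0.58 m
```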
Wagner, Brian J.; Gorelick, Steven M.
1986-01-01
A simulation nonlinear multiple-regression methodology for estimating parameters that characterize the transport of contaminants is developed and demonstrated. Finite difference contaminant transport simulation is combined with a nonlinear weighted least squares multiple-regression procedure. The technique provides optimal parameter estimates and gives statistics for assessing the reliability of these estimates under certain general assumptions about the distributions of the random measurement errors. Monte Carlo analysis is used to estimate parameter reliability for a hypothetical homogeneous soil column for which concentration data contain large random measurement errors. The value of data collected spatially versus data collected temporally was investigated for estimation of velocity, dispersion coefficient, effective porosity, first-order decay rate, and zero-order production. The use of spatial data gave estimates that were 2–3 times more reliable than estimates based on temporal data for all parameters except velocity. Comparison of estimated linear and nonlinear confidence intervals based upon Monte Carlo analysis showed that the linear approximation is poor for the dispersion coefficient and the zero-order production coefficient when data are collected over time. In addition, examples demonstrate transport parameter estimation for two real one-dimensional systems. First, the longitudinal dispersivity and effective porosity of an unsaturated soil are estimated using laboratory column data. We compare the reliability of estimates based upon data from individual laboratory experiments versus estimates based upon pooled data from several experiments. Second, the simulation nonlinear regression procedure is extended to include an additional governing equation that describes delayed storage during contaminant transport. The model is applied to analyze the trends, variability, and interrelationship of parameters in a mountain stream in northern California.
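As a minimal stand-in for this kind of simulation-regression procedure, the sketch below fits a velocity and dispersion coefficient to noisy synthetic spatial data from a one-dimensional analytical pulse solution. The brute-force grid search replaces the paper's weighted least-squares machinery, and all numerical values are assumed:

```python
import numpy as np

def pulse_ade(x, t, v, D):
    # 1-D advection-dispersion solution for an instantaneous unit pulse
    return np.exp(-(x - v * t) ** 2 / (4.0 * D * t)) / np.sqrt(4.0 * np.pi * D * t)

rng = np.random.default_rng(0)
x = np.linspace(0.0, 30.0, 60)
obs = pulse_ade(x, 10.0, 1.5, 0.8) + rng.normal(0.0, 0.002, x.size)

# brute-force least squares over a (v, D) grid -- a simple stand-in for
# the nonlinear multiple-regression procedure
vs, Ds = np.linspace(0.5, 2.5, 81), np.linspace(0.2, 2.0, 73)
sse = [[np.sum((obs - pulse_ade(x, 10.0, v, D)) ** 2) for D in Ds] for v in vs]
iv, iD = np.unravel_index(np.argmin(sse), (len(vs), len(Ds)))
v_hat, D_hat = vs[iv], Ds[iD]
```

Because the synthetic data are spatial (concentration versus distance at one time), the fit recovers both parameters well, echoing the paper's finding that spatial data constrain transport parameters more tightly than temporal data.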
Reducing bias in survival under non-random temporary emigration
Peñaloza, Claudia L.; Kendall, William L.; Langtimm, Catherine Ann
2014-01-01
Despite intensive monitoring, temporary emigration from the sampling area can induce bias severe enough for managers to discard life-history parameter estimates toward the terminus of the time series (terminal bias). Under random temporary emigration, unbiased parameters can be estimated with CJS models. However, unmodeled Markovian temporary emigration causes bias in parameter estimates, and an unobservable state is required to model this type of emigration. The robust design is most flexible when modeling temporary emigration, and partial solutions to mitigate bias have been identified; nonetheless, there are conditions where terminal bias prevails. Long-lived species with high adult survival and highly variable non-random temporary emigration present terminal bias in survival estimates, despite being modeled with the robust design and suggested constraints. Because this bias is due to uncertainty about the fate of individuals that are undetected toward the end of the time series, solutions should involve using additional information on the survival status or location of these individuals at that time. Using simulation, we evaluated the performance of models that jointly analyze robust design data and an additional source of ancillary data (predictive covariate on temporary emigration, telemetry, dead recovery, or auxiliary resightings) in reducing terminal bias in survival estimates. The auxiliary resighting and predictive covariate models reduced terminal bias the most. Additional telemetry data were effective at reducing terminal bias only when individuals were tracked for a minimum of two years. The high adult survival of long-lived species made the joint model with recovery data ineffective at reducing terminal bias because of small-sample bias. The naïve constraint model (last and penultimate temporary emigration parameters made equal) was the least efficient, though still able to reduce terminal bias when compared to an unconstrained model.
Joint analysis of several sources of data improved parameter estimates and reduced terminal bias. Efforts to incorporate or acquire such data should be considered by researchers and wildlife managers, especially in the years leading up to status assessments of species of interest. Simulation modeling is a very cost effective method to explore the potential impacts of using different sources of data to produce high quality demographic data to inform management.
Performance Assessment Uncertainty Analysis for Japan's HLW Program Feasibility Study (H12)
DOE Office of Scientific and Technical Information (OSTI.GOV)
BABA,T.; ISHIGURO,K.; ISHIHARA,Y.
1999-08-30
Most HLW programs in the world recognize that any estimate of long-term radiological performance must be couched in terms of the uncertainties derived from natural variation, changes through time, and lack of knowledge about the essential processes. The Japan Nuclear Cycle Development Institute followed a relatively standard procedure to address two major categories of uncertainty. First, a Features, Events and Processes (FEPs) listing, screening and grouping activity was pursued in order to define the range of uncertainty in system processes as well as possible variations in engineering design. A reference case and many alternative cases representing various groups of FEPs were defined, and individual numerical simulations were performed for each to quantify the range of conceptual uncertainty. Second, parameter distributions were developed for the reference case to represent the uncertainty in the strength of these processes, the sequencing of activities, and geometric variations. Both point estimates using high and low values for individual parameters and a probabilistic analysis were performed to estimate parameter uncertainty. A brief description of the conceptual model uncertainty analysis is presented. This paper focuses on presenting the details of the probabilistic parameter uncertainty assessment.
Thompson, Robert S.; Anderson, Katherine H.; Pelltier, Richard T.; Strickland, Laura E.; Shafer, Sarah L.; Bartlein, Patrick J.
2012-01-01
Vegetation inventories (plant taxa present in a vegetation assemblage at a given site) can be used to estimate climatic parameters based on the identification of the range of a given parameter where all taxa in an assemblage overlap ("Mutual Climatic Range"). For the reconstruction of past climates from fossil or subfossil plant assemblages, we assembled the data necessary for such analyses for 530 woody plant taxa and eight climatic parameters in North America. Here we present examples of how these data can be used to obtain paleoclimatic estimates from botanical data in a straightforward, simple, and robust fashion. We also include matrices of climate parameter versus occurrence or nonoccurrence of the individual taxa. These relations are depicted graphically as histograms of the population distributions of the occurrences of a given taxon plotted against a given climatic parameter. This provides a new method for quantification of paleoclimatic parameters from fossil plant assemblages.
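The Mutual Climatic Range calculation reduces to intersecting taxon tolerance intervals for a given climatic parameter. A minimal sketch, with hypothetical tolerance ranges:

```python
def mutual_climatic_range(taxon_ranges):
    """Mutual Climatic Range sketch: intersect the (min, max) tolerance
    interval of each taxon in the assemblage for one climatic parameter.
    Returns None if the taxa do not mutually overlap."""
    lo = max(r[0] for r in taxon_ranges)
    hi = min(r[1] for r in taxon_ranges)
    return (lo, hi) if lo <= hi else None

# hypothetical January-temperature tolerances (degrees C) for three taxa
assemblage = [(-15.0, 5.0), (-10.0, 12.0), (-20.0, 2.0)]
mcr = mutual_climatic_range(assemblage)  # -> (-10.0, 2.0)
```

The paleoclimatic estimate for the fossil assemblage is then any value (often the midpoint) within the returned interval.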
USDA-ARS?s Scientific Manuscript database
To evaluate a newer indirect calorimetry system for quantifying energetic parameters, 8 cross-bred beef steers (initial BW = 241 ± 4.10 kg) were used in a 77-d experiment to examine energetic parameters calculated from carbon dioxide (CO2), methane (CH4), and oxygen (O2) fluxes. Steers were individually ...
Boer, H M T; Butler, S T; Stötzel, C; Te Pas, M F W; Veerkamp, R F; Woelders, H
2017-11-01
A recently developed mechanistic mathematical model of the bovine estrous cycle was parameterized to fit empirical data sets collected during one estrous cycle of 31 individual cows, with the main objective to further validate the model. The a priori criteria for validation were (1) the resulting model can simulate the measured data correctly (i.e. goodness of fit), and (2) this is achieved without needing extreme, probably non-physiological parameter values. We used a least squares optimization procedure to identify parameter configurations for the mathematical model to fit the empirical in vivo measurements of follicle and corpus luteum sizes, and the plasma concentrations of progesterone, estradiol, FSH and LH for each cow. The model was capable of accommodating normal variation in estrous cycle characteristics of individual cows. With the parameter sets estimated for the individual cows, the model behavior changed for 21 cows, with improved fit of the simulated output curves for 18 of these 21 cows. Moreover, the number of follicular waves was predicted correctly for 18 of the 25 two-wave and three-wave cows, without extreme parameter value changes. Estimation of specific parameters confirmed results of previous model simulations indicating that parameters involved in luteolytic signaling are very important for regulation of general estrous cycle characteristics, and are likely responsible for differences in estrous cycle characteristics between cows.
Conn, P.B.; Kendall, W.L.; Samuel, M.D.
2004-01-01
Estimates of waterfowl demographic parameters often come from resighting studies where birds fit with individually identifiable neck collars are resighted at a distance. Concerns have been raised about the effects of collar loss on parameter estimates, and the reliability of extrapolating from collared individuals to the population. Models previously proposed to account for collar loss do not allow survival or harvest parameters to depend on neck collar presence or absence. Also, few models have incorporated recent advances in mark-recapture theory that allow for multiple states or auxiliary encounters such as band recoveries. We propose a multistate model for tag loss in which the presence or absence of a collar is considered as a state variable. In this framework, demographic parameters are corrected for tag loss and questions related to collar effects on survival and recovery rates can be addressed. Encounters of individuals between closed sampling periods also can be incorporated in the analysis. We discuss data requirements for answering questions related to tag loss and sampling designs that lend themselves to this purpose. We illustrate the application of our model using a study of lesser snow geese (Chen caerulescens caerulescens).
LAGEOS geodetic analysis-SL7.1
NASA Technical Reports Server (NTRS)
Smith, D. E.; Kolenkiewicz, R.; Dunn, P. J.; Klosko, S. M.; Robbins, J. W.; Torrence, M. H.; Williamson, R. G.; Pavlis, E. C.; Douglas, N. B.; Fricke, S. K.
1991-01-01
Laser ranging measurements to the LAGEOS satellite from 1976 through 1989 are related via geodetic and orbital theories to a variety of geodetic and geodynamic parameters. The SL7.1 analyses of this data set are explained, including the estimation process for geodetic parameters such as Earth's gravitational constant (GM), those describing the Earth's elasticity properties (Love numbers), and temporally varying geodetic parameters such as Earth's orientation (polar motion and Delta UT1) and tracking site horizontal tectonic motions. Descriptions of the reference systems, tectonic models, and adopted geodetic constants are provided; these are the framework within which the SL7.1 solution takes place. Estimates of temporal variations in non-conservative force parameters are included in these SL7.1 analyses, as well as parameters describing the orbital states at monthly epochs. This information is useful in further refining models used to describe close-Earth satellite behavior. Estimates of intersite motions and individual tracking site motions computed through the network adjustment scheme are given. Tabulations of tracking site eccentricities, data summaries, estimated monthly orbital and force model parameters, polar motion, Earth rotation, and tracking station coordinate results are also provided.
NASA Astrophysics Data System (ADS)
Bloembergen, Pieter; Dong, Wei; Bai, Cheng-Yu; Wang, Tie-Jun
2011-12-01
In this paper, impurity parameters m_I and k_I have been calculated for a range of impurities I as detected in the eutectics Co-C and Pt-C, by means of the software package Thermo-Calc within the ternary phase spaces Co-C-I and Pt-C-I. The choice of the impurities is based upon a selection from the results of impurity analyses performed for a representative set of samples of each of the eutectics under study. The analyses in question are glow discharge mass spectrometry (GDMS) or inductively coupled plasma mass spectrometry (ICP-MS). Tables and plots of the impurity parameters against the atomic number Z_I of the impurities will be presented, as well as plots demonstrating the validity of van't Hoff's law, the cornerstone of this study, for both eutectics. For the eutectics in question, the uncertainty u(T_E - T_liq) in the correction T_E - T_liq will be derived, where T_E and T_liq refer to the transition temperature of the pure system and to the liquidus temperature in the limit of zero growth rate of the solid phase during solidification of the actual system, respectively. Uncertainty estimates based upon the current scheme SIE-OME, combining the sum of individual estimates (SIE) and the overall maximum estimate (OME), are compared with two alternative schemes proposed in this paper, designated as IE-IRE, combining individual estimates (IE) and individual random estimates (IRE), and the hybrid scheme SIE-IE-IRE, combining SIE, IE, and IRE.
Lebenberg, Jessica; Lalande, Alain; Clarysse, Patrick; Buvat, Irene; Casta, Christopher; Cochet, Alexandre; Constantinidès, Constantin; Cousty, Jean; de Cesare, Alain; Jehan-Besson, Stephanie; Lefort, Muriel; Najman, Laurent; Roullot, Elodie; Sarry, Laurent; Tilmant, Christophe; Frouin, Frederique; Garreau, Mireille
2015-01-01
This work aimed at combining different segmentation approaches to produce a robust and accurate segmentation result. Three to five segmentation results of the left ventricle were combined using the STAPLE algorithm and the reliability of the resulting segmentation was evaluated in comparison with the result of each individual segmentation method. This comparison was performed using a supervised approach based on a reference method. Then, we used an unsupervised statistical evaluation, the extended Regression Without Truth (eRWT) that ranks different methods according to their accuracy in estimating a specific biomarker in a population. The segmentation accuracy was evaluated by estimating six cardiac function parameters resulting from the left ventricle contour delineation using a public cardiac cine MRI database. Eight different segmentation methods, including three expert delineations and five automated methods, were considered, and sixteen combinations of the automated methods using STAPLE were investigated. The supervised and unsupervised evaluations demonstrated that in most cases, STAPLE results provided better estimates than individual automated segmentation methods. Overall, combining different automated segmentation methods improved the reliability of the segmentation result compared to that obtained using an individual method and could achieve the accuracy of an expert.
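The STAPLE combination at the heart of this work can be sketched for the binary case: an EM loop alternates between a soft consensus (E-step) and per-method sensitivity/specificity estimates (M-step). This is a minimal illustration on synthetic 1-D data, not the full algorithm used for the cardiac contours:

```python
import numpy as np

def staple_binary(d, n_iter=25, prior=0.5):
    """Minimal binary STAPLE (EM) sketch: d is an (N voxels x R raters)
    0/1 array. Estimates each rater's sensitivity p and specificity q
    while inferring a soft consensus probability w per voxel."""
    p = np.full(d.shape[1], 0.9)   # initial sensitivities
    q = np.full(d.shape[1], 0.9)   # initial specificities
    for _ in range(n_iter):
        # E-step: posterior probability that each voxel is truly foreground
        a = prior * np.prod(np.where(d == 1, p, 1.0 - p), axis=1)
        b = (1.0 - prior) * np.prod(np.where(d == 0, q, 1.0 - q), axis=1)
        w = a / (a + b)
        # M-step: re-estimate rater performance against the soft consensus
        p = (w[:, None] * d).sum(axis=0) / w.sum()
        q = ((1.0 - w)[:, None] * (1.0 - d)).sum(axis=0) / (1.0 - w).sum()
    return w, p, q

# three noisy synthetic raters over a 1-D "image" (assumed 90% accuracy)
rng = np.random.default_rng(7)
truth = (rng.random(300) < 0.4).astype(int)
raters = np.stack([np.where(rng.random(300) < 0.9, truth, 1 - truth)
                   for _ in range(3)], axis=1)
w, p, q = staple_binary(raters)
consensus = (w > 0.5).astype(int)
```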
Estimating Marginal Returns to Education. NBER Working Paper No. 16474
ERIC Educational Resources Information Center
Carneiro, Pedro; Heckman, James J.; Vytlacil, Edward J.
2010-01-01
This paper estimates the marginal returns to college for individuals induced to enroll in college by different marginal policy changes. The recent instrumental variables literature seeks to estimate this parameter, but in general it does so only under strong assumptions that are tested and found wanting. We show how to utilize economic theory and…
PVWatts Version 1 Technical Reference
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dobos, A. P.
2013-10-01
The NREL PVWatts(TM) calculator is a web application developed by the National Renewable Energy Laboratory (NREL) that estimates the electricity production of a grid-connected photovoltaic system based on a few simple inputs. PVWatts combines a number of sub-models to predict overall system performance, and makes several hidden assumptions about performance parameters. This technical reference details the individual sub-models, documents assumptions and hidden parameters, and explains the sequence of calculations that yield the final system performance estimation.
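As an illustration of the kind of sub-model and hidden parameter the reference documents, the sketch below shows a PVWatts-style DC power calculation. The linear temperature-derate form is standard for PVWatts, but the specific coefficient values here are assumed defaults rather than figures quoted from the report:

```python
def pvwatts_dc_power(poa_wm2, tcell_c, pdc0_w, gamma=-0.005, tref_c=25.0):
    """PVWatts-style DC power sub-model (illustrative): nameplate power
    scaled by plane-of-array irradiance and a linear temperature derate.
    gamma = -0.5 %/degC and tref = 25 degC are assumed typical defaults."""
    return pdc0_w * (poa_wm2 / 1000.0) * (1.0 + gamma * (tcell_c - tref_c))

# a 4 kW array at 800 W/m^2 plane-of-array irradiance, 45 degC cell temperature
p_dc = pvwatts_dc_power(800.0, 45.0, 4000.0)  # -> 2880.0 W
```

The full calculator chains several such sub-models (irradiance transposition, cell temperature, inverter efficiency) before reporting AC output.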
Pradhan, Sudeep; Song, Byungjeong; Lee, Jaeyeon; Chae, Jung-Woo; Kim, Kyung Im; Back, Hyun-Moon; Han, Nayoung; Kwon, Kwang-Il; Yun, Hwi-Yeol
2017-12-01
Exploratory preclinical, as well as clinical, trials may involve a small number of patients, making it difficult to calculate and analyze the pharmacokinetic (PK) parameters, especially if the PK parameters show very high inter-individual variability (IIV). In this study, the performance of the classical first-order conditional estimation with interaction (FOCE-I) method and expectation-maximization (EM)-based Markov chain Monte Carlo Bayesian (BAYES) estimation methods was compared for estimating the population parameters and their distribution from data sets having a low number of subjects. One hundred data sets were simulated with eight sampling points for each subject and with six different levels of IIV (5%, 10%, 20%, 30%, 50%, and 80%) in their PK parameter distribution. A stochastic simulation and estimation (SSE) study was performed to simultaneously simulate data sets and estimate the parameters using four different methods: FOCE-I only, BAYES(C) (FOCE-I and BAYES composite method), BAYES(F) (BAYES with all true initial parameters and fixed ω²), and BAYES only. Relative root mean squared error (rRMSE) and relative estimation error (REE) were used to analyze the differences between true and estimated values. A case study was performed with clinical data of theophylline available in the NONMEM distribution media. NONMEM software assisted by Pirana, PsN, and Xpose was used to estimate population PK parameters, and the R software was used to analyze and plot the results. The rRMSE and REE values of all parameter (fixed effect and random effect) estimates showed that all four methods performed equally well at the lower IIV levels, while the FOCE-I method performed better than the other EM-based methods at higher IIV levels (greater than 30%). In general, estimates of random-effect parameters showed significant bias and imprecision, irrespective of the estimation method used and the level of IIV.
Similar performance of the estimation methods was observed with theophylline dataset. The classical FOCE-I method appeared to estimate the PK parameters more reliably than the BAYES method when using a simple model and data containing only a few subjects. EM-based estimation methods can be considered for adapting to the specific needs of a modeling project at later steps of modeling.
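The two error metrics used in this comparison are straightforward; a minimal sketch:

```python
import math

def ree(est, true):
    # relative estimation error (%) for one estimate
    return 100.0 * (est - true) / true

def rrmse(ests, true):
    # relative root mean squared error (%) across replicate estimates
    return 100.0 * math.sqrt(sum(((e - true) / true) ** 2 for e in ests) / len(ests))

# three replicate estimates of a parameter whose true value is 10.0
r = rrmse([9.0, 11.0, 10.5], 10.0)  # about 8.66 %
```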
Pooseh, Shakoor; Bernhardt, Nadine; Guevara, Alvaro; Huys, Quentin J M; Smolka, Michael N
2018-02-01
Using simple mathematical models of choice behavior, we present a Bayesian adaptive algorithm to assess measures of impulsive and risky decision making. Practically, these measures are characterized by discounting rates and are used to classify individuals or population groups, to distinguish unhealthy behavior, and to predict developmental courses. However, a constant demand for improved tools to assess these constructs remains unanswered. The algorithm is based on trial-by-trial observations. At each step, a choice is made between immediate (certain) and delayed (risky) options. Then the current parameter estimates are updated by the likelihood of observing the choice, and the next offers are provided from the indifference point, so that they will acquire the most informative data based on the current parameter estimates. The procedure continues for a certain number of trials in order to reach a stable estimation. The algorithm is discussed in detail for the delay discounting case, and results from decision making under risk for gains, losses, and mixed prospects are also provided. Simulated experiments using prescribed parameter values were performed to justify the algorithm in terms of the reproducibility of its parameters for individual assessments, and to test the reliability of the estimation procedure in a group-level analysis. The algorithm was implemented as an experimental battery to measure temporal and probability discounting rates together with loss aversion, and was tested on a healthy participant sample.
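A grid-posterior sketch of the trial-by-trial update described above, for the delay-discounting case. Hyperbolic discounting is assumed, and the offer amounts, delay, and softmax sensitivity are illustrative choices, not the battery's actual settings:

```python
import numpy as np

# candidate hyperbolic discounting rates and a flat prior over the grid
ks = np.logspace(-3, 0, 200)
post = np.full(ks.size, 1.0 / ks.size)
beta = 5.0  # assumed choice sensitivity (softmax)

def p_choose_delayed(k, amount_now, amount_later, delay):
    v_later = amount_later / (1.0 + k * delay)   # hyperbolic discounted value
    return 1.0 / (1.0 + np.exp(-beta * (v_later - amount_now)))

rng = np.random.default_rng(1)
k_true, amount_later, delay = 0.05, 100.0, 30.0
for _ in range(60):
    k_hat = float(np.sum(ks * post))                   # current point estimate
    amount_now = amount_later / (1.0 + k_hat * delay)  # offer near indifference
    p = p_choose_delayed(k_true, amount_now, amount_later, delay)
    chose_delayed = rng.random() < p                   # simulated participant
    like = p_choose_delayed(ks, amount_now, amount_later, delay)
    post *= like if chose_delayed else 1.0 - like      # Bayes update
    post /= post.sum()
k_hat = float(np.sum(ks * post))
```

Placing each offer at the indifference point implied by the current estimate is what makes every trial maximally informative, mirroring the adaptive design described in the abstract.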
Lord, Dominique; Park, Peter Young-Jin
2008-07-01
Traditionally, transportation safety analysts have used the empirical Bayes (EB) method to improve the estimate of the long-term mean of individual sites; to correct for the regression-to-the-mean (RTM) bias in before-after studies; and to identify hotspots or high-risk locations. The EB method combines two different sources of information: (1) the expected number of crashes estimated via crash prediction models, and (2) the observed number of crashes at individual sites. Crash prediction models have traditionally been estimated using a negative binomial (NB) (or Poisson-gamma) modeling framework due to the over-dispersion commonly found in crash data. A weight factor is used to assign the relative influence of each source of information on the EB estimate. This factor is estimated using the mean and variance functions of the NB model. Given recent findings that the dispersion parameter can depend upon the covariates of NB models, especially for traffic flow-only models, and can vary across time periods, there is a need to determine how these models may affect EB estimates. The objectives of this study are to examine how commonly used functional forms as well as fixed and time-varying dispersion parameters affect the EB estimates. To accomplish the study objectives, several traffic flow-only crash prediction models were estimated using a sample of rural three-legged intersections located in California. Two types of aggregated and time-specific models were produced: (1) the traditional NB model with a fixed dispersion parameter and (2) the generalized NB model (GNB) with a time-varying dispersion parameter, which is also dependent upon the covariates of the model. Several statistical methods were used to compare the fitting performance of the various functional forms.
The results of the study show that the selection of the functional form of NB models has an important effect on EB estimates in terms of estimated values, weight factors, and dispersion parameters. Time-specific models with a varying dispersion parameter provide better statistical performance in terms of goodness-of-fit (GOF) than aggregated multi-year models. Furthermore, the identification of hazardous sites using the EB method can be significantly affected when a GNB model with a time-varying dispersion parameter is used. Thus, erroneously selecting a functional form may lead to selecting the wrong sites for treatment. The study concludes that transportation safety analysts should not automatically use an existing functional form for modeling motor vehicle crashes without conducting rigorous analyses to estimate the most appropriate functional form linking crashes with traffic flow.
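The EB combination itself is compact. In the commonly used negative binomial form, the weight depends on the model mean and the (inverse) dispersion parameter, which is exactly why a time-varying dispersion changes the estimates; note the study's GNB variant lets this parameter vary with covariates and time, whereas the sketch below treats it as fixed:

```python
def eb_estimate(mu_model, x_observed, phi):
    """Empirical Bayes long-term mean for one site under a negative
    binomial crash model: weight w = phi / (phi + mu), EB = w*mu +
    (1 - w)*x, where phi is the inverse dispersion parameter
    (Var = mu + mu^2 / phi)."""
    w = phi / (phi + mu_model)
    return w * mu_model + (1.0 - w) * x_observed, w

# model predicts 2 crashes/yr, 5 observed, inverse dispersion phi = 3 (assumed)
eb, w = eb_estimate(2.0, 5.0, 3.0)  # w = 0.6, EB = 0.6*2 + 0.4*5 = 3.2
```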
Estimating thermal performance curves from repeated field observations
Childress, Evan; Letcher, Benjamin H.
2017-01-01
Estimating thermal performance of organisms is critical for understanding population distributions and dynamics and predicting responses to climate change. Typically, performance curves are estimated using laboratory studies to isolate temperature effects, but other abiotic and biotic factors influence temperature-performance relationships in nature reducing these models' predictive ability. We present a model for estimating thermal performance curves from repeated field observations that includes environmental and individual variation. We fit the model in a Bayesian framework using MCMC sampling, which allowed for estimation of unobserved latent growth while propagating uncertainty. Fitting the model to simulated data varying in sampling design and parameter values demonstrated that the parameter estimates were accurate, precise, and unbiased. Fitting the model to individual growth data from wild trout revealed high out-of-sample predictive ability relative to laboratory-derived models, which produced more biased predictions for field performance. The field-based estimates of thermal maxima were lower than those based on laboratory studies. Under warming temperature scenarios, field-derived performance models predicted stronger declines in body size than laboratory-derived models, suggesting that laboratory-based models may underestimate climate change effects. The presented model estimates true, realized field performance, avoiding assumptions required for applying laboratory-based models to field performance, which should improve estimates of performance under climate change and advance thermal ecology.
Néant, Nadège; Gattacceca, Florence; Lê, Minh Patrick; Yazdanpanah, Yazdan; Dhiver, Catherine; Bregigeon, Sylvie; Mokhtari, Saadia; Peytavin, Gilles; Tamalet, Catherine; Descamps, Diane; Lacarelle, Bruno; Solas, Caroline
2018-04-01
Rilpivirine, prescribed for the treatment of HIV infection, presents an important inter-individual pharmacokinetic variability. We aimed to determine population pharmacokinetic parameters of rilpivirine in adult HIV-infected patients and quantify their inter-individual variability. We conducted a multicenter, retrospective, and observational study in patients treated with the once-daily rilpivirine/tenofovir disoproxil fumarate/emtricitabine regimen. As part of routine therapeutic drug monitoring, rilpivirine concentrations were measured by UPLC-MS/MS. Population pharmacokinetic analysis was performed using NONMEM software. Once the compartmental and random effects models were selected, covariates were tested to explain the inter-individual variability in pharmacokinetic parameters. The final model qualification was performed by both statistical and graphical methods. We included 379 patients, resulting in the analysis of 779 rilpivirine plasma concentrations. Of the observed trough individual plasma concentrations, 24.4% were below the 50 ng/ml minimal effective concentration. A one-compartment model with first-order absorption best described the data. The estimated fixed effects for apparent plasma clearance and distribution volume were 9 L/h and 321 L, respectively, resulting in a half-life of 25.2 h. The common inter-individual variability for both parameters was 34.1% at both the first and the second occasions. The inter-individual variability of clearance was 30.3%. Our results showed a terminal half-life lower than previously reported and a high proportion of patients with suboptimal rilpivirine concentrations, which highlights the value of therapeutic drug monitoring in clinical practice. The population analysis performed with data from "real-life" conditions resulted in reliable post hoc estimates of pharmacokinetic parameters, suitable for individualization of dosing regimens.
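The half-life reported in this abstract can be cross-checked against the standard one-compartment relationship t½ = ln(2) · (V/F) / (CL/F). A minimal sketch using the rounded population estimates quoted above (9 L/h and 321 L); the small gap to the reported 25.2 h is presumably due to rounding in the published estimates:

```python
import math

def half_life(cl_apparent: float, v_apparent: float) -> float:
    """Terminal half-life (h) for a one-compartment model:
    t1/2 = ln(2) * (V/F) / (CL/F)."""
    return math.log(2) * v_apparent / cl_apparent

# Rounded population estimates from the abstract: CL/F = 9 L/h, V/F = 321 L
t_half = half_life(9.0, 321.0)  # ~24.7 h, close to the reported 25.2 h
```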
Avanasi, Raghavendhran; Shin, Hyeong-Moo; Vieira, Veronica M; Bartell, Scott M
2016-04-01
We recently utilized a suite of environmental fate and transport models and an integrated exposure and pharmacokinetic model to estimate individual perfluorooctanoate (PFOA) serum concentrations, and also assessed the association of those concentrations with preeclampsia for participants in the C8 Health Project (a cross-sectional study of over 69,000 people who were environmentally exposed to PFOA near a major U.S. fluoropolymer production facility located in West Virginia). However, the exposure estimates from this integrated model relied on default values for key independent exposure parameters including water ingestion rates, the serum PFOA half-life, and the volume of distribution for PFOA. The aim of the present study is to assess the impact of inter-individual variability and epistemic uncertainty in these parameters on the exposure estimates and subsequently, the epidemiological association between PFOA exposure and preeclampsia. We used Monte Carlo simulation to propagate inter-individual variability/epistemic uncertainty in the exposure assessment and reanalyzed the epidemiological association. Inter-individual variability in these parameters mildly impacted the serum PFOA concentration predictions (the lowest mean rank correlation between the estimated serum concentrations in our study and the original predicted serum concentrations was 0.95) and there was a negligible impact on the epidemiological association with preeclampsia (no change in the mean adjusted odds ratio (AOR) and the contribution of exposure uncertainty to the total uncertainty including sampling variability was 7%). However, when epistemic uncertainty was added along with the inter-individual variability, serum PFOA concentration predictions and their association with preeclampsia were moderately impacted (the mean AOR of preeclampsia occurrence was reduced from 1.12 to 1.09, and the contribution of exposure uncertainty to the total uncertainty was increased up to 33%). 
In conclusion, our study shows that the change in exposure rank among the study participants due to variability and epistemic uncertainty in the independent exposure parameters was large enough to cause a 25% bias towards the null. This suggests that the true AOR of the association between PFOA and preeclampsia in this population might be higher than the originally reported AOR and has more uncertainty than indicated by the originally reported confidence interval. Copyright © 2016 Elsevier Inc. All rights reserved.
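The kind of uncertainty propagation described above can be illustrated with a generic Monte Carlo sketch. This is not the authors' integrated exposure model: the steady-state relationship C_ss = dose rate / CL with CL = ln(2)·Vd/t½, the normal distributions, and the parameter values (a ~2.3-year half-life and ~0.17 L/kg volume of distribution are commonly cited PFOA defaults) are all illustrative assumptions:

```python
import math
import random
import statistics

def simulate_css(n, dose_rate, t_half_mean, t_half_sd, vd_mean, vd_sd, seed=1):
    """Monte Carlo sketch: propagate inter-individual variability in
    half-life and volume of distribution into a steady-state serum
    concentration, C_ss = dose_rate / CL, with CL = ln(2) * Vd / t_half."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        # Truncate at small positive values to keep CL physically meaningful
        t_half = max(rng.gauss(t_half_mean, t_half_sd), 0.1)
        vd = max(rng.gauss(vd_mean, vd_sd), 0.01)
        cl = math.log(2) * vd / t_half
        out.append(dose_rate / cl)
    return out

css = simulate_css(5000, dose_rate=1.0, t_half_mean=2.3, t_half_sd=0.5,
                   vd_mean=0.17, vd_sd=0.03)
mean_css = statistics.mean(css)
```

The spread of `css` across simulated individuals is the analogue of the exposure uncertainty the study quantifies; ranking individuals under each draw shows how rank correlations degrade as parameter uncertainty grows.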
Age and growth parameters of shark-like batoids.
White, J; Simpfendorfer, C A; Tobin, A J; Heupel, M R
2014-05-01
Estimates of life-history parameters were made for shark-like batoids of conservation concern, Rhynchobatus spp. (Rhynchobatus australiae, Rhynchobatus laevis and Rhynchobatus palpebratus) and Glaucostegus typus, using vertebral ageing. The sigmoid growth functions, Gompertz and logistic, best described the growth of Rhynchobatus spp. and G. typus, providing the best statistical fit and most biologically appropriate parameters. The two-parameter logistic was the preferred model for Rhynchobatus spp., with growth parameter estimates (both sexes combined) of L∞ = 2045 mm stretch total length (LST) and k = 0·41 year⁻¹. The same model was also preferred for G. typus, with growth parameter estimates (both sexes combined) of L∞ = 2770 mm LST and k = 0·30 year⁻¹. Annual growth-band deposition could not be excluded in Rhynchobatus spp. using mark-recaptured individuals. Although morphologically similar, G. typus and Rhynchobatus spp. have differing life histories, with G. typus being longer lived, slower growing and attaining a larger maximum size. © 2014 The Fisheries Society of the British Isles.
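The logistic growth curve reported above can be sketched numerically. Note the hedge: the paper's two-parameter logistic likely fixes one quantity (e.g. length at birth); the common three-parameter form with a hypothetical inflection age t0 is used here purely for illustration with the combined-sexes estimates from the abstract:

```python
import math

def logistic_length(age, l_inf, k, t0=0.0):
    """Logistic growth curve: L(t) = L_inf / (1 + exp(-k * (t - t0))).
    t0 (inflection age) is a hypothetical extra parameter, not reported
    in the abstract; the paper's two-parameter form differs."""
    return l_inf / (1.0 + math.exp(-k * (age - t0)))

# Rhynchobatus spp., both sexes combined: L_inf = 2045 mm LST, k = 0.41 / year
lengths = [logistic_length(a, 2045.0, 0.41) for a in range(0, 21)]
```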
Tufto, Jarle; Lande, Russell; Ringsby, Thor-Harald; Engen, Steinar; Saether, Bernt-Erik; Walla, Thomas R; DeVries, Philip J
2012-07-01
1. We develop a Bayesian method for analysing mark-recapture data in continuous habitat using a model in which individuals' movement paths are Brownian motions, life spans are exponentially distributed and capture events occur at given instants in time if individuals are within a certain attractive distance of the traps. 2. The joint posterior distribution of the dispersal rate, longevity, trap attraction distances and a number of latent variables representing the unobserved movement paths and time of death of all individuals is computed using Gibbs sampling. 3. An estimate of absolute local population density is obtained simply by dividing the Poisson counts of individuals captured at given points in time by the estimated total attraction area of all traps. Our approach for estimating population density in continuous habitat avoids the need to define an arbitrary effective trapping area that characterized previous mark-recapture methods in continuous habitat. 4. We applied our method to estimate spatial demography parameters in nine species of neotropical butterflies. Path analysis of interspecific variation in demographic parameters and mean wing length revealed a simple network of strong causation. Larger wing length increases dispersal rate, which in turn increases trap attraction distance. However, higher dispersal rate also decreases longevity, thus explaining the surprising observation of a negative correlation between wing length and longevity. © 2012 The Authors. Journal of Animal Ecology © 2012 British Ecological Society.
Kim, Kiyeon; Omori, Ryosuke; Ito, Kimihito
2017-12-01
The estimation of the basic reproduction number is essential to understand epidemic dynamics, and time series data of infected individuals are usually used for the estimation. However, such data are not always available. Methods to estimate the basic reproduction number using genealogies constructed from nucleotide sequences of pathogens have been proposed. Here, we propose a new method to estimate epidemiological parameters of outbreaks using the time series change of Tajima's D statistic on the nucleotide sequences of pathogens. To relate the time evolution of Tajima's D to the number of infected individuals, we constructed a parsimonious mathematical model describing both the transmission process of pathogens among hosts and the evolutionary process of the pathogens. As a case study, we applied this method to field data of nucleotide sequences of pandemic influenza A (H1N1) 2009 viruses collected in Argentina. The Tajima's D-based method estimated the basic reproduction number to be 1.55, with 95% highest posterior density (HPD) between 1.31 and 2.05, and the date of the epidemic peak to be 10th July, with 95% HPD between 22nd June and 9th August. The estimated basic reproduction number was consistent with estimates from a birth-death skyline plot and from the time series of the number of infected individuals. These results suggested that Tajima's D statistic on nucleotide sequences of pathogens could be useful for estimating epidemiological parameters of outbreaks. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
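Tajima's D, the statistic underlying the method above, can be computed from aligned sequences with the standard constants. This is the textbook estimator only, not the authors' transmission-evolution model:

```python
import math
from itertools import combinations

def tajimas_d(seqs):
    """Tajima's D from aligned, equal-length sequences:
    D = (pi - S/a1) / sqrt(e1*S + e2*S*(S-1)), with the usual constants."""
    n = len(seqs)
    # pi: mean number of pairwise nucleotide differences
    pairs = list(combinations(seqs, 2))
    pi = sum(sum(a != b for a, b in zip(s1, s2)) for s1, s2 in pairs) / len(pairs)
    # S: number of segregating (polymorphic) sites
    S = sum(len(set(col)) > 1 for col in zip(*seqs))
    a1 = sum(1.0 / i for i in range(1, n))
    a2 = sum(1.0 / i ** 2 for i in range(1, n))
    b1 = (n + 1) / (3.0 * (n - 1))
    b2 = 2.0 * (n ** 2 + n + 3) / (9.0 * n * (n - 1))
    c1 = b1 - 1.0 / a1
    c2 = b2 - (n + 2) / (a1 * n) + a2 / a1 ** 2
    e1 = c1 / a1
    e2 = c2 / (a1 ** 2 + a2)
    var = e1 * S + e2 * S * (S - 1)
    return (pi - S / a1) / math.sqrt(var)

# Tiny illustrative alignment (4 sequences, 3 sites)
d = tajimas_d(["AAA", "AAT", "ATT", "TTT"])
```

In the paper's setting, D is tracked over sampling times; here the single value simply demonstrates the calculation.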
Modeling individual effects in the Cormack-Jolly-Seber Model: A state-space formulation
Royle, J. Andrew
2008-01-01
In population and evolutionary biology, there exists considerable interest in individual heterogeneity in parameters of demographic models for open populations. However, flexible and practical solutions to the development of such models have proven to be elusive. In this article, I provide a state-space formulation of open population capture-recapture models with individual effects. The state-space formulation provides a generic and flexible framework for modeling and inference in models with individual effects, and it yields a practical means of estimation in these complex problems via contemporary methods of Markov chain Monte Carlo. A straightforward implementation can be achieved in the software package WinBUGS. I provide an analysis of a simple model with constant detection and survival probability parameters. A second example is based on data from a 7-year study of European dippers, in which a model with year and individual effects is fitted.
Multiparameter Estimation in Networked Quantum Sensors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Proctor, Timothy J.; Knott, Paul A.; Dunningham, Jacob A.
2018-02-21
We introduce a general model for a network of quantum sensors, and we use this model to consider the question: When can entanglement between the sensors, and/or global measurements, enhance the precision with which the network can measure a set of unknown parameters? We rigorously answer this question by presenting precise theorems proving that for a broad class of problems there is, at most, a very limited intrinsic advantage to using entangled states or global measurements. Moreover, for many estimation problems separable states and local measurements are optimal, and can achieve the ultimate quantum limit on the estimation uncertainty. This immediately implies that there are broad conditions under which simultaneous estimation of multiple parameters cannot outperform individual, independent estimations. Our results apply to any situation in which spatially localized sensors are unitarily encoded with independent parameters, such as when estimating multiple linear or non-linear optical phase shifts in quantum imaging, or when mapping out the spatial profile of an unknown magnetic field. We conclude by showing that entangling the sensors can enhance the estimation precision when the parameters of interest are global properties of the entire network.
Wildhaber, Mark L.; Albers, Janice; Green, Nicholas; Moran, Edward H.
2017-01-01
We develop a fully-stochasticized, age-structured population model suitable for population viability analysis (PVA) of fish and demonstrate its use with the endangered pallid sturgeon (Scaphirhynchus albus) of the Lower Missouri River as an example. The model incorporates three levels of variance: parameter variance (uncertainty about the value of a parameter itself) applied at the iteration level, temporal variance (uncertainty caused by random environmental fluctuations over time) applied at the time-step level, and implicit individual variance (uncertainty caused by differences between individuals) applied within the time-step level. We found that population dynamics were most sensitive to survival rates, particularly age-2+ survival, and to fecundity-at-length. The inclusion of variance (unpartitioned or partitioned), stocking, or both generally decreased the influence of individual parameters on population growth rate. The partitioning of variance into parameter and temporal components had a strong influence on the importance of individual parameters, uncertainty of model predictions, and quasiextinction risk (i.e., pallid sturgeon population size falling below 50 age-1+ individuals). Our findings show that appropriately applying variance in PVA is important when evaluating the relative importance of parameters, and reinforce the need for better and more precise estimates of crucial life-history parameters for pallid sturgeon.
Using a multinomial tree model for detecting mixtures in perceptual detection
Chechile, Richard A.
2014-01-01
In the area of memory research there have been two rival approaches for memory measurement—signal detection theory (SDT) and multinomial processing trees (MPT). Both approaches provide measures for the quality of the memory representation, and both approaches provide for corrections for response bias. In recent years there has been a strong case advanced for the MPT approach because of the finding of stochastic mixtures on both target-present and target-absent tests. In this paper a case is made that perceptual detection, like memory recognition, involves a mixture of processes that are readily represented as a MPT model. The Chechile (2004) 6P memory measurement model is modified in order to apply to the case of perceptual detection. This new MPT model is called the Perceptual Detection (PD) model. The properties of the PD model are developed, and the model is applied to some existing data of a radiologist examining CT scans. The PD model brings out novel features that were absent from a standard SDT analysis. Also the topic of optimal parameter estimation on an individual-observer basis is explored with Monte Carlo simulations. These simulations reveal that the mean of the Bayesian posterior distribution is a more accurate estimator than the corresponding maximum likelihood estimator (MLE). Monte Carlo simulations also indicate that model estimates based on only the data from an individual observer can be improved upon (in the sense of being more accurate) by an adjustment that takes into account the parameter estimate based on the data pooled across all the observers. The adjustment of the estimate for an individual is discussed as an analogous statistical effect to the improvement over the individual MLE demonstrated by the James–Stein shrinkage estimator in the case of the multiple-group normal model. PMID:25018741
HDDM: Hierarchical Bayesian estimation of the Drift-Diffusion Model in Python.
Wiecki, Thomas V; Sofer, Imri; Frank, Michael J
2013-01-01
The diffusion model is a commonly used tool to infer latent psychological processes underlying decision-making, and to link them to neural mechanisms based on response times. Although efficient open source software has been made available to quantitatively fit the model to data, current estimation methods require an abundance of response time measurements to recover meaningful parameters, and only provide point estimates of each parameter. In contrast, hierarchical Bayesian parameter estimation methods are useful for enhancing statistical power, allowing for simultaneous estimation of individual subject parameters and the group distribution that they are drawn from, while also providing measures of uncertainty in these parameters in the posterior distribution. Here, we present a novel Python-based toolbox called HDDM (hierarchical drift diffusion model), which allows fast and flexible estimation of the drift-diffusion model and the related linear ballistic accumulator model. HDDM requires fewer data per subject/condition than non-hierarchical methods, allows for full Bayesian data analysis, and can handle outliers in the data. Finally, HDDM supports the estimation of how trial-by-trial measurements (e.g., fMRI) influence decision-making parameters. This paper will first describe the theoretical background of the drift diffusion model and Bayesian inference. We then illustrate usage of the toolbox on a real-world data set from our lab. Finally, parameter recovery studies show that HDDM beats alternative fitting methods like the χ²-quantile method as well as maximum likelihood estimation. The software and documentation can be downloaded at: http://ski.clps.brown.edu/hddm_docs/
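The generative process the drift-diffusion model assumes can be sketched with a simple Euler-Maruyama simulation. HDDM itself fits the model hierarchically with proper likelihoods; this sketch only produces synthetic response times and choices, and the parameter values are arbitrary illustrations:

```python
import random

def simulate_ddm_trial(drift, threshold, start=0.0, noise=1.0,
                       dt=0.001, t_max=10.0, rng=None):
    """Euler-Maruyama simulation of one drift-diffusion trial.
    Evidence x drifts at rate `drift` with Gaussian noise until it
    crosses +threshold (upper choice) or -threshold (lower choice).
    Returns (response_time, choice); choice is None on timeout."""
    rng = rng or random.Random()
    x, t = start, 0.0
    step_sd = noise * dt ** 0.5
    while t < t_max:
        x += drift * dt + rng.gauss(0.0, step_sd)
        t += dt
        if x >= threshold:
            return t, 1
        if x <= -threshold:
            return t, 0
    return t, None

rng = random.Random(42)
trials = [simulate_ddm_trial(drift=1.0, threshold=1.0, rng=rng)
          for _ in range(200)]
```

With positive drift, most simulated trials terminate at the upper boundary, reproducing the basic speed-accuracy structure the model is used to infer.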
Analysis of life tables with grouping and withdrawals.
Lindley, D V
1979-09-01
A number of individuals is observed at the beginning of a period. At the end of the period the number surviving, the number who have died and the number who have withdrawn are noted. From these three numbers it is required to estimate the death rate for the period. All relevant quantities are supposed independent and identically distributed for the individuals. The likelihood is calculated and found to depend on two parameters, other than the death rate, and to be unidentifiable, so that no consistent estimators exist. For large numbers, the posterior distribution of the death rate is approximated by a normal distribution whose mean is the root of a quadratic equation and whose variance is the sum of two terms; the first is proportional to the reciprocal of the number of individuals, as usually happens with a consistent estimator; the second does not tend to zero and depends on initial opinions about one of the nuisance parameters. The paper is a simple exercise in the routine use of coherent, Bayesian methodology. Numerical calculations illustrate the results.
van der Velde-Koerts, Trijntje; Breysse, Nicolas; Pattingre, Lauriane; Hamey, Paul Y; Lutze, Jason; Mahieu, Karin; Margerison, Sam; Ossendorp, Bernadette C; Reich, Hermine; Rietveld, Anton; Sarda, Xavier; Vial, Gaelle; Sieke, Christian
2018-06-03
In 2015 a scientific workshop was held in Geneva, where updating the International Estimate of Short-Term Intake (IESTI) equations was suggested. This paper studies the effects of the proposed changes in residue inputs, large portions, variability factors and unit weights on the overall short-term dietary exposure estimate. Depending on the IESTI case equation, a median increase in estimated overall exposure by a factor of 1.0-6.8 was observed when the current IESTI equations are replaced by the proposed IESTI equations. The highest increase in the estimated exposure arises from the replacement of the median residue (STMR) by the maximum residue limit (MRL) for bulked and blended commodities (case 3 equations). The change in the large-portion parameter does not have a significant impact on the estimated exposure. The use of large portions derived from the general population covering all age groups and bodyweights should be avoided when large portions are not expressed on an individual bodyweight basis. Replacement of the highest residue (HR) by the MRL and removal of the unit weight each increase the estimated exposure for small-, medium- and large-sized commodities (case 1, case 2a or case 2b equations). However, within the EU framework lowering of the variability factor from 7 or 5 to 3 counterbalances the effect of changes in other parameters, resulting in an estimated overall exposure change for the EU situation of a factor of 0.87-1.7 and 0.6-1.4 for IESTI case 2a and case 2b equations, respectively.
Real-time individualization of the unified model of performance.
Liu, Jianbo; Ramakrishnan, Sridhar; Laxminarayan, Srinivas; Balkin, Thomas J; Reifman, Jaques
2017-12-01
Existing mathematical models for predicting neurobehavioural performance are not suited for mobile computing platforms because they cannot adapt model parameters automatically in real time to reflect individual differences in the effects of sleep loss. We used an extended Kalman filter to develop a computationally efficient algorithm that continually adapts the parameters of the recently developed Unified Model of Performance (UMP) to an individual. The algorithm accomplishes this in real time as new performance data for the individual become available. We assessed the algorithm's performance by simulating real-time model individualization for 18 subjects subjected to 64 h of total sleep deprivation (TSD) and 7 days of chronic sleep restriction (CSR) with 3 h of time in bed per night, using psychomotor vigilance task (PVT) data collected every 2 h during wakefulness. This UMP individualization process produced parameter estimates that progressively approached the solution produced by a post-hoc fitting of model parameters using all data. The minimum number of PVT measurements needed to individualize the model parameters depended upon the type of sleep-loss challenge, with ~30 required for TSD and ~70 for CSR. However, model individualization depended upon the overall duration of data collection, yielding increasingly accurate model parameters with greater number of days. Interestingly, reducing the PVT sampling frequency by a factor of two did not notably hamper model individualization. The proposed algorithm facilitates real-time learning of an individual's trait-like responses to sleep loss and enables the development of individualized performance prediction models for use in a mobile computing platform. © 2017 European Sleep Research Society.
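The idea of adapting model parameters in real time as measurements arrive can be illustrated in its simplest, linear scalar form. The paper's extended Kalman filter for the nonlinear UMP is considerably more elaborate; the measurement model, regressor, and noise values below are hypothetical:

```python
import random

def kalman_update(p_est, p_var, y, f, meas_var):
    """Scalar Kalman update for a static parameter p in the
    measurement model y = f * p + noise."""
    y_pred = f * p_est
    s = f * f * p_var + meas_var      # innovation variance
    k = f * p_var / s                 # Kalman gain
    p_est = p_est + k * (y - y_pred)  # correct estimate with innovation
    p_var = (1.0 - k * f) * p_var     # shrink uncertainty
    return p_est, p_var

rng = random.Random(0)
true_p = 2.5                          # individual's (unknown) trait parameter
p_est, p_var = 0.0, 10.0              # diffuse group-level prior
for t in range(1, 101):
    f = 1.0 + 0.01 * t                # hypothetical regressor, e.g. hours awake
    y = true_p * f + rng.gauss(0.0, 0.5)
    p_est, p_var = kalman_update(p_est, p_var, y, f, meas_var=0.25)
```

As in the paper, the running estimate converges toward the value a post-hoc fit over all data would produce, with `p_var` tracking how individualized the model has become.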
Model averaging and muddled multimodel inferences.
Cade, Brian S
2015-09-01
Three flawed practices associated with model averaging coefficients for predictor variables in regression models commonly occur when making multimodel inferences in analyses of ecological data. Model-averaged regression coefficients based on Akaike information criterion (AIC) weights have been recommended for addressing model uncertainty but they are not valid, interpretable estimates of partial effects for individual predictors when there is multicollinearity among the predictor variables. Multicollinearity implies that the scaling of units in the denominators of the regression coefficients may change across models such that neither the parameters nor their estimates have common scales, therefore averaging them makes no sense. The associated sums of AIC model weights recommended to assess relative importance of individual predictors are really a measure of relative importance of models, with little information about contributions by individual predictors compared to other measures of relative importance based on effects size or variance reduction. Sometimes the model-averaged regression coefficients for predictor variables are incorrectly used to make model-averaged predictions of the response variable when the models are not linear in the parameters. I demonstrate the issues with the first two practices using the college grade point average example extensively analyzed by Burnham and Anderson. I show how partial standard deviations of the predictor variables can be used to detect changing scales of their estimates with multicollinearity. Standardizing estimates based on partial standard deviations for their variables can be used to make the scaling of the estimates commensurate across models, a necessary but not sufficient condition for model averaging of the estimates to be sensible. A unimodal distribution of estimates and valid interpretation of individual parameters are additional requisite conditions. 
The standardized estimates or equivalently the t statistics on unstandardized estimates also can be used to provide more informative measures of relative importance than sums of AIC weights. Finally, I illustrate how seriously compromised statistical interpretations and predictions can be for all three of these flawed practices by critiquing their use in a recent species distribution modeling technique developed for predicting Greater Sage-Grouse (Centrocercus urophasianus) distribution in Colorado, USA. These model averaging issues are common in other ecological literature and ought to be discontinued if we are to make effective scientific contributions to ecological knowledge and conservation of natural resources.
Liu, Xiaoming; Fu, Yun-Xin; Maxwell, Taylor J.; Boerwinkle, Eric
2010-01-01
It is known that sequencing error can bias estimation of evolutionary or population genetic parameters. This problem is more prominent in deep resequencing studies because of their large sample size n, and a higher probability of error at each nucleotide site. We propose a new method based on the composite likelihood of the observed SNP configurations to infer population mutation rate θ = 4Neμ, population exponential growth rate R, and error rate ɛ, simultaneously. Using simulation, we show the combined effects of the parameters, θ, n, ɛ, and R on the accuracy of parameter estimation. We compared our maximum composite likelihood estimator (MCLE) of θ with other θ estimators that take into account the error. The results show the MCLE performs well when the sample size is large or the error rate is high. Using parametric bootstrap, composite likelihood can also be used as a statistic for testing the model goodness-of-fit of the observed DNA sequences. The MCLE method is applied to sequence data on the ANGPTL4 gene in 1832 African American and 1045 European American individuals. PMID:19952140
Automatic tree parameter extraction by a Mobile LiDAR System in an urban context.
Herrero-Huerta, Mónica; Lindenbergh, Roderik; Rodríguez-Gonzálvez, Pablo
2018-01-01
In an urban context, tree data are used in city planning, in locating hazardous trees and in environmental monitoring. This study focuses on developing an innovative methodology to automatically estimate the most relevant individual structural parameters of urban trees sampled by a Mobile LiDAR System at city level. These parameters include the Diameter at Breast Height (DBH), which was estimated by circle fitting of the points belonging to different height bins using RANSAC. In the case of non-circular trees, DBH is calculated by the maximum distance between extreme points. Tree sizes were extracted through a connectivity analysis. Crown Base Height, defined as the length until the bottom of the live crown, was calculated by voxelization techniques. For estimating Canopy Volume, procedures of mesh generation and α-shape methods were implemented. Also, tree location coordinates were obtained by means of Principal Component Analysis. The workflow has been validated on 29 trees of different species sampling a stretch of road 750 m long in Delft (The Netherlands) and tested on a larger dataset containing 58 individual trees. The validation was done against field measurements. The DBH estimates had a correlation (R²) value of 0.92 for the 20 cm height bin, which provided the best results. Moreover, the influence of the number of points used for DBH estimation, considering different height bins, was investigated. The assessment of the other inventory parameters yielded correlation coefficients higher than 0.91. The quality of the results confirms the feasibility of the proposed methodology, providing scalability to a comprehensive analysis of urban trees.
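The circle-fitting step used for DBH estimation can be illustrated with an algebraic (Kasa) least-squares fit on a single height bin of points; the paper wraps such a fit in RANSAC to reject outliers, which is omitted here:

```python
import math

def fit_circle(points):
    """Algebraic (Kasa) least-squares circle fit.
    Solves x^2 + y^2 + D*x + E*y + F = 0 for (D, E, F) via the 3x3
    normal equations, then returns (cx, cy, r)."""
    # Build normal equations M w = v for w = (D, E, F)
    M = [[0.0] * 3 for _ in range(3)]
    v = [0.0] * 3
    for x, y in points:
        row = (x, y, 1.0)
        rhs = -(x * x + y * y)
        for i in range(3):
            v[i] += row[i] * rhs
            for j in range(3):
                M[i][j] += row[i] * row[j]
    # Gaussian elimination with partial pivoting
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        v[col], v[piv] = v[piv], v[col]
        for r in range(col + 1, 3):
            fac = M[r][col] / M[col][col]
            for c in range(col, 3):
                M[r][c] -= fac * M[col][c]
            v[r] -= fac * v[col]
    w = [0.0] * 3
    for r in (2, 1, 0):
        w[r] = (v[r] - sum(M[r][c] * w[c] for c in range(r + 1, 3))) / M[r][r]
    D, E, _F = w
    cx, cy = -D / 2.0, -E / 2.0
    return cx, cy, math.sqrt(cx * cx + cy * cy - _F)

# Synthetic trunk slice: points on a circle of radius 0.15 m -> DBH ~ 0.30 m
pts = [(1.0 + 0.15 * math.cos(a / 10.0), 2.0 + 0.15 * math.sin(a / 10.0))
       for a in range(60)]
cx, cy, r = fit_circle(pts)
```

DBH is then simply 2·r for the best-fitting bin; in practice the RANSAC loop would repeat this fit on random point subsets and keep the consensus circle.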
Astrocytic tracer dynamics estimated from [1-¹¹C]-acetate PET measurements.
Arnold, Andrea; Calvetti, Daniela; Gjedde, Albert; Iversen, Peter; Somersalo, Erkki
2015-12-01
We address the problem of estimating the unknown parameters of a model of tracer kinetics from sequences of positron emission tomography (PET) scan data using a statistical sequential algorithm for the inference of magnitudes of dynamic parameters. The method, based on Bayesian statistical inference, is a modification of a recently proposed particle filtering and sequential Monte Carlo algorithm, where instead of preassigning the accuracy in the propagation of each particle, we fix the time step and account for the numerical errors in the innovation term. We apply the algorithm to PET images of [1-¹¹C]-acetate-derived tracer accumulation, estimating the transport rates in a three-compartment model of astrocytic uptake and metabolism of the tracer for a cohort of 18 volunteers from 3 groups, corresponding to healthy control individuals, cirrhotic liver and hepatic encephalopathy patients. The distribution of the parameters for the individuals and for the groups presented within the Bayesian framework support the hypothesis that the parameters for the hepatic encephalopathy group follow a significantly different distribution than the other two groups. The biological implications of the findings are also discussed.
Wicke, Jason; Dumas, Genevieve A
2010-02-01
The geometric method combines a volume and a density function to estimate body segment parameters and has the best opportunity for developing the most accurate models. In the trunk, there are many different tissues that greatly differ in density (e.g., bone versus lung). Thus, the density function for the trunk must be particularly sensitive to capture this diversity, such that accurate inertial estimates are possible. Three different models were used to test this hypothesis by estimating trunk inertial parameters of 25 female and 24 male college-aged participants. The outcome of this study indicates that the inertial estimates for the upper and lower trunk are most sensitive to the volume function and not very sensitive to the density function. Although it appears that the uniform density function has a greater influence on inertial estimates in the lower trunk region than in the upper trunk region, this is likely due to the (overestimated) density value used. When geometric models are used to estimate body segment parameters, care must be taken in choosing a model that can accurately estimate segment volumes. Researchers wanting to develop accurate geometric models should focus on the volume function, especially in unique populations (e.g., pregnant or obese individuals).
Chakra B. Budhathoki; Thomas B. Lynch; James M. Guldin
2010-01-01
Nonlinear mixed-modeling methods were used to estimate parameters in an individual-tree basal area growth model for shortleaf pine (Pinus echinata Mill.). Shortleaf pine individual-tree growth data were available from over 200 permanently established 0.2-acre fixed-radius plots located in naturally-occurring even-aged shortleaf pine forests on the...
Technical Note: Approximate Bayesian parameterization of a process-based tropical forest model
NASA Astrophysics Data System (ADS)
Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.
2014-02-01
Inverse parameter estimation of process-based models is a long-standing problem in many scientific disciplines. A key question for inverse parameter estimation is how to define the metric that quantifies how well model predictions fit to the data. This metric can be expressed by general cost or objective functions, but statistical inversion methods require a particular metric, the probability of observing the data given the model parameters, known as the likelihood. For technical and computational reasons, likelihoods for process-based stochastic models are usually based on general assumptions about variability in the observed data, and not on the stochasticity generated by the model. Only in recent years have new methods become available that allow the generation of likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional Markov chain Monte Carlo (MCMC) sampler, performs well in retrieving known parameter values from virtual inventory data generated by the forest model. We analyze the results of the parameter estimation, examine its sensitivity to the choice and aggregation of model outputs and observed data (summary statistics), and demonstrate the application of this method by fitting the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss how this approach differs from approximate Bayesian computation (ABC), another method commonly used to generate simulation-based likelihood approximations. 
Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can be successfully applied to process-based models of high complexity. The methodology is particularly suitable for heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models.
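The parametric likelihood approximation the authors describe can be shown in miniature with a standard "synthetic likelihood" sketch (invented names and a scalar summary statistic, not the FORMIND machinery): run the stochastic model repeatedly at a candidate parameter, fit a normal distribution to the simulated summary statistic, and score the observed summary under that normal.

```python
import math
import random

def synthetic_loglik(simulate, theta, observed_summary, n_sims=200, seed=0):
    """Normal ('synthetic') likelihood approximation from stochastic simulations:
    run the model n_sims times at theta, fit a normal to the simulated summary
    statistic, and score the observed summary under that normal."""
    rng = random.Random(seed)
    sims = [simulate(theta, rng) for _ in range(n_sims)]
    mu = sum(sims) / n_sims
    var = sum((s - mu) ** 2 for s in sims) / (n_sims - 1)
    return -0.5 * math.log(2 * math.pi * var) - (observed_summary - mu) ** 2 / (2 * var)
```

In an MCMC sampler, this value simply replaces the analytical log-likelihood in the acceptance ratio.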
Waniewski, Jacek; Antosiewicz, Stefan; Baczynski, Daniel; Poleszczuk, Jan; Pietribiasi, Mauro; Lindholm, Bengt; Wankowicz, Zofia
2016-01-01
During peritoneal dialysis (PD), the peritoneal membrane undergoes ageing processes that affect its function. Here we analyzed associations of patient age and dialysis vintage with parameters of peritoneal transport of fluid and solutes, directly measured and estimated based on the pore model, for individual patients. Thirty-three patients (15 females; age 60 (21-87) years; median time on PD 19 (3-100) months) underwent sequential peritoneal equilibration test. Dialysis vintage and patient age did not correlate. Estimation of parameters of the two-pore model of peritoneal transport was performed. The estimated fluid transport parameters, including hydraulic permeability (LpS), fraction of ultrasmall pores (αu), osmotic conductance for glucose (OCG), and peritoneal absorption, were generally independent of solute transport parameters (diffusive mass transport parameters). Fluid transport parameters correlated with dialysis vintage and patient age, whereas transport parameters for small solutes and proteins did not. Although LpS and OCG were lower for older patients and those with long dialysis vintage, αu was higher. Thus, fluid transport parameters, rather than solute transport parameters, are linked to dialysis vintage and patient age and should therefore be included when monitoring processes linked to ageing of the peritoneal membrane.
Peng, Mei; Jaeger, Sara R; Hautus, Michael J
2014-03-01
Psychometric functions are predominantly used for estimating detection thresholds in vision and audition. However, the requirement of large data quantities for fitting psychometric functions (>30 replications) reduces their suitability in olfactory studies because olfactory response data are often limited (<4 replications) due to the susceptibility of human olfactory receptors to fatigue and adaptation. This article introduces a new method for fitting individual-judge psychometric functions to olfactory data obtained using the current standard protocol, American Society for Testing and Materials (ASTM) E679. The slope parameter of the individual-judge psychometric function is fixed to be the same as that of the group function; the same-shaped symmetrical sigmoid function is fitted only using the intercept. This study evaluated the proposed method by comparing it with 2 available methods. Comparison to conventional psychometric functions (fitted slope and intercept) indicated that the assumption of a fixed slope did not compromise precision of the threshold estimates. No systematic difference was obtained between the proposed method and the ASTM method in terms of group threshold estimates or threshold distributions, but there were changes in the rank, by threshold, of judges in the group. Overall, the fixed-slope psychometric function is recommended for obtaining relatively reliable individual threshold estimates when the quantity of data is limited.
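The fixed-slope idea can be sketched as follows (an illustrative Python version with invented names and a simple grid search, not the authors' fitting procedure): the sigmoid's slope is pinned to the group value and only the threshold (intercept) is fitted per judge.

```python
import math

def psychometric(x, threshold, slope):
    """Symmetric sigmoid detection function."""
    return 1.0 / (1.0 + math.exp(-slope * (x - threshold)))

def fit_threshold_fixed_slope(concentrations, p_detect, slope, grid):
    """Fit only the threshold; the slope is pinned to the group-level value."""
    def sse(t):
        return sum((psychometric(x, t, slope) - p) ** 2
                   for x, p in zip(concentrations, p_detect))
    return min(grid, key=sse)
```

Because only one parameter is free, the fit remains stable even with very few replications per judge.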
Deletion Diagnostics for the Generalised Linear Mixed Model with independent random effects
Ganguli, B.; Roy, S. Sen; Naskar, M.; Malloy, E. J.; Eisen, E. A.
2015-01-01
The Generalised Linear Mixed Model (GLMM) is widely used for modelling environmental data. However, such data are prone to influential observations which can distort the estimated exposure-response curve particularly in regions of high exposure. Deletion diagnostics for iterative estimation schemes commonly derive the deleted estimates based on a single iteration of the full system holding certain pivotal quantities such as the information matrix to be constant. In this paper, we present an approximate formula for the deleted estimates and Cook’s distance for the GLMM which does not assume that the estimates of variance parameters are unaffected by deletion. The procedure allows the user to calculate standardised DFBETAs for mean as well as variance parameters. In certain cases, such as when using the GLMM as a device for smoothing, such residuals for the variance parameters are interesting in their own right. In general, the procedure leads to deleted estimates of mean parameters which are corrected for the effect of deletion on variance components as estimation of the two sets of parameters is interdependent. The probabilistic behaviour of these residuals is investigated and a simulation based procedure suggested for their standardisation. The method is used to identify influential individuals in an occupational cohort exposed to silica. The results show that failure to conduct post model fitting diagnostics for variance components can lead to erroneous conclusions about the fitted curve and unstable confidence intervals. PMID:26626135
Competing risks regression for clustered data
Zhou, Bingqing; Fine, Jason; Latouche, Aurelien; Labopin, Myriam
2012-01-01
A population average regression model is proposed to assess the marginal effects of covariates on the cumulative incidence function when there is dependence across individuals within a cluster in the competing risks setting. This method extends the Fine–Gray proportional hazards model for the subdistribution to situations, where individuals within a cluster may be correlated due to unobserved shared factors. Estimators of the regression parameters in the marginal model are developed under an independence working assumption where the correlation across individuals within a cluster is completely unspecified. The estimators are consistent and asymptotically normal, and variance estimation may be achieved without specifying the form of the dependence across individuals. A simulation study evidences that the inferential procedures perform well with realistic sample sizes. The practical utility of the methods is illustrated with data from the European Bone Marrow Transplant Registry. PMID:22045910
Huang, Yu; Guo, Feng; Li, Yongling; Liu, Yufeng
2015-01-01
Parameter estimation for fractional-order chaotic systems is an important issue in fractional-order chaotic control and synchronization and could be essentially formulated as a multidimensional optimization problem. A novel algorithm called quantum parallel particle swarm optimization (QPPSO) is proposed to solve the parameter estimation for fractional-order chaotic systems. The parallel characteristic of quantum computing is used in QPPSO. This characteristic increases the calculation of each generation exponentially. The behavior of particles in quantum space is restrained by the quantum evolution equation, which consists of the current rotation angle, individual optimal quantum rotation angle, and global optimal quantum rotation angle. Numerical simulation based on several typical fractional-order systems and comparisons with some typical existing algorithms show the effectiveness and efficiency of the proposed algorithm. PMID:25603158
Individualized estimation of human core body temperature using noninvasive measurements.
Laxminarayan, Srinivas; Rakesh, Vineet; Oyama, Tatsuya; Kazman, Josh B; Yanovich, Ran; Ketko, Itay; Epstein, Yoram; Morrison, Shawnda; Reifman, Jaques
2018-06-01
A rising core body temperature (Tc) during strenuous physical activity is a leading indicator of heat-injury risk. Hence, a system that can estimate Tc in real time and provide early warning of an impending temperature rise may enable proactive interventions to reduce the risk of heat injuries. However, real-time field assessment of Tc requires impractical invasive technologies. To address this problem, we developed a mathematical model that describes the relationships between Tc and noninvasive measurements of an individual's physical activity, heart rate, and skin temperature, and two environmental variables (ambient temperature and relative humidity). A Kalman filter adapts the model parameters to each individual and provides real-time personalized Tc estimates. Using data from three distinct studies, comprising 166 subjects who performed treadmill and cycle ergometer tasks under different experimental conditions, we assessed model performance via the root mean squared error (RMSE). The individualized model yielded an overall average RMSE of 0.33 (SD = 0.18)°C, allowing us to reach the same conclusions in each study as those obtained using the Tc measurements. Furthermore, for 22 unique subjects whose Tc exceeded 38.5°C, a potential lower Tc limit of clinical relevance, the average RMSE decreased to 0.25 (SD = 0.20)°C. Importantly, these results remained robust in the presence of simulated real-world operational conditions, yielding no more than 16% worse RMSEs when measurements were missing (40%) or laden with added noise. Hence, the individualized model provides a practical means to develop an early warning system for reducing heat-injury risk. NEW & NOTEWORTHY A model that uses an individual's noninvasive measurements and environmental variables can continually "learn" the individual's heat-stress response by automatically adapting the model parameters on the fly to provide real-time individualized core body temperature estimates.
This individualized model can replace impractical invasive sensors, serving as a practical and effective surrogate for core temperature monitoring.
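The predict/update cycle behind such a Kalman-filter estimator can be sketched generically (a scalar toy model with invented noise values, not the authors' multi-input physiological model):

```python
def kalman_step(x, P, z, Q, R):
    """One predict/update cycle of a scalar Kalman filter.

    x, P : prior state estimate and its variance
    z    : new (noisy) observation of the state
    Q, R : process and measurement noise variances
    """
    # Predict: random-walk state model
    x_pred, P_pred = x, P + Q
    # Update: blend prediction and observation via the Kalman gain
    K = P_pred / (P_pred + R)
    return x_pred + K * (z - x_pred), (1 - K) * P_pred

def track(observations, x0=37.0, P0=1.0, Q=0.01, R=0.25):
    """Run the filter over a series of observations, returning the estimates."""
    x, P = x0, P0
    estimates = []
    for z in observations:
        x, P = kalman_step(x, P, z, Q, R)
        estimates.append(x)
    return estimates
```

Fed a series of temperature surrogates, the estimate moves from the 37 °C prior toward the observed level at a rate set by the ratio of process to measurement noise.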
USDA-ARS?s Scientific Manuscript database
Ecophysiological crop models encode intra-species behaviors using parameters that are presumed to summarize genotypic properties of individual lines or cultivars. These genotype-specific parameters (GSP’s) can be interpreted as quantitative traits that can be mapped or otherwise analyzed, as are mor...
Thompson, Robin N.; Gilligan, Christopher A.; Cunniffe, Nik J.
2016-01-01
We assess how presymptomatic infection affects predictability of infectious disease epidemics. We focus on whether or not a major outbreak (i.e. an epidemic that will go on to infect a large number of individuals) can be predicted reliably soon after initial cases of disease have appeared within a population. For emerging epidemics, significant time and effort is spent recording symptomatic cases. Scientific attention has often focused on improving statistical methodologies to estimate disease transmission parameters from these data. Here we show that, even if symptomatic cases are recorded perfectly, and disease spread parameters are estimated exactly, it is impossible to estimate the probability of a major outbreak without ambiguity. Our results therefore provide an upper bound on the accuracy of forecasts of major outbreaks that are constructed using data on symptomatic cases alone. Accurate prediction of whether or not an epidemic will occur requires records of symptomatic individuals to be supplemented with data concerning the true infection status of apparently uninfected individuals. To forecast likely future behavior in the earliest stages of an emerging outbreak, it is therefore vital to develop and deploy accurate diagnostic tests that can determine whether asymptomatic individuals are actually uninfected, or instead are infected but just do not yet show detectable symptoms. PMID:27046030
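For context, a textbook branching-process baseline (not the paper's full analysis) gives the probability of a major outbreak as 1 − (1/R0)^I0 for R0 > 1, where I0 is the number of initial cases:

```python
def major_outbreak_probability(R0, initial_infected=1):
    """Branching-process approximation: each initial case independently fails to
    spark a major outbreak with probability 1/R0 (for R0 > 1)."""
    if R0 <= 1:
        return 0.0
    return 1.0 - (1.0 / R0) ** initial_infected
```

The paper's point is that the effective I0 is ambiguous when presymptomatic infections go unrecorded, so even this simple formula cannot be evaluated from symptomatic case data alone.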
Optimal Bandwidth for Multitaper Spectrum Estimation
Haley, Charlotte L.; Anitescu, Mihai
2017-07-04
A systematic method for bandwidth parameter selection is desired for Thomson multitaper spectrum estimation. We give a method for determining the optimal bandwidth based on a mean squared error (MSE) criterion. When the true spectrum has a second-order Taylor series expansion, one can express quadratic local bias as a function of the curvature of the spectrum, which can be estimated by using a simple spline approximation. This is combined with a variance estimate, obtained by jackknifing over individual spectrum estimates, to produce an estimated MSE for the log spectrum estimate for each choice of time-bandwidth product. The bandwidth that minimizes the estimated MSE then gives the desired spectrum estimate. Additionally, the bandwidth obtained using our method is also optimal for cepstrum estimates. We give an example of a damped oscillatory (Lorentzian) process in which the approximate optimal bandwidth can be written as a function of the damping parameter. Furthermore, the true optimal bandwidth agrees well with that given by minimizing the estimated MSE in these examples.
Spatially-explicit estimation of Wright's neighborhood size in continuous populations
Andrew J. Shirk; Samuel A. Cushman
2014-01-01
Effective population size (Ne) is an important parameter in conservation genetics because it quantifies a population's capacity to resist loss of genetic diversity due to inbreeding and drift. The classical approach to estimate Ne from genetic data involves grouping sampled individuals into discretely defined subpopulations assumed to be panmictic. Importantly,...
Parker, Maximilian G; Tyson, Sarah F; Weightman, Andrew P; Abbott, Bruce; Emsley, Richard; Mansell, Warren
2017-11-01
Computational models that simulate individuals' movements in pursuit-tracking tasks have been used to elucidate mechanisms of human motor control. Whilst there is evidence that individuals demonstrate idiosyncratic control-tracking strategies, it remains unclear whether models can be sensitive to these idiosyncrasies. Perceptual control theory (PCT) provides a unique model architecture with an internally set reference value parameter, and can be optimized to fit an individual's tracking behavior. The current study investigated whether PCT models could show temporal stability and individual specificity over time. Twenty adults completed three blocks of 15 1-min, pursuit-tracking trials. Two blocks (training and post-training) were completed in one session and the third was completed after 1 week (follow-up). The target moved in a one-dimensional, pseudorandom pattern. PCT models were optimized to the training data using a least-mean-squares algorithm, and validated with data from post-training and follow-up. We found significant inter-individual variability (partial η²: .464-.697) and intra-individual consistency (Cronbach's α: .880-.976) in parameter estimates. Polynomial regression revealed that all model parameters, including the reference value parameter, contribute to simulation accuracy. Participants' tracking performances were significantly more accurately simulated by models developed from their own tracking data than by models developed from other participants' data. We conclude that PCT models can be optimized to simulate the performance of an individual and that the test-retest reliability of individual models is a necessary criterion for evaluating computational models of human performance.
Modeling structured population dynamics using data from unmarked individuals
Grant, Evan H. Campbell; Zipkin, Elise; Thorson, James T.; See, Kevin; Lynch, Heather J.; Kanno, Yoichiro; Chandler, Richard; Letcher, Benjamin H.; Royle, J. Andrew
2014-01-01
The study of population dynamics requires unbiased, precise estimates of abundance and vital rates that account for the demographic structure inherent in all wildlife and plant populations. Traditionally, these estimates have only been available through approaches that rely on intensive mark–recapture data. We extended recently developed N-mixture models to demonstrate how demographic parameters and abundance can be estimated for structured populations using only stage-structured count data. Our modeling framework can be used to make reliable inferences on abundance as well as recruitment, immigration, stage-specific survival, and detection rates during sampling. We present a range of simulations to illustrate the data requirements, including the number of years and locations necessary for accurate and precise parameter estimates. We apply our modeling framework to a population of northern dusky salamanders (Desmognathus fuscus) in the mid-Atlantic region (USA) and find that the population is unexpectedly declining. Our approach represents a valuable advance in the estimation of population dynamics using multistate data from unmarked individuals and should additionally be useful in the development of integrated models that combine data from intensive (e.g., mark–recapture) and extensive (e.g., counts) data sources.
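The N-mixture idea (latent abundance N ~ Poisson(λ); each repeated count a Binomial(N, p) thinning) can be sketched for a single site by marginalizing N out of the likelihood. This is an illustrative single-site version with an invented truncation point, not the authors' multistate extension:

```python
import math

def nmixture_loglik(counts, lam, p, n_max=50):
    """Single-site N-mixture log-likelihood: latent abundance N ~ Poisson(lam)
    is summed out; each repeated count is Binomial(N, p). n_max truncates the sum."""
    total = 0.0
    for N in range(max(counts), n_max + 1):
        # Poisson pmf via logs to avoid overflow in factorials
        log_pois = N * math.log(lam) - lam - math.lgamma(N + 1)
        lik = math.exp(log_pois)
        for c in counts:
            lik *= math.comb(N, c) * p**c * (1 - p) ** (N - c)
        total += lik
    return math.log(total)
```

Maximizing this over (lam, p) recovers abundance and detection jointly; the repeated visits are what make the two parameters separately identifiable.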
Individual tree diameter, height, and volume functions for longleaf pine
Carlos A. Gonzalez-Benecke; Salvador A. Gezan; Timothy A. Martin; Wendell P. Cropper; Lisa J. Samuelson; Daniel J. Leduc
2014-01-01
Currently, little information is available to estimate individual tree attributes for longleaf pine (Pinus palustris Mill.), an important tree species of the southeastern United States. The majority of available models are local, relying on stem diameter outside bark at breast height (dbh, cm) and not including stand-level parameters. We developed...
Technical Note: Approximate Bayesian parameterization of a complex tropical forest model
NASA Astrophysics Data System (ADS)
Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.
2013-08-01
Inverse parameter estimation of process-based models is a long-standing problem in ecology and evolution. A key problem of inverse parameter estimation is to define a metric that quantifies how well model predictions fit to the data. Such a metric can be expressed by general cost or objective functions, but statistical inversion approaches are based on a particular metric, the probability of observing the data given the model, known as the likelihood. Deriving likelihoods for dynamic models requires making assumptions about the probability for observations to deviate from mean model predictions. For technical reasons, these assumptions are usually derived without explicit consideration of the processes in the simulation. Only in recent years have new methods become available that allow generating likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional MCMC, performs well in retrieving known parameter values from virtual field data generated by the forest model. We analyze the results of the parameter estimation, examine the sensitivity towards the choice and aggregation of model outputs and observed data (summary statistics), and show results from using this method to fit the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss differences of this approach to approximate Bayesian computation (ABC), another commonly used method to generate simulation-based likelihood approximations. 
Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can successfully be applied to process-based models of high complexity. The methodology is particularly suited to heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models in ecology and evolution.
Hierarchical models and the analysis of bird survey information
Sauer, J.R.; Link, W.A.
2003-01-01
Management of birds often requires analysis of collections of estimates. We describe a hierarchical modeling approach to the analysis of these data, in which parameters associated with the individual species estimates are treated as random variables, and probability statements are made about the species parameters conditioned on the data. A Markov-Chain Monte Carlo (MCMC) procedure is used to fit the hierarchical model. This approach is computer intensive, and is based upon simulation. MCMC allows for estimation both of parameters and of derived statistics. To illustrate the application of this method, we use the case in which we are interested in attributes of a collection of estimates of population change. Using data for 28 species of grassland-breeding birds from the North American Breeding Bird Survey, we estimate the number of species with increasing populations, provide precision-adjusted rankings of species trends, and describe a measure of population stability as the probability that the trend for a species is within a certain interval. Hierarchical models can be applied to a variety of bird survey applications, and we are investigating their use in estimation of population change from survey data.
NASA Technical Reports Server (NTRS)
Ratnayake, Nalin A.; Waggoner, Erin R.; Taylor, Brian R.
2011-01-01
The problem of parameter estimation on hybrid-wing-body aircraft is complicated by the fact that many design candidates for such aircraft involve a large number of aerodynamic control effectors that act in coplanar motion. This adds to the complexity already present in the parameter estimation problem for any aircraft with a closed-loop control system. Decorrelation of flight and simulation data must be performed in order to ascertain individual surface derivatives with any sort of mathematical confidence. Non-standard control surface configurations, such as clamshell surfaces and drag-rudder modes, further complicate the modeling task. In this paper, time-decorrelation techniques are applied to a model structure selected through stepwise regression for simulated and flight-generated lateral-directional parameter estimation data. A virtual effector model that uses mathematical abstractions to describe the multi-axis effects of clamshell surfaces is developed and applied. Comparisons are made between time history reconstructions and observed data in order to assess the accuracy of the regression model. The Cramér-Rao lower bounds of the estimated parameters are used to assess the uncertainty of the regression model relative to alternative models. Stepwise regression was found to be a useful technique for lateral-directional model design for hybrid-wing-body aircraft, as suggested by available flight data. Based on the results of this study, linear regression parameter estimation methods using abstracted effectors are expected to perform well for hybrid-wing-body aircraft properly equipped for the task.
The Relationship Between School Holidays and Transmission of Influenza in England and Wales
Jackson, Charlotte; Vynnycky, Emilia; Mangtani, Punam
2016-01-01
Abstract School closure is often considered as an influenza control measure, but its effects on transmission are poorly understood. We used 2 approaches to estimate how school holidays affect the contact parameter (the per capita rate of contact sufficient for infection transmission) for influenza using primary care data from England and Wales (1967–2000). Firstly, we fitted an age-structured susceptible-infectious-recovered model to each year's data to estimate the proportional change in the contact parameter during school holidays as compared with termtime. Secondly, we calculated the percentage difference in the contact parameter between holidays and termtime from weekly values of the contact parameter, estimated directly from simple mass-action models. Estimates were combined using random-effects meta-analysis, where appropriate. From fitting to the data, the difference in the contact parameter among children aged 5–14 years during holidays as compared with termtime ranged from a 36% reduction to a 17% increase; estimates were too heterogeneous for meta-analysis. Based on the simple mass-action model, the contact parameter was 17% (95% confidence interval: 10, 25) lower during holidays than during termtime. Results were robust to the assumed proportions of infections that were reported and individuals who were susceptible when the influenza season started. We conclude that school closure may reduce transmission during influenza outbreaks. PMID:27744384
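The second (simple mass-action) approach can be sketched directly. This is an illustrative discrete-time version with invented names; the actual analysis also adjusts for reporting rates and initial susceptibility. Back-calculate a weekly contact parameter from case counts via C_{t+1} = β_t S_t I_t, then compare holiday and term-time weeks:

```python
def weekly_contact_parameter(cases, population):
    """Back-calculate the mass-action contact parameter beta_t from weekly
    case counts, using new cases C_{t+1} = beta_t * S_t * I_t."""
    S = population - cases[0]
    betas = []
    for t in range(len(cases) - 1):
        I = cases[t]
        betas.append(cases[t + 1] / (S * I) if S * I > 0 else 0.0)
        S -= cases[t + 1]
    return betas

def percent_difference(betas, holiday_weeks):
    """Percentage difference of the mean contact parameter, holidays vs. termtime."""
    hol = [b for t, b in enumerate(betas) if t in holiday_weeks]
    term = [b for t, b in enumerate(betas) if t not in holiday_weeks]
    mean_term = sum(term) / len(term)
    return 100.0 * (sum(hol) / len(hol) - mean_term) / mean_term
```

On synthetic data generated with a 20% holiday reduction in β, the procedure recovers a −20% difference exactly.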
Maintained Individual Data Distributed Likelihood Estimation (MIDDLE)
Boker, Steven M.; Brick, Timothy R.; Pritikin, Joshua N.; Wang, Yang; von Oertzen, Timo; Brown, Donald; Lach, John; Estabrook, Ryne; Hunter, Michael D.; Maes, Hermine H.; Neale, Michael C.
2015-01-01
Maintained Individual Data Distributed Likelihood Estimation (MIDDLE) is a novel paradigm for research in the behavioral, social, and health sciences. The MIDDLE approach is based on the seemingly-impossible idea that data can be privately maintained by participants and never revealed to researchers, while still enabling statistical models to be fit and scientific hypotheses tested. MIDDLE rests on the assumption that participant data should belong to, be controlled by, and remain in the possession of the participants themselves. Distributed likelihood estimation refers to fitting statistical models by sending an objective function and vector of parameters to each participants’ personal device (e.g., smartphone, tablet, computer), where the likelihood of that individual’s data is calculated locally. Only the likelihood value is returned to the central optimizer. The optimizer aggregates likelihood values from responding participants and chooses new vectors of parameters until the model converges. A MIDDLE study provides significantly greater privacy for participants, automatic management of opt-in and opt-out consent, lower cost for the researcher and funding institute, and faster determination of results. Furthermore, if a participant opts into several studies simultaneously and opts into data sharing, these studies automatically have access to individual-level longitudinal data linked across all studies. PMID:26717128
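The distributed-likelihood loop can be sketched in a few lines (a toy normal-mean model with invented names; a real MIDDLE deployment additionally handles networking, consent management, and general structural equation models): each "device" computes the log-likelihood of its private data at the proposed parameter, and only those scalars reach the central optimizer.

```python
import math

def local_loglik(private_data, mu, sigma=1.0):
    """Runs on a participant's device; only this scalar ever leaves it."""
    return sum(-0.5 * math.log(2 * math.pi * sigma ** 2)
               - (x - mu) ** 2 / (2 * sigma ** 2) for x in private_data)

def middle_fit(devices, grid):
    """Central optimizer: aggregate per-device log-likelihoods, never raw data."""
    return max(grid, key=lambda mu: sum(local_loglik(d, mu) for d in devices))
```

Here a grid search stands in for the iterative optimizer; the privacy property is that `middle_fit` sees only summed likelihood values, never the participants' observations.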
SCoPE: an efficient method of Cosmological Parameter Estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Das, Santanu; Souradeep, Tarun
The Markov chain Monte Carlo (MCMC) sampler is widely used for cosmological parameter estimation from CMB and other data. However, due to the intrinsic serial nature of the MCMC sampler, convergence is often very slow. Here we present a fast and independently written Monte Carlo method for cosmological parameter estimation, named Slick Cosmological Parameter Estimator (SCoPE), that employs delayed rejection to increase the acceptance rate of a chain, and pre-fetching that helps an individual chain to run on parallel CPUs. An inter-chain covariance update is also incorporated to prevent clustering of the chains, allowing faster and better mixing of the chains. We use an adaptive method for covariance calculation to calculate and update the covariance automatically as the chains progress. Our analysis shows that the acceptance probability of each step in SCoPE is more than 95% and the convergence of the chains is faster. Using SCoPE, we carry out cosmological parameter estimation with different cosmological models using WMAP-9 and Planck results. One of the current research interests in cosmology is quantifying the nature of dark energy. We analyze the cosmological parameters from two illustrative commonly used parameterisations of dark energy models. We also assess whether the primordial helium fraction in the universe can be constrained by the present CMB data from WMAP-9 and Planck. The results from our MCMC analysis on the one hand help us to understand the workability of SCoPE better, and on the other hand provide a completely independent estimation of cosmological parameters from WMAP-9 and Planck data.
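At its core SCoPE is still a Metropolis-type sampler; the plain random-walk accept/reject loop that delayed rejection, pre-fetching, and adaptive covariance all build on can be sketched as follows (a generic 1-D illustration, not the SCoPE code):

```python
import math
import random

def metropolis(logpost, n_steps, x0=0.0, step=1.0, seed=0):
    """Plain random-walk Metropolis sampler; returns the chain and the
    fraction of accepted proposals."""
    rng = random.Random(seed)
    x, lp = x0, logpost(x0)
    chain, accepts = [], 0
    for _ in range(n_steps):
        prop = x + rng.gauss(0.0, step)
        lp_prop = logpost(prop)
        # Accept with probability min(1, posterior ratio)
        if math.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
            accepts += 1
        chain.append(x)
    return chain, accepts / n_steps
```

Delayed rejection adds a second, smaller proposal after each rejection; pre-fetching speculatively evaluates the next few possible proposals on parallel CPUs so the serial chain advances faster.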
Using pairs of physiological models to estimate temporal variation in amphibian body temperature.
Roznik, Elizabeth A; Alford, Ross A
2014-10-01
Physical models are often used to estimate ectotherm body temperatures, but designing accurate models for amphibians is difficult because they can vary in cutaneous resistance to evaporative water loss. To account for this variability, a recently published technique requires a pair of agar models that mimic amphibians with 0% and 100% resistance to evaporative water loss; the temperatures of these models define the lower and upper boundaries of possible amphibian body temperatures for the location in which they are placed. The goal of our study was to develop a method for using these pairs of models to estimate parameters describing the distributions of body temperatures of frogs under field conditions. We radiotracked green-eyed treefrogs (Litoria serrata) and collected semi-continuous thermal data using both temperature-sensitive radiotransmitters with an automated datalogging receiver, and pairs of agar models placed in frog locations, and we collected discrete thermal data using a non-contact infrared thermometer when frogs were located. We first examined the accuracy of temperature-sensitive transmitters in estimating frog body temperatures by comparing transmitter data with direct temperature measurements taken simultaneously for the same individuals. We then compared parameters (mean, minimum, maximum, standard deviation) characterizing the distributions of temperatures of individual frogs estimated from data collected using each of the three methods. We found strong relationships between thermal parameters estimated from data collected using automated radiotelemetry and both types of thermal models. These relationships were stronger for data collected using automated radiotelemetry and impermeable thermal models, suggesting that in the field, L. serrata has a relatively high resistance to evaporative water loss. 
Our results demonstrate that placing pairs of thermal models in frog locations can provide accurate estimates of the distributions of temperatures experienced by individual frogs, and that comparing temperatures from model pairs to direct measurements collected simultaneously on frogs can be used to broadly characterize the skin resistance of a species, and to select which model type is most appropriate for estimating temperature distributions for that species. Copyright © 2014 Elsevier Ltd. All rights reserved.
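The pairing logic can be illustrated with a toy calculation. Assuming, as a simplification, that a frog's temperature is approximately a resistance-weighted average of the permeable (0% resistance) and impermeable (100% resistance) model temperatures, a relative resistance parameter can be recovered by least squares (all numbers below are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated simultaneous readings: permeable-model (0% resistance) and
# impermeable-model (100% resistance) temperatures bracket the frog's.
n = 200
t_perm = rng.uniform(18.0, 24.0, n)            # evaporatively cooled bound
t_imperm = t_perm + rng.uniform(2.0, 6.0, n)   # warmer, no-evaporation bound
true_r = 0.8                                   # high relative skin resistance
t_frog = (1 - true_r) * t_perm + true_r * t_imperm + rng.normal(0, 0.2, n)

# Least-squares estimate of relative resistance r in
#   T_frog = (1 - r) * T_perm + r * T_imperm
x = t_imperm - t_perm
y = t_frog - t_perm
r_hat = float(np.sum(x * y) / np.sum(x * x))
```

An estimate near 1 would indicate a species whose temperatures track the impermeable model, consistent with the high resistance inferred for L. serrata.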
Ladtap XL Version 2017: A Spreadsheet For Estimating Dose Resulting From Aqueous Releases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Minter, K.; Jannik, T.
LADTAP XL© is an EXCEL© spreadsheet used to estimate dose to offsite individuals and populations resulting from routine and accidental releases of radioactive materials to the Savannah River. LADTAP XL© contains two worksheets: LADTAP and IRRIDOSE. The LADTAP worksheet estimates dose for environmental pathways, including external exposure resulting from recreational activities on the Savannah River and internal exposure resulting from ingestion of water, fish, and invertebrates originating from the Savannah River. IRRIDOSE estimates offsite dose to individuals and populations from irrigation of foodstuffs with contaminated water from the Savannah River. In 2004, a complete description of the LADTAP XL© code and an associated user's manual was documented in LADTAP XL©: A Spreadsheet for Estimating Dose Resulting from Aqueous Release (WSRC-TR-2004-00059); revised input parameters, dose coefficients, and radionuclide decay constants were incorporated into LADTAP XL© Version 2013 (SRNL-STI-2011-00238). LADTAP XL© Version 2017 is a slight modification of Version 2013, with minor changes for more user-friendly parameter inputs and organization, updates to the time conversion factors used within the dose calculations, and a fix for an issue with the expected time build-up parameter referenced within the population shoreline dose calculations. This manual updates the code description, documents verification of the models, and provides an updated user's manual. LADTAP XL© Version 2017 has been verified by Minter (2017) and is ready for use at the Savannah River Site (SRS).
Estimating parameters from rotating ring disc electrode measurements
Santhanagopalan, Shriram; White, Ralph E.
2017-10-21
Rotating ring disc electrode (RRDE) experiments are a classic tool for investigating the kinetics of electrochemical reactions. Several standardized methods exist for extracting transport parameters and reaction rate constants from RRDE measurements. In this work, we compare some approximate solutions to the convective diffusion equation popularly used in the literature against a rigorous numerical solution of the Nernst-Planck equations coupled to the three-dimensional flow problem. In light of these computational advancements, we explore design aspects of the RRDE that help improve the sensitivity of our parameter estimation procedure to experimental data. We use oxygen reduction in acidic media, involving three charge transfer reactions and a chemical reaction, as an example, and identify ways to isolate reaction currents for the individual processes in order to accurately estimate the exchange current densities.
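One of the approximate solutions commonly used with rotating-disc data is the Koutecky-Levich relation, under which 1/i is linear in ω^(-1/2). A hedged sketch of parameter extraction from that relation (synthetic currents and hypothetical parameter values, not the paper's numerical model):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic limiting currents (mA) at several rotation rates (rad/s),
# generated from the Koutecky-Levich relation 1/i = 1/i_k + 1/(B*sqrt(w))
i_k_true, B_true = 5.0, 0.8
omega = np.array([100.0, 225.0, 400.0, 625.0, 900.0, 1600.0, 2500.0])
i_meas = 1.0 / (1.0 / i_k_true + 1.0 / (B_true * np.sqrt(omega)))
i_meas = i_meas * (1 + rng.normal(0, 0.01, omega.size))  # 1% measurement noise

# Koutecky-Levich plot: 1/i is linear in 1/sqrt(omega);
# slope = 1/B, intercept = 1/i_k (kinetic current free of transport limits)
slope, intercept = np.polyfit(1.0 / np.sqrt(omega), 1.0 / i_meas, 1)
i_k_hat = 1.0 / intercept
B_hat = 1.0 / slope
```

The intercept isolates the kinetic current, which is the quantity one would then relate to an exchange current density.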
Taylor, Zeike A; Kirk, Thomas B; Miller, Karol
2007-10-01
The theoretical framework developed in a companion paper (Part I) is used to derive estimates of the mechanical response of two meniscal cartilage specimens. The previously developed framework consisted of a constitutive model capable of incorporating confocal image-derived tissue microstructural data. In the present paper (Part II), fibre and matrix constitutive parameters are first estimated from mechanical testing of a batch of specimens similar to, but independent from, those under consideration. Image analysis techniques that allow estimation of tissue microstructural parameters from confocal images are presented. The constitutive model and image-derived structural parameters are then used to predict the reaction force history of the two meniscal specimens subjected to partially confined compression. The predictions are made on the basis of the specimens' individual structural condition as assessed by confocal microscopy and involve no tuning of material parameters. Although the model does not reproduce all features of the experimental curves, as an unfitted estimate of mechanical response the prediction is quite accurate. In light of the obtained results it is judged that more general non-invasive estimation of tissue mechanical properties is possible using the developed framework.
NASA Astrophysics Data System (ADS)
Yadollahi, Azadeh
Tracheal respiratory sound analysis has been investigated as a non-invasive method to estimate respiratory flow and upper airway obstruction. However, the flow-sound relationship is highly variable among subjects, which makes it challenging to estimate flow in general applications; consequently, no robust model for acoustical flow estimation in a large group of individuals previously existed. Moreover, a major application of acoustical flow estimation is detecting flow limitations in patients with obstructive sleep apnea (OSA) during sleep, yet the flow-sound relationship had previously been investigated only during wakefulness in healthy individuals, so it was necessary to examine it during sleep in OSA patients. This thesis takes on these challenges and offers innovative solutions. First, a modified linear flow-sound model was proposed to estimate respiratory flow from tracheal sounds. To remove the individual calibration process, the statistical correlation between the model parameters and anthropometric features of 93 healthy volunteers was investigated. The results show that gender, height, and smoking are the most significant factors affecting the model parameters. Hence, a general acoustical flow estimation model was proposed for people of similar height and gender. Second, the flow-sound relationship during sleep and wakefulness was studied in 13 OSA patients. The results show that during both sleep and wakefulness, the flow-sound relationship follows a power law, but with different parameters. Therefore, for acoustical flow estimation during sleep, the model parameters should be extracted from sleep data to keep errors small. The results confirm the reliability of acoustical flow estimation for investigating flow variations during both sleep and wakefulness.
Finally, a new method for sleep apnea detection and monitoring was developed, which only requires recording the tracheal sounds and the blood's oxygen saturation level (SaO2) data. It automatically classifies the sound segments into breath, snore and noise. A weighted average of features extracted from sound segments and SaO2 signal was used to detect apnea and hypopnea events. The performance of the proposed approach was evaluated on the data of 66 patients. The results show high correlation (0.96, p < 0.0001) between the outcomes of our system and those of the polysomnography. Also, sensitivity and specificity of the proposed method in differentiating simple snorers from OSA patients were found to be more than 91%. These results are superior or comparable with the existing commercialized sleep apnea portable monitors.
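The power-law flow-sound relationship reported in the thesis can be sketched with a toy calibration: fit log sound power against log flow, then invert the fitted model to estimate flow acoustically. The exponent, coefficient, and noise level below are illustrative assumptions, not the thesis's values:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic calibration data for a power-law flow-sound model,
#   P = a * flow^b   (P: average tracheal sound power, flow: respiratory flow)
a_true, b_true = 2.0, 2.5
flow = rng.uniform(0.2, 1.5, 300)                 # L/s
power = a_true * flow ** b_true * np.exp(rng.normal(0, 0.05, flow.size))

# Fit in log space: log P = log a + b * log flow
b_hat, log_a_hat = np.polyfit(np.log(flow), np.log(power), 1)
a_hat = np.exp(log_a_hat)

# Acoustical flow estimation: invert the fitted model
flow_est = (power / a_hat) ** (1.0 / b_hat)
```

Fitting separate (a, b) pairs to sleep and wakefulness data would reflect the thesis's finding that the two states follow the same law with different parameters.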
Phuong, H N; Martin, O; de Boer, I J M; Ingvartsen, K L; Schmidely, Ph; Friggens, N C
2015-01-01
This study explored the ability of an existing lifetime nutrient partitioning model to simulate individual variability in the genetic potentials of dairy cows. The model assumes a universal trajectory of dynamic partitioning of priority between life functions; genetic scaling parameters are then incorporated to simulate individual differences in performance. Data from 102 cows comprising 180 lactations of 3 breeds (Danish Red, Danish Holstein, and Jersey), completely independent of the data used previously for model development, were used. Individual cow performance records through sequential lactations were used to derive genetic scaling parameters for each animal by calibrating the model to achieve the best fit, cow by cow. The model was able to fit individual curves of body weight, and milk fat, milk protein, and milk lactose concentrations with a high degree of accuracy. Daily milk yield and dry matter intake were satisfactorily predicted in early and mid lactation, but underpredictions were found in late lactation. Breed and parity did not significantly affect prediction accuracy. The means of the genetic scaling parameters for Danish Red and Danish Holstein were similar but significantly different from those of Jersey. The extent of correlation between the genetic scaling parameters was consistent with that reported in the literature. In conclusion, this model is of value as a tool to derive estimates of the genetic potentials of milk yield, milk composition, body reserve usage, and growth for different genotypes of cow. Moreover, it can be used to separate genetic variability in performance between individual cows from environmental noise. The model enables simulation of the effects of a genetic selection strategy on the lifetime efficiency of individual cows, with the key advantage of including rearing costs, and thus can be used to explore the impact of future selection on animal performance and efficiency.
Copyright © 2015 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Ward, John; Sorrels, Ken; Coats, Jesse; Pourmoghaddam, Amir; Deleon, Carlos; Daigneault, Paige
2014-03-01
The purpose of this study was to pilot test our study procedures and estimate parameters for sample size calculations for a randomized controlled trial to determine if bilateral sacroiliac (SI) joint manipulation affects specific gait parameters in asymptomatic individuals with a leg length inequality (LLI). Twenty-one asymptomatic chiropractic students engaged in a baseline 90-second walking kinematic analysis using infrared Vicon® cameras. Following this, participants underwent a functional LLI test. Upon examination participants were classified as: left short leg, right short leg, or no short leg. Half of the participants in each short leg group were then randomized to receive bilateral corrective SI joint chiropractic manipulative therapy (CMT). All participants then underwent another 90-second gait analysis. Pre- versus post-intervention gait data were then analyzed within treatment groups by an individual who was blinded to participant group status. For the primary analysis, all p-values were corrected for multiple comparisons using the Bonferroni method. Within groups, no differences in measured gait parameters were statistically significant after correcting for multiple comparisons. The protocol of this study was acceptable to all subjects who were invited to participate. No participants refused randomization. Based on the data collected, we estimated that a larger main study would require 34 participants in each comparison group to detect a moderate effect size.
Optimum Selection Age for Wood Density in Loblolly Pine
D.P. Gwaze; K.J. Harding; R.C. Purnell; Floyd E. Bridgwater
2002-01-01
Genetic and phenotypic parameters for core wood density of Pinus taeda L. were estimated for ages ranging from 5 to 25 years at two sites in the southern United States. Heritability estimates on an individual-tree basis for core density were lower than expected (0.20-0.31). Age-age genetic correlations were higher than phenotypic correlations,...
Udevitz, Mark S.; El-Shaarawi, Abdel H.; Piegorsch, Walter W.
2002-01-01
Change-in-ratio (CIR) methods are used to estimate parameters for ecological populations subject to differential removals from population subclasses. Subclasses can be defined according to criteria such as sex, age, or size of individuals. Removals are generally in the form of closely monitored sport or commercial harvests. Estimation is based on observed changes in subclass proportions caused by the removals.
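A minimal numerical sketch of the two-subclass CIR estimator (the harvest and survey numbers are invented for illustration): the pre-removal population size follows from the removals and the observed change in subclass proportion.

```python
# Two-subclass change-in-ratio (CIR) estimator. The classic form assumes the
# subclass proportions are observed without error and all removals are known.
def cir_estimate(p1, p2, r_x, r_total):
    """Estimate pre-removal population size from the change in the
    proportion of subclass x caused by known removals.

    p1, p2  : subclass-x proportion before and after removals
    r_x     : removals from subclass x
    r_total : total removals
    """
    return (r_x - r_total * p2) / (p1 - p2)

# Example: 40% males before harvest; 300 of 400 removals are male;
# a post-harvest survey finds 25% males.
n_hat = cir_estimate(p1=0.40, p2=0.25, r_x=300, r_total=400)
x_hat = 0.40 * n_hat  # estimated pre-removal number of males
```

Consistency check: the implied post-removal population ((n_hat - 400) animals, of which (x_hat - 300) are male) reproduces the observed 25% male proportion.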
Nicolas, Xavier; Djebli, Nassim; Rauch, Clémence; Brunet, Aurélie; Hurbin, Fabrice; Martinez, Jean-Marie; Fabre, David
2018-05-03
Alirocumab, a human monoclonal antibody against proprotein convertase subtilisin/kexin type 9 (PCSK9), significantly lowers low-density lipoprotein cholesterol levels. This analysis aimed to develop and qualify a population pharmacokinetic/pharmacodynamic model for alirocumab based on pooled data obtained from 13 phase I/II/III clinical trials. From a dataset of 2799 individuals (14,346 low-density lipoprotein-cholesterol values), individual pharmacokinetic parameters from the population pharmacokinetic model presented in Part I of this series were used to estimate alirocumab concentrations. As a second step, we then developed the current population pharmacokinetic/pharmacodynamic model using an indirect response model with a Hill coefficient, parameterized with increasing low-density lipoprotein cholesterol elimination, to relate alirocumab concentrations to low-density lipoprotein cholesterol values. The population pharmacokinetic/pharmacodynamic model allowed the characterization of the pharmacokinetic/pharmacodynamic properties of alirocumab in the target population and estimation of individual low-density lipoprotein cholesterol levels and derived pharmacodynamic parameters (the maximum decrease in low-density lipoprotein cholesterol values from baseline and the difference between baseline low-density lipoprotein cholesterol and the pre-dose value before the next alirocumab dose). Significant parameter-covariate relationships were retained in the model, with a total of ten covariates (sex, age, weight, free baseline PCSK9, total time-varying PCSK9, concomitant statin administration, total baseline PCSK9, co-administration of high-dose statins, disease status) included in the final population pharmacokinetic/pharmacodynamic model to explain between-subject variability. Nevertheless, the high number of covariates included in the model did not have a clinically meaningful impact on model-derived pharmacodynamic parameters. 
This model successfully allowed the characterization of the population pharmacokinetic/pharmacodynamic properties of alirocumab in its target population and the estimation of individual low-density lipoprotein cholesterol levels.
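The indirect-response structure described above (drug concentration stimulating LDL-C elimination through a Hill function) can be sketched as a small ODE simulation. All parameter values and the mono-exponential concentration profile below are hypothetical stand-ins, not the published alirocumab estimates:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (hypothetical, not the published model's values)
kin, kout = 10.0, 0.1          # LDL-C production (mg/dL/day) and elimination (1/day)
emax, ec50, gamma = 1.5, 5.0, 2.0  # Hill stimulation of elimination
ke, c0 = 0.05, 50.0            # drug elimination rate (1/day), initial concentration

def conc(t):
    # Mono-exponential decline stands in for the population PK model
    return c0 * np.exp(-ke * t)

def dldl_dt(t, y):
    c = conc(t)
    stim = 1.0 + emax * c**gamma / (ec50**gamma + c**gamma)
    return kin - kout * stim * y[0]  # indirect response: stimulated elimination

ldl0 = kin / kout              # baseline LDL-C = 100 mg/dL
sol = solve_ivp(dldl_dt, (0.0, 120.0), [ldl0], dense_output=True, max_step=1.0)
ldl = sol.sol(np.linspace(0.0, 120.0, 241))[0]
max_drop = 1.0 - ldl.min() / ldl0   # maximum fractional decrease from baseline
```

The derived quantity `max_drop` mirrors the abstract's derived pharmacodynamic parameter (maximum decrease in LDL-C from baseline); LDL-C returns toward baseline as the drug washes out.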
Gamal El-Dien, Omnia; Ratcliffe, Blaise; Klápště, Jaroslav; Porth, Ilga; Chen, Charles; El-Kassaby, Yousry A.
2016-01-01
The open-pollinated (OP) family testing combines the simplest known progeny evaluation and quantitative genetics analyses as candidates’ offspring are assumed to represent independent half-sib families. The accuracy of genetic parameter estimates is often questioned as the assumption of “half-sibling” in OP families may often be violated. We compared the pedigree- vs. marker-based genetic models by analysing 22-yr height and 30-yr wood density for 214 white spruce [Picea glauca (Moench) Voss] OP families represented by 1694 individuals growing on one site in Quebec, Canada. Assuming half-sibling, the pedigree-based model was limited to estimating the additive genetic variances which, in turn, were grossly overestimated as they were confounded by very minor dominance and major additive-by-additive epistatic genetic variances. In contrast, the implemented genomic pairwise realized relationship models allowed the disentanglement of additive from all nonadditive factors through genetic variance decomposition. The marker-based models produced more realistic narrow-sense heritability estimates and, for the first time, allowed estimating the dominance and epistatic genetic variances from OP testing. In addition, the genomic models showed better prediction accuracies compared to pedigree models and were able to predict individual breeding values for new individuals from untested families, which was not possible using the pedigree-based model. Clearly, the use of marker-based relationship approach is effective in estimating the quantitative genetic parameters of complex traits even under simple and shallow pedigree structure. PMID:26801647
NASA Technical Reports Server (NTRS)
Greenwood, Eric, II; Schmitz, Fredric H.
2010-01-01
A new physics-based parameter identification method for rotor harmonic noise sources is developed using an acoustic inverse simulation technique. The method identifies individual rotor harmonic noise sources and characterizes them in terms of their individual non-dimensional governing parameters. It is applied to both wind tunnel measurements and ground noise measurements of two-bladed rotors, and is shown to match the parametric trends of main rotor Blade-Vortex Interaction (BVI) noise, allowing accurate estimates of BVI noise to be made for operating conditions based on a small number of measurements taken at different operating conditions.
The iCARE R Package allows researchers to quickly build models for absolute risk, and apply them to estimate an individual's risk of developing disease during a specified time interval, based on a set of user-defined input parameters.
Nichols, J.D.
2004-01-01
The EURING meetings and the scientists who have attended them have contributed substantially to the growth of knowledge in the field of estimating parameters of animal populations. The contributions of David R. Anderson to process modeling, parameter estimation and decision analysis are briefly reviewed. Metrics are considered for assessing individual contributions to a field of inquiry, and it is concluded that Anderson’s contributions have been substantial. Important characteristics of Anderson and his career are the ability to identify and focus on important topics, the premium placed on dissemination of new methods to prospective users, the ability to assemble teams of complementary researchers, and the innovation and vision that characterized so much of his work. The paper concludes with a list of interesting current research topics for consideration by EURING participants.
NASA Astrophysics Data System (ADS)
Hadas, E.; Jozkow, G.; Walicka, A.; Borkowski, A.
2018-05-01
The estimation of dendrometric parameters has become an important issue for agricultural planning and for the efficient management of orchards. Airborne Laser Scanning (ALS) data are widely used in forestry, and many algorithms for automatic estimation of dendrometric parameters of individual forest trees have been developed. Unfortunately, due to significant differences between forest and fruit trees, the achievements of forestry science cannot be adopted indiscriminately in agricultural studies. In this study we present a methodology to identify individual trees in an apple orchard and estimate the heights of individual trees, using high-density LiDAR data (3200 points/m2) obtained with an Unmanned Aerial Vehicle (UAV) equipped with a Velodyne HDL32-E sensor. The processing strategy combines the alpha-shape algorithm, principal component analysis (PCA), and detection of local minima. The alpha-shape algorithm is used to separate tree rows. To separate trees within a single row, we detect local minima on the canopy profile and slice polygons from the alpha-shape results. We successfully separated 92% of the trees in the test area; 6% of the trees in the orchard were not separated from each other, and 2% were sliced into two polygons. The RMSE of tree heights determined from the point clouds compared to field measurements was 0.09 m, and the correlation coefficient was 0.96. The results confirm the usefulness of LiDAR data from a UAV platform in orchard inventory.
Spatial capture-recapture models allowing Markovian transience or dispersal
Royle, J. Andrew; Fuller, Angela K.; Sutherland, Chris
2016-01-01
Spatial capture–recapture (SCR) models are a relatively recent development in quantitative ecology, and they are becoming widely used to model density in studies of animal populations using camera traps, DNA sampling and other methods which produce spatially explicit individual encounter information. One of the core assumptions of SCR models is that individuals possess home ranges that are spatially stationary during the sampling period. For many species, this assumption is unlikely to be met and, even for species that are typically territorial, individuals may disperse or exhibit transience at some life stages. In this paper we first conduct a simulation study to evaluate the robustness of estimators of density under ordinary SCR models when dispersal or transience is present in the population. Then, using both simulated and real data, we demonstrate that such models can easily be described in the BUGS language providing a practical framework for their analysis, which allows us to evaluate movement dynamics of species using capture–recapture data. We find that while estimators of density are extremely robust, even to pathological levels of movement (e.g., complete transience), the estimator of the spatial scale parameter of the encounter probability model is confounded with the dispersal/transience scale parameter. Thus, use of ordinary SCR models to make inferences about density is feasible, but interpretation of SCR model parameters in relation to movement should be avoided. Instead, when movement dynamics are of interest, such dynamics should be parameterized explicitly in the model.
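The core SCR encounter model can be sketched in a few lines. The sketch below uses the common half-normal detection function and an invented trap grid; it simply checks that a single individual's likelihood surface prefers activity centres near the truth, under the stationary-home-range assumption the paper relaxes:

```python
import numpy as np

# Half-normal encounter model used by basic SCR: the probability that an
# individual with activity centre s is detected at a trap on one occasion
def p_detect(s, traps, p0=0.3, sigma=0.5):
    d2 = np.sum((traps - s) ** 2, axis=1)
    return p0 * np.exp(-d2 / (2.0 * sigma ** 2))

# 5x5 trap grid on the unit square
g = np.linspace(0.0, 1.0, 5)
traps = np.array([(x, y) for x in g for y in g])

# Bernoulli log-likelihood of one individual's trap-level capture history y
# (1 = detected at least once over K occasions), given a hypothesised centre
def log_lik(s, y, K=4):
    p = p_detect(s, traps)
    return np.sum(y * np.log(1.0 - (1.0 - p) ** K)
                  + (1.0 - y) * K * np.log(1.0 - p))

rng = np.random.default_rng(5)
s_true = np.array([0.3, 0.7])
y = (rng.random((4, 25)) < p_detect(s_true, traps)).any(axis=0).astype(float)

# The likelihood surface should prefer centres near the true one
better = log_lik(s_true, y)
worse = log_lik(np.array([0.9, 0.1]), y)
```

In the paper's setting, the spatial scale parameter sigma of this detection function is what becomes confounded with the dispersal/transience scale when home ranges move.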
The use of subjective rating of exertion in Ergonomics.
Capodaglio, P
2002-01-01
In Ergonomics, the use of psychophysical methods for subjectively evaluating work tasks and determining acceptable loads has become more common. Daily activities at the work site are studied not only with physiological methods but also with perceptual estimation and production methods. The psychophysical methods are of special interest in field studies of short-term work tasks for which valid physiological measurements are difficult to obtain. The perceived exertion, difficulty, and fatigue that a person experiences in a certain work situation is an important sign of a real or objective load. Measurement of the physical load with physiological parameters alone is not sufficient, since it does not take into consideration the particular difficulty of the performance or the capacity of the individual. It is often difficult to appreciate, from technical and biomechanical analyses, the seriousness of a difficulty that a person experiences. Physiological determinations give important information, but they may be insufficient because of the technical problems in obtaining relevant yet simple measurements for short-term activities or activities involving special movement patterns. Perceptual estimations using Borg's scales give important information because the severity of a task's difficulty depends on the individual doing the work. Observation is the simplest and most widely used means of assessing job demands. Other evaluations that complement observation include: indirect estimation of energy expenditure based on prediction equations, or direct measurement of oxygen consumption; measurements of forces, angles, and biomechanical parameters; and measurements of physiological and neurophysiological parameters during tasks. It is recommended that assessments of occupational performance include ratings of perceived exertion and integrate these intensity measurements with those of the activity's type, duration, and frequency. A better estimate of an individual's degree of physical activity can thus be obtained.
NASA Astrophysics Data System (ADS)
Suzuki, Yôiti; Watanabe, Kanji; Iwaya, Yukio; Gyoba, Jiro; Takane, Shouichi
2005-04-01
Because the head-related transfer functions (HRTFs) governing subjective sound localization show strong individuality, sound localization systems based on synthesis of HRTFs require suitable HRTFs for individual listeners. However, it is impractical to measure HRTFs for every listener. Improving sound localization by adjusting non-individualized HRTFs to a specific listener based on that listener's anthropometry may therefore be a practical method. This study first developed a new method to estimate interaural time differences (ITDs) using HRTFs. Correlations between ITDs and anthropometric parameters were then analyzed using the canonical correlation method. Results indicated that parameters relating to head size and to shoulder and ear positions are significant. Consequently, an attempt was made to express ITDs in terms of a listener's anthropometric data. In this process, the change of ITD as a function of azimuth angle was parameterized as a sum of sine functions, and these parameters were analyzed using multiple regression analysis with the anthropometric parameters as explanatory variables. The predicted (individualized) ITDs were installed in the non-individualized HRTFs to evaluate sound localization performance. Results showed that individualization of ITDs improved horizontal sound localization.
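The sum-of-sines parameterization of ITD versus azimuth can be sketched with synthetic data (the coefficients and noise level are invented; the subsequent regression onto anthropometric parameters is omitted):

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic ITDs (microseconds) over azimuth, from a spherical-head-like
# shape, standing in for ITDs estimated from an individual's HRTFs
az = np.radians(np.arange(-80, 81, 5))
itd = 650.0 * np.sin(az) + 40.0 * np.sin(2 * az) + rng.normal(0, 5, az.size)

# Parameterize ITD(azimuth) as a sum of sine functions and fit the
# coefficients by ordinary least squares
X = np.column_stack([np.sin(k * az) for k in (1, 2, 3)])
coef, *_ = np.linalg.lstsq(X, itd, rcond=None)

itd_fit = X @ coef
rms_err = np.sqrt(np.mean((itd - itd_fit) ** 2))
```

The fitted coefficients (one small vector per listener) are the quantities one would then regress on anthropometric measurements such as head size.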
Assessing tiger population dynamics using photographic capture-recapture sampling
Karanth, K.U.; Nichols, J.D.; Kumar, N.S.; Hines, J.E.
2006-01-01
Although wide-ranging, elusive, large carnivore species, such as the tiger, are of scientific and conservation interest, rigorous inferences about their population dynamics are scarce because of methodological problems of sampling populations at the required spatial and temporal scales. We report the application of a rigorous, noninvasive method for assessing tiger population dynamics to test model-based predictions about population viability. We obtained photographic capture histories for 74 individual tigers during a nine-year study involving 5725 trap-nights of effort. These data were modeled under a likelihood-based, "robust design" capture-recapture analytic framework. We explicitly modeled and estimated ecological parameters such as time-specific abundance, density, survival, recruitment, temporary emigration, and transience, using models that incorporated effects of factors such as individual heterogeneity, trap-response, and time on probabilities of photo-capturing tigers. The model estimated a random temporary emigration parameter of γ″ = γ′ = 0.10 ± 0.069 (values are estimated mean ± SE). When scaled to an annual basis, tiger survival rates were estimated at S = 0.77 ± 0.051, and the estimated probability that a newly caught animal was a transient was τ = 0.18 ± 0.11. During the period when the sampled area was of constant size, the estimated population size Nt varied from 17 ± 1.7 to 31 ± 2.1 tigers, with a geometric mean rate of annual population change estimated as λ = 1.03 ± 0.020, representing a 3% annual increase. The estimated recruitment of new animals, Bt, varied from 0 ± 3.0 to 14 ± 2.9 tigers. Population density estimates, D, ranged from 7.33 ± 0.8 tigers/100 km² to 21.73 ± 1.7 tigers/100 km² during the study. Thus, despite substantial annual losses and temporal variation in recruitment, the tiger density remained at relatively high levels in Nagarahole.
Our results are consistent with the hypothesis that protected wild tiger populations can remain healthy despite heavy mortalities because of their inherently high reproductive potential. The ability to model the entire photographic capture history data set and incorporate reduced-parameter models led to estimates of mean annual population change that were sufficiently precise to be useful. This efficient, noninvasive sampling approach can be used to rigorously investigate the population dynamics of tigers and other elusive, rare, wide-ranging animal species in which individuals can be identified from photographs or other means.
Assessing tiger population dynamics using photographic capture-recapture sampling.
Karanth, K Ullas; Nichols, James D; Kumar, N Samba; Hines, James E
2006-11-01
Although wide-ranging, elusive, large carnivore species, such as the tiger, are of scientific and conservation interest, rigorous inferences about their population dynamics are scarce because of methodological problems of sampling populations at the required spatial and temporal scales. We report the application of a rigorous, noninvasive method for assessing tiger population dynamics to test model-based predictions about population viability. We obtained photographic capture histories for 74 individual tigers during a nine-year study involving 5725 trap-nights of effort. These data were modeled under a likelihood-based, "robust design" capture-recapture analytic framework. We explicitly modeled and estimated ecological parameters such as time-specific abundance, density, survival, recruitment, temporary emigration, and transience, using models that incorporated effects of factors such as individual heterogeneity, trap-response, and time on probabilities of photo-capturing tigers. The model estimated a random temporary emigration parameter of gamma" = gamma' = 0.10 +/- 0.069 (values are estimated mean +/- SE). When scaled to an annual basis, tiger survival rates were estimated at S = 0.77 +/- 0.051, and the estimated probability that a newly caught animal was a transient was tau = 0.18 +/- 0.11. During the period when the sampled area was of constant size, the estimated population size N(t) varied from 17 +/- 1.7 to 31 +/- 2.1 tigers, with a geometric mean rate of annual population change estimated as lambda = 1.03 +/- 0.020, representing a 3% annual increase. The estimated recruitment of new animals, B(t), varied from 0 +/- 3.0 to 14 +/- 2.9 tigers. Population density estimates, D, ranged from 7.33 +/- 0.8 tigers/100 km2 to 21.73 +/- 1.7 tigers/100 km2 during the study. Thus, despite substantial annual losses and temporal variation in recruitment, the tiger density remained at relatively high levels in Nagarahole. 
Our results are consistent with the hypothesis that protected wild tiger populations can remain healthy despite heavy mortalities because of their inherently high reproductive potential. The ability to model the entire photographic capture history data set and incorporate reduced-parameter models led to estimates of mean annual population change that were sufficiently precise to be useful. This efficient, noninvasive sampling approach can be used to rigorously investigate the population dynamics of tigers and other elusive, rare, wide-ranging animal species in which individuals can be identified from photographs or other means.
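The geometric mean rate of annual population change quoted above can be reproduced directly from a series of abundance estimates. A minimal sketch; the yearly values used in the test are hypothetical, since the abstract reports only the range of N(t):

```python
def geometric_mean_lambda(abundances):
    """Geometric mean annual rate of population change from yearly
    abundance estimates N(1), ..., N(T). The per-interval ratios
    N(t+1)/N(t) telescope, so the geometric mean reduces to
    (N(T)/N(1)) ** (1/(T-1))."""
    n_first, n_last = abundances[0], abundances[-1]
    intervals = len(abundances) - 1
    return (n_last / n_first) ** (1.0 / intervals)
```

For example, a population going from 20 to 24 animals over three annual intervals gives lambda of about 1.063, i.e. roughly a 6% mean annual increase.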
Addressing data privacy in matched studies via virtual pooling.
Saha-Chaudhuri, P; Weinberg, C R
2017-09-07
Data confidentiality and shared use of research data are two desirable but sometimes conflicting goals in research with multi-center studies and distributed data. While a single dataset including covariate information of all participants would be ideal for straightforward analysis, confidentiality restrictions forbid its creation. Current approaches such as aggregate data sharing, distributed regression, meta-analysis and score-based methods can have important limitations. We propose a novel application of an existing epidemiologic tool, specimen pooling, to enable confidentiality-preserving analysis of data arising from a matched case-control, multi-center design. Instead of pooling specimens prior to assay, we apply the methodology to virtually pool (aggregate) covariates within nodes. Such virtual pooling retains most of the information used in an analysis with individual data and, since individual participant data are not shared externally, within-node virtual pooling preserves data confidentiality. We show that aggregated covariate levels can be used in a conditional logistic regression model to estimate individual-level odds ratios of interest. The parameter estimates from the standard conditional logistic regression are compared to the estimates based on a conditional logistic regression model with aggregated data. The parameter estimates are shown to be similar to those without pooling and to have comparable standard errors and confidence interval coverage. Virtual data pooling can be used to maintain confidentiality of data from a multi-center study and can be particularly useful in research with large-scale distributed data.
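The conditional logistic model referred to above depends on covariates only through within-set contrasts, which is what makes aggregated (virtually pooled) covariates usable. A minimal sketch for the 1:1 matched-pair case with a single covariate; the grid-search optimizer and the example data are illustrative assumptions, not the authors' implementation:

```python
import math

def pair_loglik(beta, x_case, x_control):
    # Conditional likelihood contribution of one 1:1 matched pair:
    # P(observed case is the case | pair) =
    #   exp(beta*x_case) / (exp(beta*x_case) + exp(beta*x_control)),
    # which depends only on the within-pair covariate difference d.
    d = beta * (x_case - x_control)
    return d - math.log1p(math.exp(d))

def mle_beta(pairs):
    # Crude grid search for the maximising log odds ratio (sketch only).
    grid = [i / 100.0 for i in range(-300, 301)]
    return max(grid, key=lambda b: sum(pair_loglik(b, xc, xk) for xc, xk in pairs))
```

With three pairs in which the case's covariate exceeds the control's by one unit and one pair the reverse, the conditional MLE is log 3, i.e. an estimated odds ratio of 3.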
Robust estimation procedure in panel data model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shariff, Nurul Sima Mohamad; Hamzah, Nor Aishah
2014-06-19
Panel data modeling has received great attention in econometric research recently, owing to the availability of data sources and the interest in studying cross sections of individuals observed over time. However, problems may arise in modeling the panel in the presence of cross-sectional dependence and outliers. Even though a few methods take the presence of cross-sectional dependence in the panel into consideration, these methods may provide inconsistent parameter estimates and inferences when outliers occur in the panel. As such, an alternative method that is robust to outliers and cross-sectional dependence is introduced in this paper. The properties and construction of the confidence interval for the parameter estimates are also considered. The robustness of the procedure is investigated and comparisons are made to the existing method via simulation studies. Our results show that the robust approach is able to produce accurate and reliable parameter estimates under the conditions considered.
Uncertainty relation based on unbiased parameter estimations
NASA Astrophysics Data System (ADS)
Sun, Liang-Liang; Song, Yong-Shun; Qiao, Cong-Feng; Yu, Sixia; Chen, Zeng-Bing
2017-02-01
Heisenberg's uncertainty relation has been extensively studied in the spirit of its well-known original form, in which the inaccuracy measures used exhibit some controversial properties and do not conform with quantum metrology, where measurement precision is well defined in terms of estimation theory. In this paper, we treat the joint measurement of incompatible observables as a parameter estimation problem, i.e., estimating the parameters characterizing the statistics of the incompatible observables. Our crucial observation is that, in a sequential measurement scenario, the bias induced by the first unbiased measurement in the subsequent measurement can be eradicated by the information acquired, allowing one to extract unbiased information for the second measurement of an incompatible observable. In terms of Fisher information, we propose a kind of information comparison measure and explore various trade-offs between information gains and measurement precisions, which interpret the uncertainty relation as a surplus-variance trade-off over individual perfect measurements rather than as a constraint on extracting complete information about incompatible observables.
Sample size calculation for stepped wedge and other longitudinal cluster randomised trials.
Hooper, Richard; Teerenstra, Steven; de Hoop, Esther; Eldridge, Sandra
2016-11-20
The sample size required for a cluster randomised trial is inflated compared with an individually randomised trial because outcomes of participants from the same cluster are correlated. Sample size calculations for longitudinal cluster randomised trials (including stepped wedge trials) need to take account of at least two levels of clustering: the clusters themselves and times within clusters. We derive formulae for sample size for repeated cross-section and closed cohort cluster randomised trials with normally distributed outcome measures, under a multilevel model allowing for variation between clusters and between times within clusters. Our formulae agree with those previously described for special cases such as crossover and analysis of covariance designs, although simulation suggests that the formulae could underestimate required sample size when the number of clusters is small. Whether using a formula or simulation, a sample size calculation requires estimates of nuisance parameters, which in our model include the intracluster correlation, cluster autocorrelation, and individual autocorrelation. A cluster autocorrelation less than 1 reflects a situation where individuals sampled from the same cluster at different times have less correlated outcomes than individuals sampled from the same cluster at the same time. Nuisance parameters could be estimated from time series obtained in similarly clustered settings with the same outcome measure, using analysis of variance to estimate variance components. Copyright © 2016 John Wiley & Sons, Ltd.
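As a baseline for the inflation described above, the familiar single-level design effect can be computed directly. A minimal sketch of the classical parallel-design case only, not the paper's multilevel formulae (which additionally involve the cluster and individual autocorrelations):

```python
import math

def design_effect(cluster_size, icc):
    # Classical inflation factor for a parallel cluster randomised trial:
    # DE = 1 + (m - 1) * ICC, where m is the cluster size and ICC is the
    # intracluster correlation.
    return 1 + (cluster_size - 1) * icc

def clusters_needed(n_individually_randomised, cluster_size, icc):
    # Inflate the individually randomised sample size by the design
    # effect, then round up to whole clusters.
    total = n_individually_randomised * design_effect(cluster_size, icc)
    return math.ceil(total / cluster_size)
```

For example, with clusters of 20 and an ICC of 0.05, a trial that would need 200 participants under individual randomisation needs a design effect of 1.95, i.e. 20 clusters of 20 per comparison group.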
Maxine: A spreadsheet for estimating dose from chronic atmospheric radioactive releases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jannik, Tim; Bell, Evaleigh; Dixon, Kenneth
MAXINE is an Excel® spreadsheet used to estimate dose to individuals for routine and accidental atmospheric releases of radioactive materials. MAXINE does not contain an atmospheric dispersion model; rather, doses are estimated using air and ground concentrations as input. Minimal input is required to run the program, and site-specific parameters are used when possible. A complete code description, model verification, and a user's manual are included.
Estimating Sleep from Multisensory Armband Measurements: Validity and Reliability in Teens
Roane, Brandy M.; Van Reen, Eliza; Hart, Chantelle N.; Wing, Rena; Carskadon, Mary A.
2015-01-01
Given the recognition that sleep may influence obesity risk, there is increasing interest in measuring sleep parameters within obesity studies. The goal of the current analyses was to determine whether the SenseWear® Pro3 Armband (armband), typically used to assess physical activity, is reliable at assessing sleep parameters. We compared the armband to the AMI Motionlogger® (actigraph), a validated activity monitor for sleep assessment, and to polysomnography (PSG), the gold standard for assessing sleep. Participants were twenty adolescents (mean age = 15.5 years) with a mean BMI percentile of 63.7. All participants wore the armband and actigraph on their non-dominant arm while in-lab during a nocturnal PSG recording (600 minutes). Epoch-by-epoch sleep/wake data and concordance of sleep parameters were examined. No significant sleep parameter differences were found between the armband and PSG; the actigraph tended to overestimate sleep and underestimate wake compared to PSG. Both devices showed high sleep sensitivity, but lower wake detection rates. Bland-Altman plots showed large individual differences in armband sleep parameter concordance rates. The armband did well estimating sleep overall, with group results more similar to PSG than the actigraph; however, the armband was less accurate at an individual level than the actigraph. PMID:26126746
Inverse sequential procedures for the monitoring of time series
NASA Technical Reports Server (NTRS)
Radok, Uwe; Brown, Timothy
1993-01-01
Climate changes traditionally have been detected from long series of observations and long after they happened. The 'inverse sequential' monitoring procedure is designed to detect changes as soon as they occur. Frequency distribution parameters are estimated both from the most recent existing set of observations and from the same set augmented by 1, 2, ..., j new observations. Individual-value probability products ('likelihoods') are then calculated, which yield probabilities for erroneously accepting the existing parameter(s) as valid for the augmented data set and vice versa. A parameter change is signaled when these probabilities (or a more convenient and robust compound 'no change' probability) show a progressive decrease. New parameters are then estimated from the new observations alone to restart the procedure. The detailed algebra is developed and tested for Gaussian means and variances, Poisson and chi-square means, and linear or exponential trends; a comprehensive and interactive Fortran program is provided in the appendix.
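The 'no change' probability idea can be illustrated for a Gaussian mean. A rough sketch under the simplifying assumption of a common pooled sigma; the procedure's actual algebra (and its compound probability) is developed in the report's appendix:

```python
import math

def gaussian_loglik(xs, mu, sigma):
    # Log-likelihood of observations under a Normal(mu, sigma) model.
    return sum(-0.5 * math.log(2 * math.pi * sigma ** 2)
               - (x - mu) ** 2 / (2 * sigma ** 2) for x in xs)

def no_change_probability(old_xs, new_xs):
    """Likelihood ratio of a one-mean model over a two-mean model for
    the combined data (common pooled sigma), mapped into (0, 1].
    Values near 1 support 'no change'; a progressive decrease as new
    observations accumulate signals a parameter change."""
    all_xs = old_xs + new_xs
    mu_all = sum(all_xs) / len(all_xs)
    mu_old = sum(old_xs) / len(old_xs)
    mu_new = sum(new_xs) / len(new_xs)
    sigma = max((sum((x - mu_all) ** 2 for x in all_xs) / len(all_xs)) ** 0.5,
                1e-9)
    l_same = gaussian_loglik(all_xs, mu_all, sigma)
    l_split = (gaussian_loglik(old_xs, mu_old, sigma)
               + gaussian_loglik(new_xs, mu_new, sigma))
    return math.exp(l_same - l_split)
```

A sequence drawn from the old distribution keeps this probability high, while a shifted mean drives it down, triggering re-estimation from the new observations alone.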
Li, Xiang; Kuk, Anthony Y C; Xu, Jinfeng
2014-12-10
Human biomonitoring of exposure to environmental chemicals is important. Individual monitoring is not viable because of low individual exposure level or insufficient volume of materials and the prohibitive cost of taking measurements from many subjects. Pooling of samples is an efficient and cost-effective way to collect data. Estimation is, however, complicated as individual values within each pool are not observed but are only known up to their average or weighted average. The distribution of such averages is intractable when the individual measurements are lognormally distributed, which is a common assumption. We propose to replace the intractable distribution of the pool averages by a Gaussian likelihood to obtain parameter estimates. If the pool size is large, this method produces statistically efficient estimates, but regardless of pool size, the method yields consistent estimates as the number of pools increases. An empirical Bayes (EB) Gaussian likelihood approach, as well as its Bayesian analog, is developed to pool information from various demographic groups by using a mixed-effect formulation. We also discuss methods to estimate the underlying mean-variance relationship and to select a good model for the means, which can be incorporated into the proposed EB or Bayes framework. By borrowing strength across groups, the EB estimator is more efficient than the individual group-specific estimator. Simulation results show that the EB Gaussian likelihood estimates outperform a previous method proposed for the National Health and Nutrition Examination Surveys with much smaller bias and better coverage in interval estimation, especially after correction of bias. Copyright © 2014 John Wiley & Sons, Ltd.
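The Gaussian-likelihood substitution described above can be sketched by moment matching: a pool average of lognormal measurements is treated as normal with the implied mean and variance. The grid-search fit below is a toy stand-in for the authors' estimation machinery, and the pool data in the test are fabricated for illustration:

```python
import math

def pool_nll(mu, sigma, pool_means, pool_size):
    """Negative Gaussian log-likelihood for pool averages of lognormal
    individual measurements (moment-matched normal approximation)."""
    m = math.exp(mu + sigma ** 2 / 2)                         # individual mean
    v = (math.exp(sigma ** 2) - 1) * math.exp(2 * mu + sigma ** 2)  # individual variance
    var_pool = v / pool_size                                  # variance of a pool mean
    return sum(0.5 * math.log(2 * math.pi * var_pool)
               + (y - m) ** 2 / (2 * var_pool) for y in pool_means)

def fit(pool_means, pool_size):
    # Crude grid search over (mu, sigma); adequate for a sketch.
    grid_mu = [i / 50.0 for i in range(-100, 101)]
    grid_sigma = [i / 50.0 for i in range(1, 101)]
    return min(((m, s) for m in grid_mu for s in grid_sigma),
               key=lambda p: pool_nll(p[0], p[1], pool_means, pool_size))
```

Because the approximation matches the first two moments, the fitted parameters imply an individual-level mean close to the average of the observed pool means.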
Bassen, David M; Vilkhovoy, Michael; Minot, Mason; Butcher, Jonathan T; Varner, Jeffrey D
2017-01-25
Ensemble modeling is a promising approach for obtaining robust predictions and coarse-grained population behavior in deterministic mathematical models. Ensemble approaches address model uncertainty by using parameter or model families instead of single best-fit parameters or fixed model structures. Parameter ensembles can be selected based upon simulation error, along with other criteria such as diversity or steady-state performance. Simulations using parameter ensembles can estimate confidence intervals on model variables, and robustly constrain model predictions, despite having many poorly constrained parameters. In this software note, we present a multiobjective-based technique to estimate parameter or model ensembles, the Pareto Optimal Ensemble Technique in the Julia programming language (JuPOETs). JuPOETs integrates simulated annealing with Pareto optimality to estimate ensembles on or near the optimal tradeoff surface between competing training objectives. We demonstrate JuPOETs on a suite of multiobjective problems, including test functions with parameter bounds and system constraints as well as the identification of a proof-of-concept biochemical model with four conflicting training objectives. JuPOETs identified optimal or near-optimal solutions approximately six-fold faster than a corresponding implementation in Octave for the suite of test functions. For the proof-of-concept biochemical model, JuPOETs produced an ensemble of parameters that captured the mean of the training data for conflicting data sets while simultaneously estimating parameter sets that performed well on each of the individual objective functions. JuPOETs is a promising approach for the estimation of parameter and model ensembles using multiobjective optimization. JuPOETs can be adapted to solve many problem types, including mixed binary and continuous variable types, bilevel optimization problems and constrained problems without altering the base algorithm.
JuPOETs is open source, available under an MIT license, and can be installed using the Julia package manager from the JuPOETs GitHub repository.
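The Pareto-optimality test at the core of such ensemble selection is compact. A minimal sketch in the minimisation convention; this is not JuPOETs' actual Julia implementation, which couples the dominance check with simulated annealing:

```python
def dominates(a, b):
    # a dominates b if a is no worse on every objective and strictly
    # better on at least one (all objectives minimised).
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    # Keep the non-dominated members of a candidate ensemble; each point
    # is a tuple of objective values (e.g. training errors).
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

Members on the returned front represent the optimal trade-offs between competing training objectives; near-front members can be retained by relaxing the dominance test with a tolerance.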
NASA Astrophysics Data System (ADS)
Green, C. T.; Liao, L.; Nolan, B. T.; Juckem, P. F.; Ransom, K.; Harter, T.
2017-12-01
Process-based modeling of regional NO3- fluxes to groundwater is critical for understanding and managing water quality. Measurements of atmospheric tracers of groundwater age and dissolved-gas indicators of denitrification progress have potential to improve estimates of NO3- reactive transport processes. This presentation introduces a regionalized version of a vertical flux method (VFM) that uses simple mathematical estimates of advective-dispersive reactive transport with regularization procedures to calibrate estimated tracer concentrations to observed equivalents. The calibrated VFM provides estimates of chemical, hydrologic and reaction parameters (source concentration time series, recharge, effective porosity, dispersivity, reaction rate coefficients) and derived values (e.g. mean unsaturated zone travel time, eventual depth of the NO3- front) for individual wells. Statistical learning methods are used to extrapolate parameters and predictions from wells to continuous areas. The regional VFM was applied to 473 well samples in central-eastern Wisconsin. Chemical measurements included O2, NO3-, N2 from denitrification, and atmospheric tracers of groundwater age including carbon-14, chlorofluorocarbons, tritium, and tritiogenic helium. VFM results were consistent with observed chemistry, and calibrated parameters were in line with independent estimates. Results indicated that (1) unsaturated zone travel times were a substantial portion of the transit time to wells and streams, (2) fractions of N leached to groundwater have changed over time, with increasing fractions from manure and decreasing fractions from fertilizer, and (3) under current practices and conditions, 60% of the shallow aquifer will eventually be affected by NO3- contamination.
Based on GIS coverages of variables related to soils, land use and hydrology, the VFM results at individual wells were extrapolated regionally using boosted regression trees, a statistical learning approach, that related the GIS variables to the VFM parameters and predictions. Future work will explore applications at larger scales with direct integration of the statistical prediction model with the mechanistic VFM.
A Scalar Product Model for the Multidimensional Scaling of Choice
ERIC Educational Resources Information Center
Bechtel, Gordon G.; And Others
1971-01-01
Contains a solution for the multidimensional scaling of pairwise choice when individuals are represented as dimensional weights. The analysis supplies an exact least squares solution and estimates of group unscalability parameters. (DG)
Yadollahi, Azadeh; Montazeri, Aman; Azarbarzin, Ali; Moussavi, Zahra
2013-03-01
Tracheal respiratory sound analysis is a simple and non-invasive way to study the pathophysiology of the upper airway and has recently been used for acoustic estimation of respiratory flow and sleep apnea diagnosis. However in none of the previous studies was the respiratory flow-sound relationship studied in people with obstructive sleep apnea (OSA), nor during sleep. In this study, we recorded tracheal sound, respiratory flow, and head position from eight non-OSA and 10 OSA individuals during sleep and wakefulness. We compared the flow-sound relationship and variations in model parameters from wakefulness to sleep within and between the two groups. The results show that during both wakefulness and sleep, flow-sound relationship follows a power law but with different parameters. Furthermore, the variations in model parameters may be representative of the OSA pathology. The other objective of this study was to examine the accuracy of respiratory flow estimation algorithms during sleep: we investigated two approaches for calibrating the model parameters using the known data recorded during either wakefulness or sleep. The results show that the acoustical respiratory flow estimation parameters change from wakefulness to sleep. Therefore, if the model is calibrated using wakefulness data, although the estimated respiratory flow follows the relative variations of the real flow, the quantitative flow estimation error would be high during sleep. On the other hand, when the calibration parameters are extracted from tracheal sound and respiratory flow recordings during sleep, the respiratory flow estimation error is less than 10%.
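A power-law flow-sound relationship of the kind reported above is typically calibrated in log-log space, where it becomes a straight line. A minimal sketch with synthetic, noise-free data; the specific form sound = c * flow**alpha and the variable names are illustrative assumptions, not the study's calibration procedure:

```python
import math

def fit_power_law(flows, sounds):
    """Least-squares fit of sound = c * flow**alpha via linear
    regression of log(sound) on log(flow). Returns (c, alpha)."""
    xs = [math.log(f) for f in flows]
    ys = [math.log(s) for s in sounds]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    alpha = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    c = math.exp(my - alpha * mx)
    return c, alpha

def estimate_flow(sound, c, alpha):
    # Invert the calibrated model to estimate respiratory flow from sound.
    return (sound / c) ** (1 / alpha)
```

As the abstract notes, calibration parameters obtained during wakefulness differ from those during sleep, so the (c, alpha) pair used for inversion should come from recordings in the same state.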
NASA Astrophysics Data System (ADS)
Gibbons, Steven J.; Näsholm, S. P.; Ruigrok, E.; Kværna, T.
2018-04-01
Seismic arrays enhance signal detection and parameter estimation by exploiting the time-delays between arriving signals on sensors at nearby locations. Parameter estimates can suffer due to both signal incoherence, with diminished waveform similarity between sensors, and aberration, with time-delays between coherent waveforms poorly represented by the wave-front model. Sensor-to-sensor correlation approaches to parameter estimation have an advantage over direct beamforming approaches in that individual sensor-pairs can be omitted without necessarily omitting entirely the data from each of the sensors involved. Specifically, we can omit correlations between sensors for which signal coherence in an optimal frequency band is anticipated to be poor or for which anomalous time-delays are anticipated. In practice, this usually means omitting correlations between more distant sensors. We present examples from International Monitoring System seismic arrays with poor parameter estimates resulting when classical f-k analysis is performed over the full array aperture. We demonstrate improved estimates and slowness grid displays using correlation beamforming restricted to correlations between sufficiently closely spaced sensors. This limited sensor-pair correlation (LSPC) approach has lower slowness resolution than would ideally be obtained by considering all sensor-pairs. However, this ideal estimate may be unattainable due to incoherence and/or aberration and the LSPC estimate can often exploit all channels, with the associated noise-suppression, while mitigating the complications arising from correlations between very distant sensors. The greatest need for the method is for short-period signals on large aperture arrays although we also demonstrate significant improvement for secondary regional phases on a small aperture array. LSPC can also provide a robust and flexible approach to parameter estimation on three-component seismic arrays.
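The pair-selection step of LSPC reduces to a distance threshold on sensor coordinates. A minimal sketch with hypothetical 2-D positions; the actual choice of threshold would depend on the expected coherence length in the optimal frequency band:

```python
import math

def lspc_pairs(positions, max_separation):
    """Sensor pairs retained under limited sensor-pair correlation
    (LSPC): only pairs closer than max_separation are correlated, on
    the assumption that nearby sensors record coherent waveforms."""
    kept = []
    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            (xi, yi), (xj, yj) = positions[i], positions[j]
            if math.hypot(xi - xj, yi - yj) <= max_separation:
                kept.append((i, j))
    return kept
```

Note how a sensor can still contribute through its near neighbours even when all of its correlations with distant sensors are omitted, which is the advantage over dropping whole channels from a beam.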
[Estimating survival of thrushes: modeling capture-recapture probabilities].
Burskiî, O V
2011-01-01
The stochastic modeling technique serves as a way to correctly separate the "return rate" of marked animals into survival rate (phi) and capture probability (p). The method can readily be used with the program MARK, freely distributed through the Internet (Cooch, White, 2009). Input data for the program consist of "capture histories" of marked animals--strings of ones and zeros indicating presence or absence of the individual among captures (or sightings) along the set of consecutive recapture occasions (e.g., years). The probability of any history is a product of binomial probabilities phi, p or their complements (1 - phi) and (1 - p) for each year of observation of the individual. Assigning certain values to the parameters phi and p, one can predict the composition of all individual histories in the sample and assess the likelihood of the prediction. The survival parameters for different occasions and cohorts of individuals can be set either equal or different, and the recapture parameters can likewise be set in different ways. The parameters can be constrained, according to the hypothesis being tested, in the form of a specific model. Within the specified constraints, the program searches for parameter values that describe the observed composition of histories with the maximum likelihood. It computes the parameter estimates along with confidence limits and the overall model likelihood. There is a set of tools for testing the model goodness-of-fit under the assumption of equality of survival rates among individuals and independence of their fates. Other tools offer a proper selection among a possible variety of models, providing the best parity between detail and precision in describing reality. The method was applied to 20-yr recapture and resighting data series on 4 thrush species (genera Turdus, Zoothera) breeding in the Yenisei River floodplain within the middle taiga subzone.
The capture probabilities were largely independent of fluctuations in observational effort, while differing significantly between the species and sexes. The estimates of adult survival rate obtained for the Siberian migratory populations were lower than those for sedentary populations from both the tropics and intermediate latitudes with marine climate (data from Ricklefs, 1997). Two factors, the average temperature influencing birds during their annual movements and the climatic seasonality (temperature difference between summer and winter) in the breeding area, fit the latitudinal pattern of survival most closely (R2 = 0.90). The final survival of migrants reflects an adaptive life-history compromise between exploiting superabundant resources in the breeding area and avoiding severe winter conditions.
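The probability of a capture history described above can be written down directly. A minimal sketch of a CJS-style likelihood with constant phi and p, conditional on the first capture; program MARK generalises this with time- and cohort-specific parameters and model constraints:

```python
import math

def history_loglik(history, phi, p):
    """Log-probability of one capture history (list of 0/1 over years)
    given survival phi and capture probability p, conditional on the
    first capture. After the last capture, the 'never seen again'
    probability chi satisfies chi_t = (1-phi) + phi*(1-p)*chi_{t+1}."""
    first = history.index(1)
    last = len(history) - 1 - history[::-1].index(1)
    ll = 0.0
    for t in range(first + 1, last + 1):
        ll += math.log(phi)                          # survived the interval
        ll += math.log(p if history[t] else 1 - p)   # recaptured or missed
    chi = 1.0                                        # never seen after 'last'
    for _ in range(len(history) - 1 - last):
        chi = (1 - phi) + phi * (1 - p) * chi
    return ll + math.log(chi)
```

For example, the history [1, 0, 1] has probability phi*(1-p)*phi*p: the animal survived two intervals, was missed once, and was recaptured once. Summing such log-probabilities over all marked animals gives the sample likelihood that MARK maximises.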
Mateus, L A de F; Estupiñán, G M B
2002-02-01
Fork length measurements of individuals of Brycon microlepis landed and commercialized at the Porto Market in Cuiabá, MT, from May-October 1996 to May-October 1997 were used to estimate growth and mortality parameters for this species. The estimated population parameters were: L∞ = 705 mm, k = 0.275 year-1, C = 0.775, WP = 0.465, Lc = 164 mm, M = 0.585 year-1, Z = 0.822 year-1, with F = 0.237 year-1. Yield-per-recruit analysis suggests that the stock is not yet overexploited.
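The L∞, k, C and WP values above are the parameters of a seasonally oscillating von Bertalanffy growth function. A minimal sketch of Somers' formulation; t0 is set to zero here as an assumption, since the abstract does not report it:

```python
import math

def seasonal_vb_length(t, Linf, k, C, WP, t0=0.0):
    """Somers' seasonally oscillating von Bertalanffy growth function.
    t is age in years; C is the oscillation amplitude (C = 0 recovers
    the plain von Bertalanffy curve); WP is the winter point (time of
    slowest growth), with ts = WP - 0.5."""
    ts = WP - 0.5
    def S(x):
        return (C * k / (2 * math.pi)) * math.sin(2 * math.pi * (x - ts))
    return Linf * (1 - math.exp(-k * (t - t0) - S(t) + S(t0)))
```

With the reported parameters (Linf=705, k=0.275, C=0.775, WP=0.465), growth slows each year around the winter point but the curve still rises toward L∞.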
Multivariate meta-analysis with an increasing number of parameters
Boca, Simina M.; Pfeiffer, Ruth M.; Sampson, Joshua N.
2017-01-01
Meta-analysis can average estimates of multiple parameters, such as a treatment’s effect on multiple outcomes, across studies. Univariate meta-analysis (UVMA) considers each parameter individually, while multivariate meta-analysis (MVMA) considers the parameters jointly and accounts for the correlation between their estimates. The performance of MVMA and UVMA has been extensively compared in scenarios with two parameters. Our objective is to compare the performance of MVMA and UVMA as the number of parameters, p, increases. Specifically, we show that (i) for fixed-effect meta-analysis, the benefit from using MVMA can substantially increase as p increases; (ii) for random-effects meta-analysis, the benefit from MVMA can increase as p increases, but the potential improvement is modest in the presence of high between-study variability and the actual improvement is further reduced by the need to estimate an increasingly large between-study covariance matrix; and (iii) when there is little to no between-study variability, the loss of efficiency due to choosing random-effects MVMA over fixed-effect MVMA increases as p increases. We demonstrate these three features through theory, simulation, and a meta-analysis of risk factors for Non-Hodgkin Lymphoma. PMID:28195655
The Relationship Between School Holidays and Transmission of Influenza in England and Wales.
Jackson, Charlotte; Vynnycky, Emilia; Mangtani, Punam
2016-11-01
School closure is often considered as an influenza control measure, but its effects on transmission are poorly understood. We used 2 approaches to estimate how school holidays affect the contact parameter (the per capita rate of contact sufficient for infection transmission) for influenza using primary care data from England and Wales (1967-2000). Firstly, we fitted an age-structured susceptible-infectious-recovered model to each year's data to estimate the proportional change in the contact parameter during school holidays as compared with termtime. Secondly, we calculated the percentage difference in the contact parameter between holidays and termtime from weekly values of the contact parameter, estimated directly from simple mass-action models. Estimates were combined using random-effects meta-analysis, where appropriate. From fitting to the data, the difference in the contact parameter among children aged 5-14 years during holidays as compared with termtime ranged from a 36% reduction to a 17% increase; estimates were too heterogeneous for meta-analysis. Based on the simple mass-action model, the contact parameter was 17% (95% confidence interval: 10, 25) lower during holidays than during termtime. Results were robust to the assumed proportions of infections that were reported and individuals who were susceptible when the influenza season started. We conclude that school closure may reduce transmission during influenza outbreaks. © The Author 2016. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
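The second approach described above, weekly contact parameters from a simple mass-action model followed by a holiday-versus-termtime percentage difference, can be sketched directly. The toy numbers in the test are illustrative, not the paper's data:

```python
def weekly_contact_parameter(new_cases, susceptibles, infectious):
    # Simple mass-action estimate for each week: C_t = cases_t / (S_t * I_t),
    # i.e. new infections divided by the product of susceptible and
    # infectious counts.
    return [c / (s * i) for c, s, i in zip(new_cases, susceptibles, infectious)]

def percent_difference(holiday_values, termtime_values):
    # Percentage difference of the mean contact parameter,
    # holidays relative to termtime (negative = lower in holidays).
    mean_h = sum(holiday_values) / len(holiday_values)
    mean_t = sum(termtime_values) / len(termtime_values)
    return 100.0 * (mean_h - mean_t) / mean_t
```

A value around -17 would correspond to the paper's headline estimate that the contact parameter is 17% lower during holidays than during termtime.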
Socioeconomic implications of donation distributions
NASA Astrophysics Data System (ADS)
Wu, Yajing; Guo, Jinzhong; Chen, Qinghua; Wang, Yougui
2011-11-01
Individual donation depends on personal wealth and individual willingness to donate. On the basis of a donation model proposed in our previous study, a simplified version of an individual donation model is derived by relaxing the restrictions of the maximum wealth in the economy. Thus, the whole distribution is determined by only two parameters. One of them relates to the exponent of the distribution of society wealth and the other refers to the donation amount of the kindest poorest person. The parameters reflect the degree of wealth inequality and the charitable enthusiasm of society, respectively. Using actual donation data, we develop a specific parameter estimation method combining linear regression and the Kolmogorov-Smirnov (KS) statistic to get the value of two socioeconomic indicators. Applications to Chinese individual donations in response to the 2004 Indian Ocean tsunami and the 2008 Wenchuan earthquake indicate a rising inequality in social wealth distribution in China. Also, more charitable enthusiasm is observed in the response to the 2008 Wenchuan earthquake.
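The combined regression-plus-KS estimation idea can be illustrated on a power-law tail. This is a sketch only: the authors' model has two parameters with specific socioeconomic interpretations, while the toy below fits a generic CCDF exponent and reports the KS distance of the fit:

```python
import math

def fit_exponent_and_ks(donations):
    """Estimate a power-law tail exponent by linear regression of the
    log empirical CCDF on log donation size, then report the KS
    distance between the empirical and fitted CCDFs."""
    xs = sorted(donations)
    n = len(xs)
    ccdf = [(n - i) / n for i in range(n)]        # P(X >= x_i)
    lx = [math.log(x) for x in xs]
    ly = [math.log(c) for c in ccdf]
    mx, my = sum(lx) / n, sum(ly) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(lx, ly))
             / sum((a - mx) ** 2 for a in lx))
    intercept = my - slope * mx
    fitted = [math.exp(intercept + slope * a) for a in lx]
    ks = max(abs(c - f) for c, f in zip(ccdf, fitted))
    return slope, ks
```

A steeper (more negative) slope corresponds to less inequality in the fitted tail, and a small KS distance indicates the power-law form describes the donation data well.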
Ronald E. McRoberts
2005-01-01
Uncertainty in model-based predictions of individual tree diameter growth is attributed to three sources: measurement error for predictor variables, residual variability around model predictions, and uncertainty in model parameter estimates. Monte Carlo simulations are used to propagate the uncertainty from the three sources through a set of diameter growth models to...
Smart, Jonathan J.; Chin, Andrew; Baje, Leontine; Green, Madeline E.; Appleyard, Sharon A.; Tobin, Andrew J.; Simpfendorfer, Colin A.; White, William T.
2016-01-01
Fisheries observer programs are used around the world to collect crucial information and samples that inform fisheries management. However, observer error may misidentify similar-looking shark species. This raises questions about the level of error that species misidentifications could introduce to estimates of species’ life history parameters. This study addressed these questions using the Grey Reef Shark Carcharhinus amblyrhynchos as a case study. Observer misidentification rates were quantified by validating species identifications using diagnostic photographs taken on board supplemented with DNA barcoding. Length-at-age and maturity ogive analyses were then estimated and compared with and without the misidentified individuals. Vertebrae were retained from a total of 155 sharks identified by observers as C. amblyrhynchos. However, 22 (14%) of these sharks were misidentified by the observers and were subsequently re-identified based on photographs and/or DNA barcoding. Of the 22 individuals misidentified as C. amblyrhynchos, 16 (73%) were detected using photographs and a further 6 via genetic validation. If misidentified individuals had been included, substantial error would have been introduced to both the length-at-age and the maturity estimates. Thus, validating the species identification increased the accuracy of estimated life history parameters for C. amblyrhynchos. From the corrected sample, a multi-model inference approach was used to estimate growth for C. amblyrhynchos using three candidate models. The model averaged length-at-age parameters for C. amblyrhynchos with the sexes combined were L¯∞ = 159 cm TL and L¯0 = 72 cm TL. Females mature at a greater length (l50 = 136 cm TL) and older age (A50 = 9.1 years) than males (l50 = 123 cm TL; A50 = 5.9 years).
The inclusion of techniques to reduce misidentification in observer programs will improve the results of life history studies and ultimately improve management through the use of more accurate data for assessments. PMID:27058734
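The reported length-at-age and maturity results can be sketched numerically. This is not the authors' fitting code: L∞, L0, and the female l50 are taken from the abstract, while the growth coefficient k and the ogive slope are assumed values that the abstract does not report.

```python
import math

def vbgf_length(age, l_inf=159.0, l0=72.0, k=0.15):
    """Von Bertalanffy length-at-age (cm TL), L0 parameterisation:
    L(a) = L_inf - (L_inf - L0) * exp(-k * a). k = 0.15 is assumed."""
    return l_inf - (l_inf - l0) * math.exp(-k * age)

def maturity_ogive(length, l50=136.0, slope=0.2):
    """Logistic maturity ogive: proportion mature at a given length.
    l50 = 136 cm TL is the reported female value; slope is assumed."""
    return 1.0 / (1.0 + math.exp(-slope * (length - l50)))
```

At age 0 the curve returns the length-at-birth L0, and at l50 the ogive returns exactly 0.5, which is what the two reported parameters mean.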
Adaptive and Personalized Plasma Insulin Concentration Estimation for Artificial Pancreas Systems.
Hajizadeh, Iman; Rashid, Mudassir; Samadi, Sediqeh; Feng, Jianyuan; Sevil, Mert; Hobbs, Nicole; Lazaro, Caterina; Maloney, Zacharie; Brandt, Rachel; Yu, Xia; Turksoy, Kamuran; Littlejohn, Elizabeth; Cengiz, Eda; Cinar, Ali
2018-05-01
The artificial pancreas (AP) system, a technology that automatically administers exogenous insulin in people with type 1 diabetes mellitus (T1DM) to regulate their blood glucose concentrations, necessitates the estimation of the amount of active insulin already present in the body to avoid overdosing. An adaptive and personalized plasma insulin concentration (PIC) estimator is designed in this work to accurately quantify the insulin present in the bloodstream. The proposed PIC estimation approach incorporates Hovorka's glucose-insulin model with the unscented Kalman filtering algorithm. Methods for the personalized initialization of the time-varying model parameters to individual patients are developed for improved estimator convergence. Data from 20 three-day-long closed-loop clinical experiments conducted with subjects with T1DM are used to evaluate the proposed PIC estimation approach. The proposed methods are applied to the clinical data containing significant disturbances, such as unannounced meals and exercise, and the results demonstrate accurate real-time estimation of the PIC, with root mean square errors of 7.15 and 9.25 mU/L for the optimization-based fitted parameters and partial least squares regression-based testing parameters, respectively. The accurate real-time estimation of PIC will benefit AP systems by preventing overdelivery of insulin when significant insulin is present in the bloodstream.
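The estimation idea can be sketched with a toy one-compartment linear insulin model in place of Hovorka's nonlinear model, so a plain Kalman filter stands in for the paper's unscented Kalman filter; the rate constant, distribution volume, noise variances, and initial state below are all illustrative assumptions.

```python
import numpy as np

def estimate_pic(measurements, infusions, dt=5.0, ke=0.14, v_d=12.0,
                 q=0.5, r=4.0):
    """Toy plasma insulin concentration (PIC) estimator: a linear Kalman
    filter on a one-compartment insulin model. ke (1/min) and v_d (L) are
    assumed values; q and r are process/measurement noise variances."""
    a = np.exp(-ke * dt)              # discrete-time decay over one step
    x, p = 10.0, 25.0                 # initial estimate (mU/L) and variance
    out = []
    for z, u in zip(measurements, infusions):
        x = a * x + (1.0 - a) * u / (ke * v_d)   # predict: decay + infusion
        p = a * a * p + q
        k_gain = p / (p + r)                     # update with measurement z
        x = x + k_gain * (z - x)
        p = (1.0 - k_gain) * p
        out.append(x)
    return np.array(out)
```

The filter blends the model's predicted decay with each noisy measurement, which is the same predict/update structure the unscented filter applies to the nonlinear physiological model.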
Inverse sampling regression for pooled data.
Montesinos-López, Osval A; Montesinos-López, Abelardo; Eskridge, Kent; Crossa, José
2017-06-01
Because pools are tested instead of individuals in group testing, this technique is helpful for estimating prevalence in a population or for classifying a large number of individuals into two groups at a low cost. For this reason, group testing is a well-known means of saving costs and producing precise estimates. In this paper, we developed a mixed-effects group testing regression that is useful when the data-collecting process is performed using inverse sampling. This model allows including covariate information at the individual level to incorporate heterogeneity among individuals and to identify which covariates are associated with positive individuals. We present an approach to fit this model using maximum likelihood and we performed a simulation study to evaluate the quality of the estimates. Based on the simulation study, we found that the proposed regression method for inverse sampling with group testing produces parameter estimates with low bias when the pre-specified number of positive pools (r) required to stop the sampling process is at least 10 and the number of clusters in the sample is also at least 10. We performed an application with real data and we provide NLMIXED code that researchers can use to implement this method.
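The inverse-sampling setting can be sketched for the simplest case of a homogeneous population with equal pool sizes and no covariates (the paper's mixed-effects regression generalizes this); the prevalence, pool size, and stopping count r below are arbitrary illustrative values.

```python
import random

def simulate_inverse_sampling(p, pool_size, r, rng):
    """Test pools of `pool_size` individuals until r positive pools are
    observed; return the number of pools tested (a negative binomial
    waiting time under inverse sampling)."""
    pi = 1.0 - (1.0 - p) ** pool_size   # probability a pool tests positive
    positives = pools = 0
    while positives < r:
        pools += 1
        if rng.random() < pi:
            positives += 1
    return pools

def prevalence_estimate(r, m, pool_size):
    """Individual-level prevalence from r positive pools out of m tested.
    (r-1)/(m-1) is the classical unbiased estimator of the pool-positive
    probability under negative binomial (inverse) sampling."""
    pi_hat = (r - 1) / (m - 1)
    return 1.0 - (1.0 - pi_hat) ** (1.0 / pool_size)

rng = random.Random(1)
m = simulate_inverse_sampling(p=0.05, pool_size=5, r=10, rng=rng)
p_hat = prevalence_estimate(10, m, 5)
```

The back-transformation 1 - (1 - pi)^(1/k) converts the estimated pool-level probability into an individual-level prevalence for pools of k individuals.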
NASA Astrophysics Data System (ADS)
Zhao, Y.; Hu, Q.
2017-09-01
Continuous development of urban road traffic systems demands higher standards for the road ecological environment, and the ecological benefits of street trees are receiving increasing attention. The carbon sequestration of street trees refers to their carbon stocks, which can serve as a measure of their ecological benefits. Estimating carbon sequestration in the traditional way is costly and inefficient. To address these problems, a carbon sequestration estimation approach for street trees based on 3D point clouds from a vehicle-borne laser scanning system is proposed in this paper. The method can measure the geometric parameters of a street tree, including tree height, crown width, and diameter at breast height (DBH), by processing and analyzing the point cloud data of an individual tree. Four Chinese scholartree trees and four camphor trees were selected for the experiment. The root mean square error (RMSE) of tree height is 0.11 m for Chinese scholartree and 0.02 m for camphor. Crown widths in the X and Y directions, as well as the average crown width, are calculated; the RMSE of average crown width is 0.22 m for Chinese scholartree and 0.10 m for camphor. The last calculated parameter is DBH, whose RMSE is 0.5 cm for both Chinese scholartree and camphor. Combining the measured geometric parameters with an appropriate carbon sequestration calculation model, an individual tree's carbon sequestration can be estimated. The proposed method can broaden the application range of vehicle-borne laser point cloud data, improve the efficiency of carbon sequestration estimation, and support urban ecological construction and landscape management.
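The DBH step can be illustrated with a standard algebraic (Kasa) least-squares circle fit to a thin horizontal slice of trunk points at breast height; the synthetic 0.30 m trunk below is illustrative, not the paper's data or its exact algorithm.

```python
import numpy as np

def fit_circle(xs, ys):
    """Algebraic (Kasa) least-squares circle fit: returns (cx, cy, radius).
    The circle x^2 + y^2 - 2*cx*x - 2*cy*y - c = 0 is linear in (cx, cy, c),
    so a single lstsq solve recovers the centre and radius."""
    A = np.column_stack([2 * xs, 2 * ys, np.ones_like(xs)])
    b = xs**2 + ys**2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    radius = np.sqrt(c + cx**2 + cy**2)
    return cx, cy, radius

# Synthetic breast-height slice: a 0.30 m diameter trunk with 2 mm noise
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, 200)
xs = 1.0 + 0.15 * np.cos(theta) + rng.normal(0.0, 0.002, 200)
ys = 2.0 + 0.15 * np.sin(theta) + rng.normal(0.0, 0.002, 200)
cx, cy, r = fit_circle(xs, ys)
dbh = 2.0 * r  # estimated DBH in metres, close to 0.30
```

The fitted diameter of the slice is the DBH estimate; repeating the fit per tree against field-measured DBH is how an RMSE like the reported 0.5 cm would be computed.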
NASA Astrophysics Data System (ADS)
Hwang, Jiwon; Choi, Yong-Sang; Kim, WonMoo; Su, Hui; Jiang, Jonathan H.
2018-01-01
The high-latitude climate system contains complicated, but largely veiled, physical feedback processes. Climate predictions remain uncertain, especially for the Northern High Latitudes (NHL; north of 60°N), and observational constraint on climate modeling is vital. This study estimates local radiative feedbacks for the NHL based on CERES/Terra satellite observations during March 2000-November 2014. The local shortwave (SW) and longwave (LW) radiative feedback parameters are calculated from linear regression of radiative fluxes at the top of the atmosphere on surface air temperatures, after de-seasonalization and a 12-month moving average of the radiative fluxes over the NHL. The estimated magnitudes of the SW and LW radiative feedbacks in the NHL are 1.88 ± 0.73 and 2.38 ± 0.59 W m-2 K-1, respectively. The parameters are further decomposed into individual feedback components associated with surface albedo, water vapor, lapse rate, and clouds, as a product of the changes in climate variables from ERA-Interim reanalysis estimates and their pre-calculated radiative kernels. The results reveal the significant role of clouds in reducing the surface albedo feedback (1.13 ± 0.44 W m-2 K-1 in the cloud-free condition, and 0.49 ± 0.30 W m-2 K-1 in the all-sky condition), while the lapse rate feedback is predominant in LW radiation (1.33 ± 0.18 W m-2 K-1). However, a large portion of the local SW and LW radiative feedbacks is not simply explained by the sum of these individual feedbacks.
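The regression-based feedback estimate can be sketched as follows, with synthetic monthly series standing in for the CERES fluxes and the temperatures; the deseasonalization and 12-month moving average mirror the described procedure, but the data and the imposed 2.0 W m-2 K-1 slope are illustrative only.

```python
import numpy as np

def feedback_parameter(flux, temp, window=12):
    """Radiative feedback (W m-2 K-1) as the regression slope of TOA flux
    anomalies on surface air temperature anomalies, after removing the
    mean seasonal cycle and applying a moving average. Series lengths must
    be whole years (multiples of 12 months)."""
    def deseasonalize(x):
        x = np.asarray(x, dtype=float)
        clim = x.reshape(-1, 12).mean(axis=0)      # monthly climatology
        return x - np.tile(clim, len(x) // 12)
    def smooth(x):
        return np.convolve(x, np.ones(window) / window, mode="valid")
    f = smooth(deseasonalize(flux))
    t = smooth(deseasonalize(temp))
    slope, _ = np.polyfit(t, f, 1)
    return slope

# Synthetic 15-year monthly record with a known 2.0 W m-2 K-1 feedback
rng = np.random.default_rng(1)
n_months = 15 * 12
season = np.tile(np.sin(2.0 * np.pi * np.arange(12) / 12), 15)
t_anom = rng.normal(0.0, 1.0, n_months)
temp = 260.0 + 5.0 * season + t_anom
flux = 180.0 + 10.0 * season + 2.0 * t_anom + rng.normal(0.0, 0.1, n_months)
lam = feedback_parameter(flux, temp)
```

Deseasonalizing both series first prevents the shared seasonal cycle from dominating the regression; the recovered slope is the feedback parameter.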
Development of a highly automated system for the remote evaluation of individual tree parameters
Richard Pollock
2000-01-01
A highly automated procedure for remotely estimating individual tree location, crown diameter, species class, and height has been developed. The procedure involves the use of a multimodal airborne sensing system that consists of a digital frame camera, a scanning laser rangefinder, and a position and orientation measurement system. Data from the multimodal sensing...
Fundamental Rotorcraft Acoustic Modeling From Experiments (FRAME)
NASA Technical Reports Server (NTRS)
Greenwood, Eric
2011-01-01
A new methodology is developed for the construction of helicopter source noise models for use in mission planning tools from experimental measurements of helicopter external noise radiation. The models are constructed by applying a parameter identification method to an assumed analytical model of the rotor harmonic noise sources. This new method allows for the identification of individual rotor harmonic noise sources and allows them to be characterized in terms of their individual non-dimensional governing parameters. The method is applied to both wind tunnel measurements and ground noise measurements of two-bladed rotors. The method is shown to match the parametric trends of main rotor harmonic noise, allowing accurate estimates of the dominant rotorcraft noise sources to be made for operating conditions based on a small number of measurements taken at different operating conditions. The ability of this method to estimate changes in noise radiation due to changes in ambient conditions is also demonstrated.
Inter-Individual Variability in High-Throughput Risk ...
We incorporate realistic human variability into an open-source high-throughput (HT) toxicokinetics (TK) modeling framework for use in a next-generation risk prioritization approach. Risk prioritization involves rapid triage of thousands of environmental chemicals, most of which have little or no existing TK data. Chemicals are prioritized based on model estimates of hazard and exposure to decide which chemicals should be first in line for further study. Hazard may be estimated with in vitro HT screening assays, e.g., U.S. EPA's ToxCast program. Bioactive ToxCast concentrations can be extrapolated to doses that produce equivalent concentrations in body tissues using a reverse TK approach in which generic TK models are parameterized with 1) chemical-specific parameters derived from in vitro measurements and predicted from chemical structure; and 2) physiological parameters for a virtual population. Here we draw physiological parameters from realistic estimates of distributions of demographic and anthropometric quantities in the modern U.S. population, based on the most recent CDC NHANES data. A Monte Carlo approach, accounting for the correlation structure in physiological parameters, is used to estimate ToxCast-equivalent doses for the most sensitive portion of the population. To quantify risk, ToxCast-equivalent doses are compared to estimates of exposure rates based on Bayesian inferences drawn from NHANES urinary analyte biomonitoring data. The inclusion
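The correlated Monte Carlo step can be sketched as below. The physiological means, covariance, clearance model, and bioactive concentration are illustrative assumptions, not NHANES-derived values or the framework's actual TK models.

```python
import numpy as np

# Draw correlated physiological parameters (body weight and a clearance-
# related flow; values are placeholders) and take a lower population
# quantile of the resulting ToxCast-equivalent dose.
rng = np.random.default_rng(0)
mean = np.array([70.0, 90.0])             # body weight (kg), liver flow (L/h)
cov = np.array([[100.0, 30.0],
                [30.0, 25.0]])            # assumed correlated variation
samples = rng.multivariate_normal(mean, cov, size=10000)
weight, flow = samples[:, 0], samples[:, 1]

# Toy reverse TK: steady-state concentration per unit dose rate, then the
# dose reproducing an assumed 3 uM bioactive in vitro concentration.
css_per_dose = 1.0 / (0.1 * flow / weight)
equiv_dose = 3.0 / css_per_dose
sensitive_dose = np.quantile(equiv_dose, 0.05)  # most-sensitive 5th percentile
```

Sampling from the joint (correlated) distribution rather than independently per parameter is the point: ignoring the correlation structure distorts the tails, and it is the tail (the most sensitive individuals) that drives prioritization.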
Antoch, Marina P; Wrobel, Michelle; Kuropatwinski, Karen K; Gitlin, Ilya; Leonova, Katerina I; Toshkov, Ilia; Gleiberman, Anatoli S; Hutson, Alan D; Chernova, Olga B; Gudkov, Andrei V
2017-03-19
The development of healthspan-extending pharmaceuticals requires quantitative estimation of age-related progressive physiological decline. In humans, individual health status can be quantitatively assessed by means of a frailty index (FI), a parameter which reflects the scale of accumulation of age-related deficits. However, adaptation of this methodology to animal models is a challenging task since it includes multiple subjective parameters. Here we report the development of a quantitative non-invasive procedure to estimate the biological age of an individual animal by creating a physiological frailty index (PFI). We demonstrated the dynamics of PFI increase during chronological aging of male and female NIH Swiss mice. We also demonstrated accelerated growth of PFI in animals placed on a high-fat diet, reflecting aging acceleration by obesity, and provide a tool for its quantitative assessment. Additionally, we showed that PFI could reveal the anti-aging effect of the mTOR inhibitor rapatar (a bioavailable formulation of rapamycin) prior to registration of its effects on longevity. PFI revealed substantial sex-related differences in normal chronological aging and in the efficacy of detrimental (high-fat diet) or beneficial (rapatar) aging-modulatory factors. Together, these data introduce PFI as a reliable, non-invasive, quantitative tool suitable for testing potential anti-aging pharmaceuticals in pre-clinical studies.
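A frailty index is, at its core, the fraction of assessed deficits an individual exhibits. A minimal sketch, with placeholder deficit items rather than the paper's actual PFI panel:

```python
# Each item scores 1 if the deficit is present, 0 if absent; the frailty
# index is the proportion of deficits present. Items here are illustrative.
deficits = {
    "low_grip_strength": 1, "slow_gait": 0, "weight_loss": 1,
    "low_activity": 0, "poor_vision": 0, "hearing_loss": 1,
    "tremor": 0, "kyphosis": 0,
}
frailty_index = sum(deficits.values()) / len(deficits)  # 3 of 8 deficits
```

Because the index is a simple proportion, it rises monotonically as deficits accumulate with age, which is what makes it usable as a biological-age readout.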
Lichstein, Jeremy W; Dushoff, Jonathan; Ogle, Kiona; Chen, Anping; Purves, Drew W; Caspersen, John P; Pacala, Stephen W
2010-04-01
Geographically extensive forest inventories, such as the USDA Forest Service's Forest Inventory and Analysis (FIA) program, contain millions of individual tree growth and mortality records that could be used to develop broad-scale models of forest dynamics. A limitation of inventory data, however, is that individual-level measurements of light (L) and other environmental factors are typically absent. Thus, inventory data alone cannot be used to parameterize mechanistic models of forest dynamics in which individual performance depends on light, water, nutrients, etc. To overcome this limitation, we developed methods to estimate species-specific parameters (θG) relating sapling growth (G) to L using data sets in which G, but not L, is observed for each sapling. Our approach involves: (1) using calibration data that we collected in both eastern and western North America to quantify the probability that saplings receive different amounts of light, conditional on covariates x that can be obtained from inventory data (e.g., sapling crown class and neighborhood crowding); and (2) combining these probability distributions with observed G and x to estimate θG using Bayesian computational methods. Here, we present a test case using a data set in which G, L, and x were observed for saplings of nine species. This test data set allowed us to compare estimates of θG obtained from the standard approach (where G and L are observed for each sapling) to our method (where G and x, but not L, are observed). For all species, estimates of θG obtained from analyses with and without observed L were similar. This suggests that our approach should be useful for estimating light-dependent growth functions from inventory data that lack direct measurements of L.
Our approach could be extended to estimate parameters relating sapling mortality to L from inventory data, as well as to deal with uncertainty in other resources (e.g., water or nutrients) or environmental factors (e.g., temperature).
NASA Astrophysics Data System (ADS)
Li, Xia; Welch, E. Brian; Arlinghaus, Lori R.; Bapsi Chakravarthy, A.; Xu, Lei; Farley, Jaime; Loveless, Mary E.; Mayer, Ingrid A.; Kelley, Mark C.; Meszoely, Ingrid M.; Means-Powell, Julie A.; Abramson, Vandana G.; Grau, Ana M.; Gore, John C.; Yankeelov, Thomas E.
2011-09-01
Quantitative analysis of dynamic contrast enhanced magnetic resonance imaging (DCE-MRI) data requires the accurate determination of the arterial input function (AIF). A novel method for obtaining the AIF is presented here and pharmacokinetic parameters derived from individual and population-based AIFs are then compared. A Philips 3.0 T Achieva MR scanner was used to obtain 20 DCE-MRI data sets from ten breast cancer patients prior to and after one cycle of chemotherapy. Using a semi-automated method to estimate the AIF from the axillary artery, we obtain the AIF for each patient, AIFind, and compute a population-averaged AIF, AIFpop. The extended standard model is used to estimate the physiological parameters using the two types of AIFs. The mean concordance correlation coefficient (CCC) for the AIFs segmented manually and by the proposed AIF tracking approach is 0.96, indicating accurate and automatic tracking of an AIF in DCE-MRI data of the breast is possible. Regarding the kinetic parameters, the CCC values for Ktrans, vp and ve as estimated by AIFind and AIFpop are 0.65, 0.74 and 0.31, respectively, based on the region of interest analysis. The average CCC values for the voxel-by-voxel analysis are 0.76, 0.84 and 0.68 for Ktrans, vp and ve, respectively. This work indicates that Ktrans and vp show good agreement between AIFpop and AIFind, while agreement on ve is weak.
Hu, Xiao Hua; Sun, X.; Hector, Jr., L. G.; ...
2017-04-21
Here, microstructure-based constitutive models for multiphase steels require accurate constitutive properties of the individual phases for component forming and performance simulations. We address this requirement with a combined experimental/theoretical methodology which determines the critical resolved shear stresses and hardening parameters of the constituent phases in QP980, a TRIP-assisted steel subject to a two-step quenching and partitioning heat treatment. High-energy X-ray diffraction (HEXRD) from a synchrotron source provided the average lattice strains of the ferrite, martensite, and austenite phases from the measured volume during in situ tensile deformation. The HEXRD data were then input to a computationally efficient, elastic-plastic self-consistent (EPSC) crystal plasticity model which estimated the constitutive parameters of different slip systems for the three phases via a trial-and-error approach. The EPSC-estimated parameters were then input to a finite element crystal plasticity (CPFE) model representing the QP980 tensile sample. The CPFE-predicted lattice strains and global stress versus strain curves are found to be 8% lower than the EPSC model predictions and the HEXRD measurements, respectively. This discrepancy, which is attributed to the stiff secant assumption in the EPSC formulation, is resolved with a second step in which CPFE is used to iteratively refine the EPSC-estimated parameters. Remarkably close agreement is obtained between the theoretically predicted and experimentally derived flow curves for the QP980 material.
Waniewski, Jacek; Antosiewicz, Stefan; Baczynski, Daniel; Poleszczuk, Jan; Pietribiasi, Mauro; Lindholm, Bengt; Wankowicz, Zofia
2016-01-01
During peritoneal dialysis (PD), the peritoneal membrane undergoes ageing processes that affect its function. Here we analyzed associations of patient age and dialysis vintage with parameters of peritoneal transport of fluid and solutes, directly measured and estimated based on the pore model, for individual patients. Thirty-three patients (15 females; age 60 (21–87) years; median time on PD 19 (3–100) months) underwent sequential peritoneal equilibration test. Dialysis vintage and patient age did not correlate. Estimation of parameters of the two-pore model of peritoneal transport was performed. The estimated fluid transport parameters, including hydraulic permeability (LpS), fraction of ultrasmall pores (αu), osmotic conductance for glucose (OCG), and peritoneal absorption, were generally independent of solute transport parameters (diffusive mass transport parameters). Fluid transport parameters correlated whereas transport parameters for small solutes and proteins did not correlate with dialysis vintage and patient age. Although LpS and OCG were lower for older patients and those with long dialysis vintage, αu was higher. Thus, fluid transport parameters—rather than solute transport parameters—are linked to dialysis vintage and patient age and should therefore be included when monitoring processes linked to ageing of the peritoneal membrane. PMID:26989432
Effects of sampling close relatives on some elementary population genetics analyses.
Wang, Jinliang
2018-01-01
Many molecular ecology analyses assume the genotyped individuals are sampled at random from a population and thus are representative of the population. Realistically, however, a sample may contain excessive close relatives (ECR) because, for example, localized juveniles are drawn from fecund species. Our knowledge is limited about how ECR affect the routinely conducted elementary genetics analyses, and how ECR are best dealt with to yield unbiased and accurate parameter estimates. This study quantifies the effects of ECR on some popular population genetics analyses of marker data, including the estimation of allele frequencies, F-statistics, expected heterozygosity (He), effective and observed numbers of alleles, and the tests of Hardy-Weinberg equilibrium (HWE) and linkage equilibrium (LE). It also investigates several strategies for handling ECR to mitigate their impact and to yield accurate parameter estimates. My analytical work, assisted by simulations, shows that ECR have large and global effects on all of the above marker analyses. The naïve approach, which simply ignores ECR, could yield low-precision and often biased parameter estimates, and could cause too many false rejections of HWE and LE. The bold approach, which simply identifies and removes ECR, and the cautious approach, which estimates target parameters (e.g., He) by accounting for ECR and using naïve allele frequency estimates, eliminate the bias and the false HWE and LE rejections, but could reduce estimation precision substantially. The likelihood approach, which accounts for ECR in estimating allele frequencies and thus target parameters relying on allele frequencies, usually yields unbiased and the most accurate parameter estimates. Which of the four approaches is the most effective and efficient may depend on the particular marker analysis to be conducted. The results are discussed in the context of using marker data for understanding population properties and marker properties.
© 2017 John Wiley & Sons Ltd.
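The naive expected-heterozygosity estimator discussed above, the one that excessive close relatives bias, can be sketched directly; the paper's likelihood approach instead corrects the allele frequency estimates for relatedness before computing such quantities.

```python
import numpy as np

def expected_heterozygosity(genotypes):
    """Naive He = 1 - sum(p_i^2) from a sample of diploid genotypes
    (pairs of alleles), using raw sample allele frequencies p_i."""
    alleles = np.asarray(genotypes).ravel()
    _, counts = np.unique(alleles, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - float(np.sum(p ** 2))

# 50 heterozygotes at a biallelic locus: p = (0.5, 0.5), so He = 0.5
he = expected_heterozygosity([(0, 1)] * 50)
```

When the sample contains duplicated family genotypes, the raw frequencies p_i no longer estimate the population frequencies without bias, and every statistic built on them (He, F-statistics, HWE tests) inherits the problem.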
A Preliminary Attempt at Sintering an Ultrafine Alumina Powder Using Microwaves
1994-09-01
and unusual properties [Ref. B4]. Dielectric properties of individual ceramic phases differ depending on parameters such as composition...useful parameter is an estimate of the amount of power dissipated into a dielectric with a known effective loss factor. For a high frequency electric...cavities, and their influence in ceramic samples must be considered. Therefore scattering, diffraction, interference, and reflection and refraction
Brinker, T; Raymond, B; Bijma, P; Vereijken, A; Ellen, E D
2017-02-01
Mortality of laying hens due to cannibalism is a major problem in the egg-laying industry. Survival depends on two genetic effects: the direct genetic effect of the individual itself (DGE) and the indirect genetic effects of its group mates (IGE). For hens housed in sire-family groups, DGE and IGE cannot be estimated using pedigree information, but the combined effect of DGE and IGE is estimated in the total breeding value (TBV). Genomic information provides information on actual genetic relationships between individuals and might be a tool to improve TBV accuracy. We investigated whether genomic information of the sire increased TBV accuracy compared with pedigree information, and we estimated genetic parameters for survival time. A sire model with pedigree information (BLUP) and a sire model with genomic information (ssGBLUP) were used. We used survival time records of 7290 crossbred offspring with intact beaks from four crosses. Cross-validation was used to compare the models. Using ssGBLUP did not improve TBV accuracy compared with BLUP, which is probably due to the limited number of sires available per cross (~50). Genetic parameter estimates were similar for BLUP and ssGBLUP. For both BLUP and ssGBLUP, total heritable variance (T2), expressed as a proportion of phenotypic variance, ranged from 0.03 ± 0.04 to 0.25 ± 0.09. Further research is needed on breeding value estimation for socially affected traits measured on individuals kept in single-family groups. © 2016 The Authors. Journal of Animal Breeding and Genetics Published by Blackwell Verlag GmbH.
Sample size determination for GEE analyses of stepped wedge cluster randomized trials.
Li, Fan; Turner, Elizabeth L; Preisser, John S
2018-06-19
In stepped wedge cluster randomized trials, intact clusters of individuals switch from control to intervention from a randomly assigned period onwards. Such trials are becoming increasingly popular in health services research. When a closed cohort is recruited from each cluster for longitudinal follow-up, proper sample size calculation should account for three distinct types of intraclass correlations: the within-period, the inter-period, and the within-individual correlations. Setting the latter two correlation parameters to be equal accommodates cross-sectional designs. We propose sample size procedures for continuous and binary responses within the framework of generalized estimating equations that employ a block exchangeable within-cluster correlation structure defined from the distinct correlation types. For continuous responses, we show that the intraclass correlations affect power only through two eigenvalues of the correlation matrix. We demonstrate that analytical power agrees well with simulated power for as few as eight clusters, when data are analyzed using bias-corrected estimating equations for the correlation parameters concurrently with a bias-corrected sandwich variance estimator. © 2018, The International Biometric Society.
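The block exchangeable within-cluster correlation structure described above can be constructed explicitly from the three correlation types; the values below are illustrative. The sketch also exposes the small set of distinct eigenvalues through which, per the abstract, the intraclass correlations affect power for continuous responses.

```python
import numpy as np

def block_exchangeable(n, t, a0, a1, a2):
    """Within-cluster correlation matrix for a closed-cohort stepped wedge:
    a0 = within-period, a1 = inter-period, a2 = within-individual
    correlation, for n individuals followed over t periods. Setting
    a1 == a2 recovers the cross-sectional design."""
    i_n, j_n = np.eye(n), np.ones((n, n))
    i_t, j_t = np.eye(t), np.ones((t, t))
    same_period = (1 - a0) * i_n + a0 * j_n    # diagonal blocks
    diff_period = (a2 - a1) * i_n + a1 * j_n   # off-diagonal blocks
    return np.kron(i_t, same_period) + np.kron(j_t - i_t, diff_period)

R = block_exchangeable(n=10, t=4, a0=0.05, a1=0.02, a2=0.4)
distinct = np.unique(np.round(np.linalg.eigvalsh(R), 8))
```

The Kronecker construction places the same-period correlation a0 within each period block, a2 on the same-individual positions across periods, and a1 everywhere else, with unit diagonal.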
Combined Estimation of Hydrogeologic Conceptual Model and Parameter Uncertainty
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meyer, Philip D.; Ye, Ming; Neuman, Shlomo P.
2004-03-01
The objective of the research described in this report is the development and application of a methodology for comprehensively assessing the hydrogeologic uncertainties involved in dose assessment, including uncertainties associated with conceptual models, parameters, and scenarios. This report describes and applies a statistical method to quantitatively estimate the combined uncertainty in model predictions arising from conceptual model and parameter uncertainties. The method relies on model averaging to combine the predictions of a set of alternative models. Implementation is driven by the available data. When there is minimal site-specific data the method can be carried out with prior parameter estimates based on generic data and subjective prior model probabilities. For sites with observations of system behavior (and optionally data characterizing model parameters), the method uses model calibration to update the prior parameter estimates and model probabilities based on the correspondence between model predictions and site observations. The set of model alternatives can contain both simplified and complex models, with the requirement that all models be based on the same set of data. The method was applied to the geostatistical modeling of air permeability at a fractured rock site. Seven alternative variogram models of log air permeability were considered to represent data from single-hole pneumatic injection tests in six boreholes at the site. Unbiased maximum likelihood estimates of variogram and drift parameters were obtained for each model. Standard information criteria provided an ambiguous ranking of the models, which would not justify selecting one of them and discarding all others as is commonly done in practice. Instead, some of the models were eliminated based on their negligibly small updated probabilities and the rest were used to project the measured log permeabilities by kriging onto a rock volume containing the six boreholes.
These four projections, and associated kriging variances, were averaged using the posterior model probabilities as weights. Finally, cross-validation was conducted by eliminating from consideration all data from one borehole at a time, repeating the above process, and comparing the predictive capability of the model-averaged result with that of each individual model. Using two quantitative measures of comparison, the model-averaged result was superior to any individual geostatistical model of log permeability considered.
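The model-averaging step, weighting per-model predictions and kriging variances by posterior model probabilities, can be sketched as follows; the numbers are illustrative, not the site's results.

```python
import numpy as np

def model_average(means, variances, posterior_probs):
    """Posterior-probability-weighted average of per-model predictions.
    Total variance = weighted within-model variance plus the between-model
    spread of the predictions around the averaged mean."""
    p = np.asarray(posterior_probs, dtype=float)
    m = np.asarray(means, dtype=float)
    v = np.asarray(variances, dtype=float)
    mean = np.sum(p * m)
    var = np.sum(p * (v + (m - mean) ** 2))
    return mean, var

# Three surviving models with posterior probabilities 0.5, 0.3, 0.2
mean_bma, var_bma = model_average([2.0, 2.4, 1.8],
                                  [0.10, 0.15, 0.20],
                                  [0.5, 0.3, 0.2])
```

The between-model term (m - mean)^2 is what distinguishes combined conceptual-model-plus-parameter uncertainty from the usual single-model kriging variance: disagreement among models inflates the total predictive variance.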
MIXOR: a computer program for mixed-effects ordinal regression analysis.
Hedeker, D; Gibbons, R D
1996-03-01
MIXOR provides maximum marginal likelihood estimates for mixed-effects ordinal probit, logistic, and complementary log-log regression models. These models can be used for analysis of dichotomous and ordinal outcomes from either a clustered or longitudinal design. For clustered data, the mixed-effects model assumes that data within clusters are dependent. The degree of dependency is jointly estimated with the usual model parameters, thus adjusting for dependence resulting from clustering of the data. Similarly, for longitudinal data, the mixed-effects approach can allow for individual-varying intercepts and slopes across time, and can estimate the degree to which these time-related effects vary in the population of individuals. MIXOR uses marginal maximum likelihood estimation, utilizing a Fisher-scoring solution. For the scoring solution, the Cholesky factor of the random-effects variance-covariance matrix is estimated, along with the effects of model covariates. Examples illustrating usage and features of MIXOR are provided.
NASA Astrophysics Data System (ADS)
Wayson, Michael B.; Bolch, Wesley E.
2018-04-01
Internal radiation dose estimates for diagnostic nuclear medicine procedures are typically calculated for a reference individual. As a result, there is uncertainty in the organ doses determined for patients who are not at the 50th percentile in height or weight. This study aims to better personalize internal radiation dose estimates for individual patients by modifying the dose estimates calculated for reference individuals based on easily obtainable morphometric characteristics of the patient. Phantoms of different sitting heights and waist circumferences were constructed based on computational reference phantoms for the newborn, 10 year-old, and adult. Monoenergetic photons and electrons were then simulated separately at 15 energies. Photon and electron specific absorbed fractions (SAFs) were computed for the newly constructed non-reference phantoms and compared to SAFs previously generated for the age-matched reference phantoms. Differences in SAFs were correlated to changes in sitting height and waist circumference to develop scaling factors that could be applied to reference SAFs as morphometry corrections. A further set of arbitrary non-reference phantoms were then constructed and used in validation studies for the SAF scaling factors. Both photon and electron dose scaling methods were found to increase average accuracy when sitting height was used as the scaling parameter (~11%). Photon waist circumference-based scaling factors showed modest increases in average accuracy (~7%) for underweight individuals, but not for overweight individuals. Electron waist circumference-based scaling factors did not show increases in average accuracy. When sitting height and waist circumference scaling factors were combined, modest average gains in accuracy were observed for photons (~6%), but not for electrons. Both photon and electron absorbed doses are more reliably scaled using scaling factors computed in this study.
They can be effectively scaled using sitting height alone as the patient-specific morphometric parameter.
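A minimal sketch of the kind of morphometry correction described, assuming a simple power-law scaling in sitting height; the exponent, heights, and SAF value are hypothetical stand-ins, not the scaling factors derived in the study.

```python
# Hypothetical correction: scale a reference SAF by a power of the
# patient-to-reference sitting-height ratio. exponent = -1.0 is a
# stand-in, not a fitted value.
def scaled_saf(saf_ref, sitting_height_patient, sitting_height_ref,
               exponent=-1.0):
    return saf_ref * (sitting_height_patient / sitting_height_ref) ** exponent

# a shorter-than-reference patient gets a larger self-dose SAF here
s = scaled_saf(saf_ref=2.0e-3, sitting_height_patient=80.0,
               sitting_height_ref=88.0)
```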
A robust measure of HIV-1 population turnover within chronically infected individuals.
Achaz, G; Palmer, S; Kearney, M; Maldarelli, F; Mellors, J W; Coffin, J M; Wakeley, J
2004-10-01
A simple nonparametric test for population structure was applied to temporally spaced samples of HIV-1 sequences from the gag-pol region within two chronically infected individuals. The results show that temporal structure can be detected for samples separated by about 22 months or more. The performance of the method, which was originally proposed to detect geographic structure, was tested for temporally spaced samples using neutral coalescent simulations. Simulations showed that the method is robust to variation in sample sizes and mutation rates and to the presence/absence of recombination, and that the power to detect temporal structure is high. By comparing levels of temporal structure in simulations to the levels observed in real data, we estimate the effective intra-individual population size of HIV-1 to be between 10^3 and 10^4 viruses, which is in agreement with some previous estimates. Using this estimate and a simple measure of sequence diversity, we estimate an effective neutral mutation rate of about 5 × 10^-6 per site per generation in the gag-pol region. The definition and interpretation of estimates of such "effective" population parameters are discussed.
Gould, William R.; Kendall, William L.
2013-01-01
Capture-recapture methods were initially developed to estimate human population abundance, but since that time have seen widespread use for fish and wildlife populations to estimate and model various parameters of population, metapopulation, and disease dynamics. Repeated sampling of marked animals provides information for estimating abundance and tracking the fate of individuals in the face of imperfect detection. Mark types have evolved from clipping or tagging to use of noninvasive methods such as photography of natural markings and DNA collection from feces. Survival estimation has been emphasized more recently as have transition probabilities between life history states and/or geographical locations, even where some states are unobservable or uncertain. Sophisticated software has been developed to handle highly parameterized models, including environmental and individual covariates, to conduct model selection, and to employ various estimation approaches such as maximum likelihood and Bayesian approaches. With these user-friendly tools, complex statistical models for studying population dynamics have been made available to ecologists. The future will include a continuing trend toward integrating data types, both for tagged and untagged individuals, to produce more precise and robust population models.
Tradeoffs among watershed model calibration targets for parameter estimation
Hydrologic models are commonly calibrated by optimizing a single objective function target to compare simulated and observed flows, although individual targets are influenced by specific flow modes. Nash-Sutcliffe efficiency (NSE) emphasizes flood peaks in evaluating simulation f...
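The peak-weighting behaviour of NSE noted above follows directly from its definition; a minimal implementation with made-up flow values:

```python
# Nash-Sutcliffe efficiency; because residuals are squared, errors at high
# (flood-peak) flows dominate the score. 1.0 indicates a perfect fit.
def nse(observed, simulated):
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

obs = [1.0, 2.0, 10.0, 3.0]   # one flood peak at 10.0
sim = [1.1, 1.9, 9.0, 3.2]
score = nse(obs, sim)
```

Here the single 1.0-unit error at the peak contributes 94% of the residual sum of squares, illustrating why a single NSE target under-constrains low-flow behaviour.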
Schröder, Arne; Kalinkat, Gregor; Arlinghaus, Robert
2016-12-01
Functional responses are per-capita feeding rate models whose parameters often scale with individual body size, but the parameters may also be further influenced by behavioural traits consistently differing among individuals, i.e. behavioural types or animal personalities. Behavioural types may intrinsically lead to lower feeding rates when consistently shy, inactive and easily stressed individuals cannot identify or respond to risk-free environments, or need less food due to lower metabolic rates linked to behaviour. To test how much variation in functional response parameters is explained by body size and how much by behavioural types, we estimated attack rate and handling time individually for differently sized female least killifish (Heterandria formosa) and repeatedly measured behavioural traits for each individual. We found that individual fish varied substantially in their attack rate and in their handling time. Behavioural traits were stable over time and varied consistently among individuals along two distinct personality axes. The individual variation in functional responses was explained solely by body size and, contrary to our expectations, not additionally by the existing behavioural types in exploration activity and coping style. While behavioural trait-dependent functional responses may offer a route to understanding the food-web-level consequences of behavioural types, our study is so far only the second one on this topic. Importantly, our results indicate, in contrast to that previous study, that behavioural types do not per se affect individual functional responses assessed in the absence of external biotic stressors.
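The attack rate and handling time estimated per individual are the parameters of the Holling type-II functional response; a sketch with illustrative parameter values, not the killifish estimates:

```python
# Holling type-II ("disc equation") functional response:
# f(N) = a*N / (1 + a*h*N), with attack rate a and handling time h.
def feeding_rate(prey_density, attack_rate, handling_time):
    a_n = attack_rate * prey_density
    return a_n / (1.0 + a_n * handling_time)

rate_low = feeding_rate(prey_density=5.0, attack_rate=0.8, handling_time=0.1)
rate_high = feeding_rate(prey_density=500.0, attack_rate=0.8, handling_time=0.1)
# at high prey density the rate saturates toward 1 / handling_time
```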
NASA Astrophysics Data System (ADS)
Pleban, J. R.; Mackay, D. S.; Ewers, B. E.; Weinig, C.; Aston, T.
2015-12-01
Challenges in terrestrial ecosystem modeling include characterizing the impact of stress on vegetation and the heterogeneous behavior of different species within the environment. In an effort to address these challenges, the impacts of drought and nutrient limitation on the CO2 assimilation of multiple genotypes of Brassica rapa were investigated using the Farquhar Model (FM) of photosynthesis following a Bayesian parameterization and updating scheme. Leaf gas exchange and chlorophyll fluorescence measurements from an unstressed group (well-watered/well-fertilized) and two stressed groups (drought/well-fertilized and well-watered/nutrient limited) were used to estimate FM model parameters. Unstressed individuals were used to initialize Bayesian parameter estimation. Posterior mean estimates yielded a close fit with data as observed assimilation (An) closely matched predicted (Ap), with mean standard error for all individuals ranging from 0.8 to 3.1 μmol CO2 m^-2 s^-1. Posterior parameter distributions of the unstressed individuals were combined and fit to distributions to establish species-level Bayesian priors of FM parameters for testing stress responses. Species-level distributions of the unstressed group identified mean maximum rates of carboxylation standardized to 25 °C (Vcmax25) as 101.8 μmol m^-2 s^-1 (± 29.0) and mean maximum rates of electron transport standardized to 25 °C (Jmax25) as 319.7 μmol m^-2 s^-1 (± 64.4). These updated priors were used to test the response of drought and nutrient limitations on assimilation. In the well-watered/nutrient limited group a decrease of 28.0 μmol m^-2 s^-1 was observed in the mean estimate of Vcmax25, a decrease of 27.9 μmol m^-2 s^-1 in Jmax25, and a decrease in quantum yield from 0.40 mol photon/mol e- in unstressed individuals to 0.14 in the nutrient limited group. In the drought/well-fertilized group a decrease was also observed in Vcmax25 and Jmax25.
The genotype specific unstressed and stressed responses were then used to parameterize an ecosystem process model with application at the field scale to investigate mechanisms of stress response in B. rapa by testing a variety of functional forms to limit assimilation in hydraulic or nutrient limited conditions.
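To make the role of the posterior Vcmax25 estimate concrete, the Rubisco-limited branch of the Farquhar model can be sketched as follows; the kinetic constants are common textbook-order values at 25 °C and the dark respiration term is a stand-in, not the study's fitted parameters.

```python
# Rubisco-limited assimilation from the Farquhar model:
# Ac = Vcmax * (Ci - Gamma*) / (Ci + Kc * (1 + O / Ko)) - Rd
# ci, gamma_star, kc in umol mol^-1; o, ko in mmol mol^-1;
# result in umol CO2 m^-2 s^-1. Constants are illustrative only.
def rubisco_limited_assimilation(vcmax, ci, gamma_star=42.75, kc=404.9,
                                 ko=278.4, o=210.0, rd=1.0):
    return vcmax * (ci - gamma_star) / (ci + kc * (1.0 + o / ko)) - rd

# feed in a Vcmax25 of the magnitude reported for the unstressed group
a_c = rubisco_limited_assimilation(vcmax=101.8, ci=250.0)
```

At the CO2 compensation point (ci equal to gamma_star) gross assimilation vanishes and the function returns -Rd, as expected.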
A Functional Varying-Coefficient Single-Index Model for Functional Response Data
Li, Jialiang; Huang, Chao; Zhu, Hongtu
2016-01-01
Motivated by the analysis of imaging data, we propose a novel functional varying-coefficient single index model (FVCSIM) to carry out the regression analysis of functional response data on a set of covariates of interest. FVCSIM represents a new extension of varying-coefficient single index models for scalar responses collected from cross-sectional and longitudinal studies. An efficient estimation procedure is developed to iteratively estimate varying coefficient functions, link functions, index parameter vectors, and the covariance function of individual functions. We systematically examine the asymptotic properties of all estimators including the weak convergence of the estimated varying coefficient functions, the asymptotic distribution of the estimated index parameter vectors, and the uniform convergence rate of the estimated covariance function and their spectrum. Simulation studies are carried out to assess the finite-sample performance of the proposed procedure. We apply FVCSIM to investigating the development of white matter diffusivities along the corpus callosum skeleton obtained from Alzheimer's Disease Neuroimaging Initiative (ADNI) study.
[The maximum heart rate in the exercise test: the 220-age formula or Sheffield's table?].
Mesquita, A; Trabulo, M; Mendes, M; Viana, J F; Seabra-Gomes, R
1996-02-01
To determine whether the maximum heart rate in exercise tests of apparently healthy individuals is more properly estimated by the 220-age formula (Astrand) or by the Sheffield table. Retrospective analysis of the clinical histories and exercise tests of apparently healthy individuals submitted to cardiac check-up. Sequential sample of 170 healthy individuals submitted to cardiac check-up between April 1988 and September 1992. Comparison of the maximum heart rates of individuals studied under the Bruce and modified Bruce protocols, in exercise tests interrupted by fatigue, with the values estimated by the two formulae: 220-age versus the Sheffield table. The maximum heart rate was similar with both protocols. In normal individuals this parameter is better predicted by the 220-age formula. The theoretical maximum heart rate determined by the 220-age formula should be recommended for healthy individuals, and for this reason the Sheffield table has been excluded from our clinical practice.
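The 220-age rule compared against the Sheffield table in this study is trivially computable:

```python
# Theoretical maximum heart rate (beats per minute) from the 220-age formula.
def hr_max_220(age):
    return 220 - age

hr_max = hr_max_220(40)  # 180 bpm for a 40-year-old
```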
Gupta, Manan; Joshi, Amitabh; Vidya, T N C
2017-01-01
Mark-recapture estimators are commonly used for population size estimation, and typically yield unbiased estimates for most solitary species with low to moderate home range sizes. However, these methods assume independence of captures among individuals, an assumption that is clearly violated in social species that show fission-fusion dynamics, such as the Asian elephant. In the specific case of Asian elephants, doubts have been raised about the accuracy of population size estimates. More importantly, the potential problem for the use of mark-recapture methods posed by social organization in general has not been systematically addressed. We developed an individual-based simulation framework to systematically examine the potential effects of type of social organization, as well as other factors such as trap density and arrangement, spatial scale of sampling, and population density, on bias in population sizes estimated by POPAN, Robust Design, and Robust Design with detection heterogeneity. In the present study, we ran simulations with biological, demographic and ecological parameters relevant to Asian elephant populations, but the simulation framework is easily extended to address questions relevant to other social species. We collected capture history data from the simulations, and used those data to test for bias in population size estimation. Social organization significantly affected bias in most analyses, but the effect sizes were variable, depending on other factors. Social organization tended to introduce large bias when trap arrangement was uniform and sampling effort was low. POPAN clearly outperformed the two Robust Design models we tested, yielding close to zero bias if traps were arranged at random in the study area, and when population density and trap density were not too low. Social organization did not have a major effect on bias for these parameter combinations at which POPAN gave more or less unbiased population size estimates. 
Therefore, the effect of social organization on bias in population estimation could be removed by using POPAN with specific parameter combinations, to obtain population size estimates in a social species.
Adverse Selection and an Individual Mandate: When Theory Meets Practice*
Hackmann, Martin B.; Kolstad, Jonathan T.; Kowalski, Amanda E.
2014-01-01
We develop a model of selection that incorporates a key element of recent health reforms: an individual mandate. Using data from Massachusetts, we estimate the parameters of the model. In the individual market for health insurance, we find that premiums and average costs decreased significantly in response to the individual mandate. We find an annual welfare gain of 4.1% per person or $51.1 million annually in Massachusetts as a result of the reduction in adverse selection. We also find smaller post-reform markups.
Scalable Parameter Estimation for Genome-Scale Biochemical Reaction Networks
Kaltenbacher, Barbara; Hasenauer, Jan
2017-01-01
Mechanistic mathematical modeling of biochemical reaction networks using ordinary differential equation (ODE) models has improved our understanding of small- and medium-scale biological processes. While the same should in principle hold for large- and genome-scale processes, the computational methods for the analysis of ODE models which describe hundreds or thousands of biochemical species and reactions are missing so far. While individual simulations are feasible, the inference of the model parameters from experimental data is computationally too intensive. In this manuscript, we evaluate adjoint sensitivity analysis for parameter estimation in large scale biochemical reaction networks. We present the approach for time-discrete measurement and compare it to state-of-the-art methods used in systems and computational biology. Our comparison reveals a significantly improved computational efficiency and a superior scalability of adjoint sensitivity analysis. The computational complexity is effectively independent of the number of parameters, enabling the analysis of large- and genome-scale models. Our study of a comprehensive kinetic model of ErbB signaling shows that parameter estimation using adjoint sensitivity analysis requires a fraction of the computation time of established methods. The proposed method will facilitate mechanistic modeling of genome-scale cellular processes, as required in the age of omics.
Multirate state and parameter estimation in an antibiotic fermentation with delayed measurements.
Gudi, R D; Shah, S L; Gray, M R
1994-12-01
This article discusses issues related to estimation and monitoring of fermentation processes that exhibit endogenous metabolism and time-varying maintenance activity. Such culture-related activities hamper the use of traditional, software sensor-based algorithms, such as the extended Kalman filter (EKF). In the approach presented here, the individual effects of the endogenous decay and the true maintenance processes have been lumped to represent a modified maintenance coefficient, m(c). Model equations that relate measurable process outputs, such as the carbon dioxide evolution rate (CER) and biomass, to the observable process parameters (such as net specific growth rate and the modified maintenance coefficient) are proposed. These model equations are used in an estimator that can formally accommodate delayed, infrequent measurements of the culture states (such as the biomass) as well as frequent, culture-related secondary measurements (such as the CER). The resulting multirate software sensor-based estimation strategy is used to monitor biomass profiles as well as profiles of critical fermentation parameters, such as the specific growth rate, for a fed-batch fermentation of Streptomyces clavuligerus.
Simon, Steven L; Hoffman, F Owen; Hofer, Eduard
2015-01-01
Retrospective dose estimation, particularly dose reconstruction that supports epidemiological investigations of health risk, relies on various strategies that include models of physical processes and exposure conditions with detail ranging from simple to complex. Quantification of dose uncertainty is an essential component of assessments for health risk studies since, as is well understood, it is impossible to retrospectively determine the true dose for each person. To address uncertainty in dose estimation, numerical simulation tools have become commonplace and there is now an increased understanding about the needs and what is required for models used to estimate cohort doses (in the absence of direct measurement) to evaluate dose response. It now appears that for dose-response algorithms to derive the best, unbiased estimate of health risk, we need to understand the type, magnitude and interrelationships of the uncertainties of model assumptions, parameters and input data used in the associated dose estimation models. Heretofore, uncertainty analysis of dose estimates did not always properly distinguish between categories of errors, e.g., uncertainty that is specific to each subject (i.e., unshared error), and uncertainty of doses from a lack of understanding and knowledge about parameter values that are shared to varying degrees by members of subsets of the cohort. While mathematical propagation of errors by Monte Carlo simulation methods has been used for years to estimate the uncertainty of an individual subject's dose, it was almost always conducted without consideration of dependencies between subjects. In retrospect, these types of simple analyses are not suitable for studies with complex dose models, particularly when important input data are missing or otherwise not available.
The dose estimation strategy presented here is a simulation method that corrects the previous deficiencies of analytical or simple Monte Carlo error propagation methods and is termed, due to its capability to maintain separation between shared and unshared errors, the two-dimensional Monte Carlo (2DMC) procedure. Simply put, the 2DMC method simulates alternative, possibly true, sets (or vectors) of doses for an entire cohort rather than a single set that emerges when each individual's dose is estimated independently from other subjects. Moreover, estimated doses within each simulated vector maintain proper inter-relationships such that the estimated doses for members of a cohort subgroup that share common lifestyle attributes and sources of uncertainty are properly correlated. The 2DMC procedure simulates inter-individual variability of possibly true doses within each dose vector and captures the influence of uncertainty in the values of dosimetric parameters across multiple realizations of possibly true vectors of cohort doses. The primary characteristic of the 2DMC approach, as well as its strength, are defined by the proper separation between uncertainties shared by members of the entire cohort or members of defined cohort subsets, and uncertainties that are individual-specific and therefore unshared.
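The nested structure of the 2DMC procedure can be sketched as follows; the outer loop samples uncertainty shared by the whole cohort and the inner loop samples unshared, subject-specific variability. The lognormal error distributions and 10 mGy baseline are hypothetical stand-ins for the study's dosimetric models.

```python
import random

# Two-dimensional Monte Carlo sketch: each outer iteration produces one
# possibly-true dose vector for the entire cohort, so doses within a
# vector remain correlated through the shared factor.
def simulate_dose_vectors(n_vectors, n_subjects, seed=0):
    rng = random.Random(seed)
    vectors = []
    for _ in range(n_vectors):                 # outer: shared uncertainty
        shared_factor = rng.lognormvariate(0.0, 0.3)
        vector = [10.0 * shared_factor * rng.lognormvariate(0.0, 0.5)
                  for _ in range(n_subjects)]  # inner: unshared variability
        vectors.append(vector)
    return vectors

vecs = simulate_dose_vectors(n_vectors=200, n_subjects=50)
```

Estimating each subject's dose independently would correspond to collapsing the outer loop, which is exactly the deficiency of the simple error-propagation methods described above.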
Morin, Dana J.; Fuller, Angela K.; Royle, J. Andrew; Sutherland, Chris
2017-01-01
Conservation and management of spatially structured populations is challenging because solutions must consider where individuals are located, but also differential individual space use as a result of landscape heterogeneity. A recent extension of spatial capture–recapture (SCR) models, the ecological distance model, uses spatial encounter histories of individuals (e.g., a record of where individuals are detected across space, often sequenced over multiple sampling occasions), to estimate the relationship between space use and characteristics of a landscape, allowing simultaneous estimation of both local densities of individuals across space and connectivity at the scale of individual movement. We developed two model-based estimators derived from the SCR ecological distance model to quantify connectivity over a continuous surface: (1) potential connectivity—a metric of the connectivity of areas based on resistance to individual movement; and (2) density-weighted connectivity (DWC)—potential connectivity weighted by estimated density. Estimates of potential connectivity and DWC can provide spatial representations of areas that are most important for the conservation of threatened species, or management of abundant populations (i.e., areas with high density and landscape connectivity), and thus generate predictions that have great potential to inform conservation and management actions. We used a simulation study with a stationary trap design across a range of landscape resistance scenarios to evaluate how well our model estimates resistance, potential connectivity, and DWC. Correlation between true and estimated potential connectivity was high, and there was positive correlation and high spatial accuracy between estimated DWC and true DWC. We applied our approach to data collected from a population of black bears in New York, and found that forested areas represented low levels of resistance for black bears. 
We demonstrate that formal inference about measures of landscape connectivity can be achieved from standard methods of studying animal populations which yield individual encounter history data such as camera trapping. Resulting biological parameters including resistance, potential connectivity, and DWC estimate the spatial distribution and connectivity of the population within a statistical framework, and we outline applications to many possible conservation and management problems.
Estimation of real-time runway surface contamination using flight data recorder parameters
NASA Astrophysics Data System (ADS)
Curry, Donovan
Within this research effort, the development of an analytic process for friction coefficient estimation is presented. Under static equilibrium, the sum of forces and moments acting on the aircraft, in the aircraft body coordinate system, while on the ground at any instant is equal to zero. Under this premise, the longitudinal, lateral and normal forces due to landing are calculated along with the individual deceleration components existent when an aircraft comes to a rest during ground roll. In order to validate this hypothesis, a six-degree-of-freedom aircraft model was created and landing tests were simulated on different surfaces. The simulated aircraft model includes a high fidelity aerodynamic model, thrust model, landing gear model, friction model and antiskid model. Three main surfaces were defined in the friction model: dry, wet, and snow/ice. Only the parameters recorded by an FDR are used directly from the aircraft model; all others are estimated or known a priori. The estimation of unknown parameters is also presented in the research effort. With all needed parameters, a comparison and validation of simulated and estimated data, under different runway conditions, is performed. Finally, this report presents results of a sensitivity analysis in order to provide a measure of reliability of the analytic estimation process. Linear and non-linear sensitivity analyses have been performed in order to quantify the level of uncertainty implicit in modeling estimated parameters and how they can affect the calculation of the instantaneous coefficient of friction. Reconstructing the instantaneous coefficient of friction from force and moment equilibrium about the CG at landing yields a reasonably accurate estimate when compared to the simulated friction coefficient. This remains true when white noise is added to the FDR and estimated parameters and when crosswind is introduced to the simulation.
The linear analysis shows that the minimum frequency at which the algorithm still provides moderately accurate data is 2 Hz. In addition, the linear analysis shows that with estimated parameters increased and decreased by up to 25% at random, high-priority parameters have to be accurate to within at least +/-5% to produce less than a 1% change in the average coefficient of friction. Non-linear analysis results show that the algorithm can be considered reasonably accurate for all simulated cases when inaccuracies in the estimated parameters vary randomly and simultaneously by up to +/-27%. In the worst case, the maximum percentage change in the average coefficient of friction is less than 10% for all surfaces.
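A highly simplified version of the force-balance idea, assuming pure braking with lift optional and thrust and drag neglected; the actual method solves the full force and moment equilibrium, so the numbers here are illustrative only.

```python
# mu = braking force / normal load for a pure braking deceleration.
def friction_coefficient(mass_kg, decel_ms2, lift_n=0.0, g=9.81):
    normal_load = mass_kg * g - lift_n   # weight minus residual lift
    braking_force = mass_kg * decel_ms2  # F = m * a during ground roll
    return braking_force / normal_load

# hypothetical transport-category aircraft decelerating at 3 m/s^2
mu_dry = friction_coefficient(mass_kg=60000.0, decel_ms2=3.0)
```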
Multivariate meta-analysis with an increasing number of parameters.
Boca, Simina M; Pfeiffer, Ruth M; Sampson, Joshua N
2017-05-01
Meta-analysis can average estimates of multiple parameters, such as a treatment's effect on multiple outcomes, across studies. Univariate meta-analysis (UVMA) considers each parameter individually, while multivariate meta-analysis (MVMA) considers the parameters jointly and accounts for the correlation between their estimates. The performance of MVMA and UVMA has been extensively compared in scenarios with two parameters. Our objective is to compare the performance of MVMA and UVMA as the number of parameters, p, increases. Specifically, we show that (i) for fixed-effect (FE) meta-analysis, the benefit from using MVMA can substantially increase as p increases; (ii) for random effects (RE) meta-analysis, the benefit from MVMA can increase as p increases, but the potential improvement is modest in the presence of high between-study variability and the actual improvement is further reduced by the need to estimate an increasingly large between study covariance matrix; and (iii) when there is little to no between-study variability, the loss of efficiency due to choosing RE MVMA over FE MVMA increases as p increases. We demonstrate these three features through theory, simulation, and a meta-analysis of risk factors for non-Hodgkin lymphoma.
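The UVMA baseline in this comparison is ordinary inverse-variance pooling applied to each parameter separately; a fixed-effect sketch with made-up study estimates and variances:

```python
# Univariate fixed-effect meta-analysis for one parameter: inverse-variance
# pooling. MVMA would additionally exploit the covariance between the
# parameters' estimates.
def fixed_effect_pool(estimates, variances):
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    return pooled, 1.0 / sum(weights)   # pooled estimate and its variance

# three hypothetical studies reporting the same parameter
est, var = fixed_effect_pool([0.2, 0.4, 0.3], [0.04, 0.08, 0.02])
```

The pooled variance is always smaller than the smallest single-study variance, which is the efficiency MVMA then improves upon by borrowing strength across parameters.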
Troy: A simple nonlinear mathematical perspective
NASA Astrophysics Data System (ADS)
Flores, J. C.; Bologna, Mauro
2013-10-01
In this paper, we propose a mathematical model for the Trojan war that, supposedly, took place around 1180 BC. Supported by archaeological findings and by Homer’s Iliad, we estimate the numbers of warriors, the struggle rate parameters, the number of individuals per hectare, and other related quantities. We show that the long siege of the city, described in the Iliad, is compatible with a power-law behaviour for the time evolution of the number of individuals. We are able to evaluate the parameters of our model during the phase of the siege and the fall. The proposed model is general, and it can be applied to other historical conflicts.
Acoustic characteristics of voice after severe traumatic brain injury.
McHenry, M
2000-07-01
To describe the acoustic characteristics of voice in individuals with motor speech disorders after traumatic brain injury (TBI). Prospective study of 100 individuals with TBI based on consecutive referrals for motor speech evaluations. Subjects were audio tape-recorded while producing sustained vowels and single word and sentence intelligibility tests. Laryngeal airway resistance was estimated, and voice quality was rated perceptually. None of the subjects evidenced vocal parameters within normal limits. The most frequently occurring abnormal parameter across subjects was amplitude perturbation, followed by voice turbulence index. Twenty-three percent of subjects evidenced deviation in all five parameters measured. The perceptual ratings of breathiness were significantly correlated with both the amplitude perturbation quotient and the noise-to-harmonics ratio. Vocal quality deviation is common in motor speech disorders after TBI and may impact intelligibility.
The Utility of Selection for Military and Civilian Jobs
1989-07-01
parsimonious use of information; the relative ease in making threshold (break-even) judgments compared to estimating actual SDy values higher than a... threshold value, even though judges are unlikely to agree on the exact point estimate for the SDy parameter; and greater understanding of how even small...ability, spatial ability, introversion, anxiety) considered to vary or differ across individuals. A construct (sometimes called a latent variable) is not
Multi-scale comparison of source parameter estimation using empirical Green's function approach
NASA Astrophysics Data System (ADS)
Chen, X.; Cheng, Y.
2015-12-01
Analysis of earthquake source parameters requires correction of path effects, site response, and instrument responses. The empirical Green's function (EGF) method is one of the most effective ways to remove path effects and station responses, by taking the spectral ratio between a larger and a smaller event. The traditional EGF method requires identifying suitable event pairs and analyzing each event individually. This allows high-quality estimates for strictly selected events; however, the quantity of resolvable source parameters is limited, which challenges the interpretation of spatial-temporal coherency. On the other hand, methods that exploit the redundancy of event-station pairs have been proposed, which utilize the stacking technique to obtain systematic source parameter estimations for a large quantity of events at the same time. This allows us to examine a large quantity of events systematically, facilitating analysis of spatial-temporal patterns and scaling relationships. However, it is unclear how much resolution is sacrificed during this process. In addition to the empirical Green's function calculation, the choice of model parameters and fitting methods also leads to biases. Here, using two regionally focused arrays, the OBS array in the Mendocino region and the borehole array in the Salton Sea geothermal field, we compare the results from the large-scale stacking analysis, small-scale cluster analysis, and single event-pair analysis with different fitting methods within completely different tectonic environments, in order to quantify the consistency and inconsistency in source parameter estimations, and the associated problems.
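The core of the EGF spectral-ratio idea can be sketched with the standard Brune omega-square source model: path and site terms common to the co-located pair cancel in the ratio, leaving only the two source terms, whose corner frequencies can then be fit. The synthetic demonstration below (values chosen for illustration) is a generic sketch, not the authors' processing chain.

```python
import numpy as np
from scipy.optimize import curve_fit

def brune(f, omega0, fc):
    """Brune omega-square source spectrum."""
    return omega0 / (1.0 + (f / fc) ** 2)

def spectral_ratio(f, moment_ratio, fc_large, fc_small):
    """Ratio of a large event's spectrum to a co-located small event's:
    common path/site terms cancel, leaving only the two source terms."""
    return moment_ratio * (1 + (f / fc_small) ** 2) / (1 + (f / fc_large) ** 2)

# Synthetic demonstration: the shared attenuation term cancels in the ratio.
f = np.logspace(-1, 2, 200)
path = np.exp(-0.05 * f)                         # common attenuation/site term
big = brune(f, 1e3, 2.0) * path                  # larger event, fc = 2 Hz
small = brune(f, 1.0, 20.0) * path               # EGF event, fc = 20 Hz
ratio = big / small
popt, _ = curve_fit(spectral_ratio, f, ratio, p0=[ratio[0], 1.0, 10.0])
```

With noiseless synthetic spectra the fit recovers the moment ratio and both corner frequencies; in practice the low-frequency plateau of the ratio provides the natural starting value for the moment ratio, as used for `p0` here.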
Gardner, Beth; Reppucci, Juan; Lucherini, Mauro; Royle, J. Andrew
2010-01-01
We develop a hierarchical capture–recapture model for demographically open populations when auxiliary spatial information about location of capture is obtained. Such spatial capture–recapture data arise from studies based on camera trapping, DNA sampling, and other situations in which a spatial array of devices records encounters of unique individuals. We integrate an individual-based formulation of a Jolly-Seber type model with recently developed spatially explicit capture–recapture models to estimate density and demographic parameters for survival and recruitment. We adopt a Bayesian framework for inference under this model using the method of data augmentation which is implemented in the software program WinBUGS. The model was motivated by a camera trapping study of Pampas cats Leopardus colocolo from Argentina, which we present as an illustration of the model in this paper. We provide estimates of density and the first quantitative assessment of vital rates for the Pampas cat in the High Andes. The precision of these estimates is poor due likely to the sparse data set. Unlike conventional inference methods which usually rely on asymptotic arguments, Bayesian inferences are valid in arbitrary sample sizes, and thus the method is ideal for the study of rare or endangered species for which small data sets are typical.
Puttaswamy, Kavitha A.; Puttabudhi, Jaishankar H.; Raju, Shashidara
2017-01-01
Aims and Objectives: The purpose of this study was to estimate and assess any correlation between random capillary blood glucose (RCBG) and unstimulated whole salivary glucose (UWSG), as well as to estimate various salivary parameters, such as flow rate, pH, buffering capacity, and the influence of these factors on the oral health status in type 2 diabetes mellitus (DM). Materials and Methods: Sixty individuals suffering from type 2 DM and 40 healthy individuals in the age group of 30–60 years were included in the study. RCBG was estimated using a glucometer and UWSG using a photocolorimeter. Salivary parameters such as flow rate, pH, and buffering capacity were assessed using the GC® Saliva kit. Oral health status was recorded using the Russell's periodontal index (RPI) and the Decayed Missing Filled Teeth (DMFT) index. The Statistical Package for the Social Sciences version 16 was used for statistical analysis. Results: Type 2 diabetics had higher mean values for RCBG levels and UWSG. Type 2 diabetics had low mean salivary flow rate, pH, and buffering capacity. Type 2 diabetics had higher mean values for RPI. Conclusion: Among the salivary factors studied, salivary glucose significantly influenced the periodontal status in Type 2 diabetics. PMID:28316946
Estimating loss of Brucella abortus antibodies from age-specific serological data in elk
Benavides, J. A.; Caillaud, D.; Scurlock, B. M.; Maichak, E. J.; Edwards, W.H.; Cross, Paul C.
2017-01-01
Serological data are one of the primary sources of information for disease monitoring in wildlife. However, the duration of the seropositive status of exposed individuals is almost always unknown for many free-ranging host species. Directly estimating rates of antibody loss typically requires difficult longitudinal sampling of individuals following seroconversion. Instead, we propose a Bayesian statistical approach linking age and serological data to a mechanistic epidemiological model to infer brucellosis infection, the probability of antibody loss, and recovery rates of elk (Cervus canadensis) in the Greater Yellowstone Ecosystem. We found that seroprevalence declined above the age of ten, with no evidence of disease-induced mortality. The probability of antibody loss was estimated to be 0.70 per year after a five-year period of seropositivity and the basic reproduction number for brucellosis to 2.13. Our results suggest that individuals are unlikely to become re-infected because models with this mechanism were unable to reproduce a significant decline in seroprevalence in older individuals. This study highlights the possible implications of antibody loss, which could bias our estimation of critical epidemiological parameters for wildlife disease management based on serological data.
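A much-simplified version of the link between infection, antibody loss, and age-seroprevalence is the reversible catalytic model: under a constant force of infection and a constant seroreversion rate, age-specific seroprevalence has a closed form. This is a textbook sketch for intuition only, not the authors' full mechanistic Bayesian model (which includes recovery, delayed antibody loss, and demographic structure).

```python
import math

def seroprevalence(age, foi, loss_rate):
    """Reversible catalytic model: expected seroprevalence at a given age
    under a constant per-year force of infection `foi` and a constant
    per-year antibody-loss (seroreversion) rate `loss_rate`."""
    r = foi + loss_rate
    return foi / r * (1.0 - math.exp(-r * age))
```

With no antibody loss the curve rises toward 1 with age; with seroreversion it plateaus at foi / (foi + loss_rate), which is why ignoring antibody loss can badly bias exposure estimates from serological data.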
A combined surface/volume scattering retracking algorithm for ice sheet satellite altimetry
NASA Technical Reports Server (NTRS)
Davis, Curt H.
1992-01-01
An algorithm that is based upon a combined surface-volume scattering model is developed. It can be used to retrack individual altimeter waveforms over ice sheets. An iterative least-squares procedure is used to fit the combined model to the return waveforms. The retracking algorithm comprises two distinct sections. The first generates initial model parameter estimates from a filtered altimeter waveform. The second uses the initial estimates, the theoretical model, and the waveform data to generate corrected parameter estimates. This retracking algorithm can be used to assess the accuracy of elevations produced from current retracking algorithms when subsurface volume scattering is present. This is extremely important so that repeated altimeter elevation measurements can be used to accurately detect changes in the mass balance of the ice sheets. By analyzing the distribution of the model parameters over large portions of the ice sheet, regional and seasonal variations in the near-surface properties of the snowpack can be quantified.
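The two-stage structure described above (crude initial estimates from the waveform, then iterative least-squares refinement against a physical model) can be sketched as follows. The error-function leading edge used here is a simplified surface-only return, not Davis's combined surface/volume scattering model; the parameter names are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.special import erf

def surface_return(t, amplitude, t0, width):
    """Idealized surface-scattering altimeter return: an error-function
    leading edge centered on the retracking point t0."""
    return 0.5 * amplitude * (1.0 + erf((t - t0) / width))

def retrack(t, waveform):
    """Two-stage retracker sketch: initial estimates read directly off the
    waveform, then iterative least-squares refinement of the model fit."""
    amp0 = waveform.max()
    t0 = t[np.argmin(np.abs(waveform - 0.5 * amp0))]   # half-power point
    fit = least_squares(lambda p: surface_return(t, *p) - waveform,
                        x0=[amp0, t0, 1.0])
    return fit.x

# Synthetic check on a noiseless waveform.
t = np.linspace(0.0, 50.0, 256)
params = retrack(t, surface_return(t, 100.0, 20.0, 2.5))
```

The refined `t0` is the quantity that ultimately sets the surface elevation; the fitted shape parameters are what the abstract proposes to map regionally to characterize the snowpack.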
Computation of Standard Errors
Dowd, Bryan E; Greene, William H; Norton, Edward C
2014-01-01
Objectives We discuss the problem of computing the standard errors of functions involving estimated parameters and provide the relevant computer code for three different computational approaches using two popular computer packages. Study Design We show how to compute the standard errors of several functions of interest: the predicted value of the dependent variable for a particular subject, and the effect of a change in an explanatory variable on the predicted value of the dependent variable for an individual subject and average effect for a sample of subjects. Empirical Application Using a publicly available dataset, we explain three different methods of computing standard errors: the delta method, Krinsky–Robb, and bootstrapping. We provide computer code for Stata 12 and LIMDEP 10/NLOGIT 5. Conclusions In most applications, choice of the computational method for standard errors of functions of estimated parameters is a matter of convenience. However, when computing standard errors of the sample average of functions that involve both estimated parameters and nonstochastic explanatory variables, it is important to consider the sources of variation in the function's values. PMID:24800304
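Two of the three computational approaches named above can be sketched in a few lines for a scalar function of one estimated parameter (the bootstrap is omitted for brevity, since it requires the original data). The example function g(theta) = exp(theta) and the numbers are illustrative, not from the paper.

```python
import numpy as np

def delta_method_se(theta_hat, se, grad):
    """Delta method: SE of g(theta) is approximately |g'(theta_hat)| * SE."""
    return abs(grad(theta_hat)) * se

def krinsky_robb_se(theta_hat, se, g, draws=100000, seed=0):
    """Krinsky-Robb: simulate parameters from their estimated (normal)
    sampling distribution and take the std. dev. of the function's values."""
    rng = np.random.default_rng(seed)
    return np.std(g(rng.normal(theta_hat, se, size=draws)))

# Example: g(theta) = exp(theta) with theta_hat = 1.0 and SE = 0.05.
dm = delta_method_se(1.0, 0.05, grad=np.exp)   # exp is its own derivative
kr = krinsky_robb_se(1.0, 0.05, g=np.exp)
```

For a smooth function and a small SE the two methods agree closely; they diverge when g is strongly nonlinear over the sampling distribution of theta, which is one reason the choice is usually a matter of convenience rather than correctness.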
Patient-individualized boundary conditions for CFD simulations using time-resolved 3D angiography.
Boegel, Marco; Gehrisch, Sonja; Redel, Thomas; Rohkohl, Christopher; Hoelter, Philip; Doerfler, Arnd; Maier, Andreas; Kowarschik, Markus
2016-06-01
Hemodynamic simulations are of increasing interest for the assessment of aneurysmal rupture risk and treatment planning. Achievement of accurate simulation results requires the usage of several patient-individual boundary conditions, such as a geometric model of the vasculature but also individualized inflow conditions. We propose the automatic estimation of various parameters for boundary conditions for computational fluid dynamics (CFD) based on a single 3D rotational angiography scan, also showing contrast agent inflow. First, the data are reconstructed, and a patient-specific vessel model can be generated in the usual way. For this work, we optimize the inflow waveform based on two parameters, the mean velocity and pulsatility. We use statistical analysis of the measurable velocity distribution in the vessel segment to estimate the mean velocity. An iterative optimization scheme based on CFD and virtual angiography is utilized to estimate the inflow pulsatility. Furthermore, we present methods to automatically determine the heart rate and synchronize the inflow waveform to the patient's heart beat, based on time-intensity curves extracted from the rotational angiogram. This results in a patient-individualized inflow velocity curve. The proposed methods were evaluated on two clinical datasets. Based on the vascular geometries, synthetic rotational angiography data were generated to allow a quantitative validation of our approach against ground truth data. We observed an average error of approximately [Formula: see text] for the mean velocity and [Formula: see text] for the pulsatility. The heart rate was estimated very precisely with an average error of about [Formula: see text], which corresponds to about 6 ms error for the duration of one cardiac cycle. Furthermore, a qualitative comparison of measured time-intensity curves from the real data and patient-specific simulated ones shows an excellent match.
The presented methods have the potential to accurately estimate patient-specific boundary conditions from a single dedicated rotational scan.
Alonso-Valerdi, Luz María
2016-01-01
A brain-computer interface (BCI) aims to establish communication between the human brain and a computing system so as to enable the interaction between an individual and his environment without using the brain output pathways. Individuals control a BCI system by modulating their brain signals through mental tasks (e.g., motor imagery or mental calculation) or sensory stimulation (e.g., auditory, visual, or tactile). As users modulate their brain signals at different frequencies and at different levels, the appropriate characterization of those signals is necessary. The modulation of brain signals through mental tasks is furthermore a skill that requires training. Unfortunately, not all users acquire such skill. A practical solution to this problem is to assess a user's probability of controlling a BCI system. Another possible solution is to set the bandwidth of the brain oscillations, which is highly sensitive to the user's age, sex, and anatomy. With this in mind, NeuroIndex, a Python executable script, estimates a neurophysiological prediction index and the individual alpha frequency (IAF) of the user in question. These two parameters are useful to characterize the user's EEG signals and decide how to go through the complex process of adapting the human brain and the computing system on the basis of previously proposed methods. NeuroIndex is not only the implementation of those methods; it also makes the methods complement each other and provides an alternative way to obtain the prediction parameter. However, an important limitation of this application is its dependency on the IAF value, and some results should be interpreted with caution. The script along with some electroencephalographic datasets are available on a GitHub repository in order to corroborate the functionality and usability of this application. PMID:27445783
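The IAF part of such a pipeline is commonly estimated as the dominant spectral peak of a resting-state EEG channel in the extended alpha band. The sketch below uses a standard Welch-periodogram peak pick on synthetic data; NeuroIndex's actual method (built on previously proposed approaches) may differ, and the band limits and sampling rate here are illustrative.

```python
import numpy as np
from scipy.signal import welch

def individual_alpha_frequency(eeg, fs, band=(7.0, 13.0)):
    """Estimate the IAF as the frequency of the largest spectral peak in
    the extended alpha band of a resting-state EEG channel."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)   # 0.5 Hz resolution
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return freqs[mask][np.argmax(psd[mask])]

# Synthetic check: a 10 Hz alpha oscillation buried in broadband noise.
fs = 256
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 10.0 * t) + 0.5 * rng.standard_normal(t.size)
iaf = individual_alpha_frequency(eeg, fs)
```

In practice the IAF is then used to set subject-specific band edges (e.g., IAF-2 to IAF+2 Hz) instead of fixed canonical bands.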
NASA Astrophysics Data System (ADS)
Campante, T. L.; Handberg, R.; Mathur, S.; Appourchaux, T.; Bedding, T. R.; Chaplin, W. J.; García, R. A.; Mosser, B.; Benomar, O.; Bonanno, A.; Corsaro, E.; Fletcher, S. T.; Gaulme, P.; Hekker, S.; Karoff, C.; Régulo, C.; Salabert, D.; Verner, G. A.; White, T. R.; Houdek, G.; Brandão, I. M.; Creevey, O. L.; Doǧan, G.; Bazot, M.; Christensen-Dalsgaard, J.; Cunha, M. S.; Elsworth, Y.; Huber, D.; Kjeldsen, H.; Lundkvist, M.; Molenda-Żakowicz, J.; Monteiro, M. J. P. F. G.; Stello, D.; Clarke, B. D.; Girouard, F. R.; Hall, J. R.
2011-10-01
Context. The evolved main-sequence Sun-like stars KIC 10273246 (F-type) and KIC 10920273 (G-type) were observed with the NASA Kepler satellite for approximately ten months with a duty cycle in excess of 90%. Such continuous and long observations are unprecedented for solar-type stars other than the Sun. Aims: We aimed mainly at extracting estimates of p-mode frequencies - as well as of other individual mode parameters - from the power spectra of the light curves of both stars, thus providing scope for a full seismic characterization. Methods: The light curves were corrected for instrumental effects in a manner independent of the Kepler science pipeline. Estimation of individual mode parameters was based both on the maximization of the likelihood of a model describing the power spectrum and on a classic prewhitening method. Finally, we employed a procedure for selecting frequency lists to be used in stellar modeling. Results: A total of 30 and 21 modes of degree l = 0,1,2 - spanning at least eight radial orders - have been identified for KIC 10273246 and KIC 10920273, respectively. Two avoided crossings (l = 1 ridge) have been identified for KIC 10273246, whereas one avoided crossing plus another likely one have been identified for KIC 10920273. Good agreement is found between observed and predicted mode amplitudes for the F-type star KIC 10273246, based on a revised scaling relation. Estimates are given of the rotational periods, the parameters describing stellar granulation and the global asteroseismic parameters Δν and νmax.
Avian seasonal productivity is often modeled as a time-limited stochastic process. Many mathematical formulations have been proposed, including individual based models, continuous-time differential equations, and discrete Markov models. All such models typically include paramete...
Welch, Stephen M.; White, Jeffrey W.; Thorp, Kelly R.; Bello, Nora M.
2018-01-01
Ecophysiological crop models encode intra-species behaviors using parameters that are presumed to summarize genotypic properties of individual lines or cultivars. These genotype-specific parameters (GSP’s) can be interpreted as quantitative traits that can be mapped or otherwise analyzed, as are more conventional traits. The goal of this study was to investigate the estimation of parameters controlling maize anthesis date with the CERES-Maize model, based on 5,266 maize lines from 11 plantings at locations across the eastern United States. High performance computing was used to develop a database of 356 million simulated anthesis dates in response to four CERES-Maize model parameters. Although the resulting estimates showed high predictive value (R2 = 0.94), three issues presented serious challenges for use of GSP’s as traits. First (expressivity), the model was unable to express the observed data for 168 to 3,339 lines (depending on the combination of site-years), many of which ended up sharing the same parameter value irrespective of genetics. Second, for 2,254 lines, the model reproduced the data, but multiple parameter sets were equally effective (equifinality). Third, parameter values were highly dependent (p < 10^−6919) on the sets of environments used to estimate them (instability), calling into question the assumption that they represent fundamental genetic traits. The issues of expressivity, equifinality and instability must be addressed before the genetic mapping of GSP’s becomes a robust means to help solve the genotype-to-phenotype problem in crops. PMID:29672629
Ito, Tetsuya; Fukawa, Kazuo; Kamikawa, Mai; Nikaidou, Satoshi; Taniguchi, Masaaki; Arakawa, Aisaku; Tanaka, Genki; Mikawa, Satoshi; Furukawa, Tsutomu; Hirose, Kensuke
2018-01-01
Daily feed intake (DFI) is an important consideration for improving feed efficiency, but measurements using electronic feeder systems contain many missing and incorrect values. Therefore, we evaluated three methods for correcting missing DFI data (quadratic, orthogonal polynomial, and locally weighted (Loess) regression equations) and assessed the effects of these missing values on the genetic parameters and the estimated breeding values (EBV) for feeding traits. DFI records were obtained from 1622 Duroc pigs, comprising 902 individuals without missing DFI and 720 individuals with missing DFI. The Loess equation was the most suitable method for correcting the missing DFI values in 5-50% randomly deleted datasets among the three equations. Both variance components and heritability for the average DFI (ADFI) did not change because of the missing DFI proportion and Loess correction. In terms of rank correlation and information criteria, Loess correction improved the accuracy of EBV for ADFI compared to randomly deleted cases. These findings indicate that the Loess equation is useful for correcting missing DFI values for individual pigs and that the correction of missing DFI values could be effective for the estimation of breeding values and genetic improvement using EBV for feeding traits. © 2017 The Authors. Animal Science Journal published by John Wiley & Sons Australia, Ltd on behalf of Japanese Society of Animal Science.
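The quadratic correction, the simplest of the three equations compared above, can be sketched as a regression of intake on test day fitted to each animal's observed records and evaluated at the missing days. This is a generic illustration with made-up column names, not the authors' code; the Loess variant they preferred replaces the global quadratic with a locally weighted fit.

```python
import numpy as np

def impute_dfi_quadratic(days, intake):
    """Fill missing daily-feed-intake values (NaNs) for one animal by
    regressing observed intake on test day with a quadratic polynomial."""
    observed = ~np.isnan(intake)
    coeffs = np.polyfit(days[observed], intake[observed], deg=2)
    filled = intake.copy()
    filled[~observed] = np.polyval(coeffs, days[~observed])
    return filled
```

Imputation is done per individual, so each pig's own intake trajectory determines its corrected values before breeding-value estimation.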
Estimating crop net primary production using inventory data and MODIS-derived parameters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bandaru, Varaprasad; West, Tristram O.; Ricciuto, Daniel M.
2013-06-03
National estimates of spatially-resolved cropland net primary production (NPP) are needed for diagnostic and prognostic modeling of carbon sources, sinks, and net carbon flux. Cropland NPP estimates that correspond with existing cropland cover maps are needed to drive biogeochemical models at the local scale and over national and continental extents. Existing satellite-based NPP products tend to underestimate NPP on croplands. A new Agricultural Inventory-based Light Use Efficiency (AgI-LUE) framework was developed to estimate individual crop biophysical parameters for use in estimating crop-specific NPP. The method is documented here and evaluated for corn and soybean crops in Iowa and Illinois in years 2006 and 2007. The method includes a crop-specific enhanced vegetation index (EVI) from the Moderate Resolution Imaging Spectroradiometer (MODIS), shortwave radiation data estimated using the Mountain Climate Simulator (MTCLIM) algorithm, and crop-specific LUE per county. The combined aforementioned variables were used to generate spatially-resolved, crop-specific NPP that correspond to the Cropland Data Layer (CDL) land cover product. The modeling framework represented well the gradient of NPP across Iowa and Illinois, and also well represented the difference in NPP between years 2006 and 2007. Average corn and soybean NPP from AgI-LUE was 980 g C m-2 yr-1 and 420 g C m-2 yr-1, respectively. This was 2.4 and 1.1 times higher, respectively, for corn and soybean compared to the MOD17A3 NPP product. Estimated gross primary productivity (GPP) derived from AgI-LUE was in close agreement with eddy flux tower estimates. The combination of new inputs and improved datasets enabled the development of spatially explicit and reliable NPP estimates for individual crops over large regional extents.
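The light-use-efficiency core of such a framework reduces to a product of terms. The sketch below assumes the common simplifications that absorbed PAR is approximated by EVI times incident PAR and that PAR is about 48% of incoming shortwave radiation; the numbers and the exact form of AgI-LUE's fPAR term are illustrative, not taken from the paper.

```python
def crop_npp(evi, shortwave_mj_m2, lue_gc_per_mj):
    """LUE-framework sketch: NPP = LUE x APAR, with APAR approximated as
    EVI x incident PAR, and PAR taken as ~48% of shortwave radiation.
    Units: shortwave in MJ m-2, LUE in g C MJ-1, result in g C m-2."""
    par = 0.48 * shortwave_mj_m2
    apar = evi * par
    return lue_gc_per_mj * apar
```

Crop-specific LUE values per county are what let the inventory-based framework avoid the generic-biome LUE that causes satellite products to underestimate cropland NPP.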
Risk preferences impose a hidden distortion on measures of choice impulsivity
Lopez-Guzman, Silvia; Konova, Anna B.; Louie, Kenway; Glimcher, Paul W.
2018-01-01
Measuring temporal discounting through the use of intertemporal choice tasks is now the gold standard method for quantifying human choice impulsivity (impatience) in neuroscience, psychology, behavioral economics, public health and computational psychiatry. A recent area of growing interest is individual differences in discounting levels, as these may predispose to (or protect from) mental health disorders, addictive behaviors, and other diseases. At the same time, more and more studies have been dedicated to the quantification of individual attitudes towards risk, which have been measured in many clinical and non-clinical populations using closely related techniques. Economists have pointed to interactions between measurements of time preferences and risk preferences that may distort estimations of the discount rate. However, although becoming standard practice in economics, discount rates and risk preferences are rarely measured simultaneously in the same subjects in other fields, and the magnitude of the imposed distortion is unknown in the assessment of individual differences. Here, we show that standard models of temporal discounting (such as the hyperbolic discounting model widely present in the literature), which fail to account for risk attitudes in the estimation of discount rates, result in a large and systematic pattern of bias in estimated discounting parameters. This can lead to the spurious attribution of differences in impulsivity between individuals when in fact differences in risk attitudes account for observed behavioral differences. We advance a model which, when applied to standard choice tasks typically used in psychology and neuroscience, provides both a better fit to the data and successfully de-correlates risk and impulsivity parameters. This results in measures that are more accurate and thus of greater utility to the many fields interested in individual differences in impulsivity. PMID:29373590
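The joint-estimation idea can be sketched by augmenting the hyperbolic discounting model with a power-utility (risk-attitude) exponent and fitting both by maximum likelihood on binary choices. All parameter names, the softmax temperature, and the functional form below are a common textbook variant chosen for illustration, not necessarily the authors' exact model.

```python
import numpy as np
from scipy.optimize import minimize

def choice_prob(amt_now, amt_later, delay, k, alpha, temp=5.0):
    """P(choose delayed option) under hyperbolic discounting of a power
    (risk-attitude) utility: SV = amount**alpha / (1 + k * delay)."""
    sv_now = amt_now ** alpha
    sv_later = amt_later ** alpha / (1.0 + k * delay)
    return 1.0 / (1.0 + np.exp(-temp * (sv_later - sv_now)))

def fit(amt_now, amt_later, delay, chose_later):
    """Joint MLE of (k, alpha); fixing alpha = 1 (risk neutrality) is the
    mis-specification that biases the estimated discount rate k."""
    def nll(p):
        pr = choice_prob(amt_now, amt_later, delay, p[0], p[1])
        if not np.all(np.isfinite(pr)):
            return 1e9
        pr = np.clip(pr, 1e-9, 1 - 1e-9)
        return -np.sum(np.where(chose_later, np.log(pr), np.log(1 - pr)))
    return minimize(nll, x0=[0.05, 0.8], method="Nelder-Mead").x
```

On synthetic choices generated with a concave utility (alpha < 1), the joint fit recovers choice behavior that a risk-neutral hyperbolic model would instead absorb into a distorted k.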
Powell, L.A.; Conroy, M.J.; Hines, J.E.; Nichols, J.D.; Krementz, D.G.
2000-01-01
Biologists often estimate separate survival and movement rates from radio-telemetry and mark-recapture data from the same study population. We describe a method for combining these data types in a single model to obtain joint, potentially less biased estimates of survival and movement that use all available data. We furnish an example using wood thrushes (Hylocichla mustelina) captured at the Piedmont National Wildlife Refuge in central Georgia in 1996. The model structure allows estimation of survival and capture probabilities, as well as estimation of movements away from and into the study area. In addition, the model structure provides many possibilities for hypothesis testing. Using the combined model structure, we estimated that wood thrush weekly survival was 0.989 ± 0.007 (±SE). Survival rates of banded and radio-marked individuals were not different (alpha hat [S_radioed, S_banded] = log[S hat_radioed / S hat_banded] = 0.0239 ± 0.0435). Fidelity rates (weekly probability of remaining in a stratum) did not differ between geographic strata (psi hat = 0.911 ± 0.020; alpha hat [psi11, psi22] = 0.0161 ± 0.047), and recapture rates (p hat = 0.097 ± 0.016) of banded and radio-marked individuals were not different (alpha hat [p_radioed, p_banded] = 0.145 ± 0.655). Combining these data types in a common model resulted in more precise estimates of movement and recapture rates than separate estimation, but the ability to detect stratum- or mark-specific differences in parameters was weak. We conducted simulation trials to investigate the effects of varying study designs on parameter accuracy and statistical power to detect important differences. Parameter accuracy was high (relative bias [RBIAS] <2%) and confidence interval coverage close to nominal, except for survival estimates of banded birds for the 'off study area' stratum, which were negatively biased (RBIAS -7 to -15%) when sample sizes were small (5-10 banded or radioed animals 'released' per time interval).
To provide adequate data for useful inference from this model, study designs should seek a minimum of 25 animals of each marking type (banded or radio-marked) observed in each time period and geographic stratum.
Smooth individual level covariates adjustment in disease mapping.
Huque, Md Hamidul; Anderson, Craig; Walton, Richard; Woolford, Samuel; Ryan, Louise
2018-05-01
Spatial models for disease mapping should ideally account for covariates measured both at individual and area levels. The newly available "indiCAR" model fits the popular conditional autoregressive (CAR) model by accommodating both individual and group level covariates while adjusting for spatial correlation in the disease rates. This algorithm has been shown to be effective but assumes log-linear associations between individual level covariates and outcome. In many studies, the relationship between individual level covariates and the outcome may be non-log-linear, and methods for capturing such nonlinearity in spatial regression modeling are not well developed. In this paper, we propose a new algorithm, smooth-indiCAR, to fit an extension to the popular conditional autoregressive model that can accommodate both linear and nonlinear individual level covariate effects while adjusting for group level covariates and spatial correlation in the disease rates. In this formulation, the effect of a continuous individual level covariate is accommodated via penalized splines. We describe a two-step estimation procedure to obtain reliable estimates of individual and group level covariate effects, where individual and group level covariate effects are estimated separately. This distributed computing framework enhances its application in the Big Data domain with a large number of individual/group level covariates. We evaluate the performance of smooth-indiCAR through simulation. Our results indicate that the smooth-indiCAR method provides reliable estimates of all regression and random effect parameters. We illustrate our proposed methodology with an analysis of data on neutropenia admissions in New South Wales (NSW), Australia. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Occam's shadow: levels of analysis in evolutionary ecology - where to next?
Cooch, E.G.; Cam, E.; Link, W.A.
2002-01-01
Evolutionary ecology is the study of evolutionary processes, and the ecological conditions that influence them. A fundamental paradigm underlying the study of evolution is natural selection. Although there are a variety of operational definitions for natural selection in the literature, perhaps the most general one is that which characterizes selection as the process whereby heritable variation in fitness associated with variation in one or more phenotypic traits leads to intergenerational change in the frequency distribution of those traits. The past 20 years have witnessed a marked increase in the precision and reliability of our ability to estimate one or more components of fitness and characterize natural selection in wild populations, owing particularly to significant advances in methods for analysis of data from marked individuals. In this paper, we focus on several issues that we believe are important considerations for the application and development of these methods in the context of addressing questions in evolutionary ecology. First, our traditional approach to estimation often rests upon analysis of aggregates of individuals, which in the wild may reflect increasingly non-random (selected) samples with respect to the trait(s) of interest. In some cases, analysis at the aggregate level, rather than the individual level, may obscure important patterns. While there are a growing number of analytical tools available to estimate parameters at the individual level, and which can cope (to varying degrees) with progressive selection of the sample, the advent of new methods does not reduce the need to consider carefully the appropriate level of analysis in the first place. Estimation should be motivated a priori by strong theoretical analysis. 
Doing so provides clear guidance, in terms of both (i) assisting in the identification of realistic and meaningful models to include in the candidate model set, and (ii) providing the appropriate context under which the results are interpreted. Second, while it is true that selection (as defined) operates at the level of the individual, the selection gradient is often (if not generally) conditional on the abundance of the population. As such, it may be important to consider estimating transition rates conditional on both the parameter values of the other individuals in the population (or at least their distribution), and population abundance. This will undoubtedly pose a considerable challenge, for both single- and multi-strata applications. It will also require renewed consideration of the estimation of abundance, especially for open populations. Third, selection typically operates on dynamic, individually varying traits. Such estimation may require characterizing fitness in terms of individual plasticity in one or more state variables, constituting analysis of the norms of reaction of individuals to variable environments. This can be quite complex, especially for traits that are under facultative control. Recent work has indicated that the pattern of selection on such traits is conditional on the relative rates of movement among and frequency of spatially heterogeneous habitats, suggesting that analyses of the evolution of life histories in open populations can be misleading in some cases.
Deter, Russell L.; Lee, Wesley; Yeo, Lami; Romero, Roberto
2012-01-01
Objectives To characterize 2nd and 3rd trimester fetal growth using Individualized Growth Assessment in a large cohort of fetuses with normal growth outcomes. Methods A prospective longitudinal study of 119 pregnancies was carried out from 18 weeks, MA, to delivery. Measurements of eleven fetal growth parameters were obtained from 3D scans at 3–4 week intervals. Regression analyses were used to determine Start Points [SP] and Rossavik model [P = c·t^(k + s·t)] coefficients c, k, and s for each parameter in each fetus. Second trimester growth model specification functions were re-established. These functions were used to generate individual growth models and determine predicted s and s-residual [s = pred s + s-resid] values. Actual measurements were compared to predicted growth trajectories obtained from the growth models and Percent Deviations [% Dev = ((actual − predicted)/predicted) × 100] calculated. Age-specific reference standards for this statistic were defined using 2-level statistical modeling for the nine directly measured parameters and estimated weight. Results Rossavik models fit the data for all parameters very well [R2: 99%], with SP's and k values similar to those found in a much smaller cohort. The c values were strongly related to the 2nd trimester slope [R2: 97%], as was predicted s to estimated c [R2: 95%]. The latter was negative for skeletal parameters and positive for soft tissue parameters. The s-residuals were unrelated to estimated c's [R2: 0%], and had mean values of zero. Rossavik models predicted 3rd trimester growth with systematic errors close to 0% and random errors [95% range] of 5.7 – 10.9% and 20.0 – 24.3% for one and three dimensional parameters, respectively. Moderate changes in age-specific variability were seen in the 3rd trimester.
Conclusions IGA procedures for evaluating 2nd and 3rd trimester growth are now established based on a large cohort [4–6 fold larger than those used previously], thus permitting more reliable growth assessment with each fetus acting as its own control. New, more rigorously defined, age-specific standards for the evaluation of 3rd trimester growth deviations are now available for 10 anatomical parameters. Our results are also consistent with the predicted s and s-residual being representative of growth controllers operating through the insulin-like growth factor [IGF] axis. PMID:23962305
Forensic parameters of the Investigator DIPplex kit (Qiagen) in six Mexican populations.
Martínez-Cortés, G; García-Aceves, M; Favela-Mendoza, A F; Muñoz-Valle, J F; Velarde-Felix, J S; Rangel-Villalobos, H
2016-05-01
Allele frequencies and statistical parameters of forensic efficiency for 30 deletion-insertion polymorphisms (DIPs) were estimated in six Mexican populations. For this purpose, 421 unrelated individuals were analyzed with the Investigator DIPplex kit. Hardy-Weinberg and linkage equilibrium were demonstrated for this 30-plex system in all six populations. We estimated the combined power of discrimination (PD ≥ 99.999999%) and combined power of exclusion (PE ≥ 98.632705%) for this genetic system. A low but significant genetic structure was demonstrated among these six populations by pairwise comparisons and AMOVA (FST ≥ 0.7054; p ≤ 0.0007), which allows clustering of populations in agreement with geographical criteria: Northwest, Center, and Southeast.
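The combined forensic-efficiency statistics reported above follow directly from per-locus genotype frequencies. A minimal sketch of how a combined power of discrimination is built up from biallelic DIP loci (the allele frequencies below are illustrative assumptions, not the study's data, and Hardy-Weinberg proportions are assumed):

```python
# Sketch: combined power of discrimination (PD) for a multiplex of
# biallelic insertion/deletion markers. Illustrative frequencies only.
import numpy as np

def locus_pd(p):
    """PD for one biallelic locus with insertion-allele frequency p,
    assuming Hardy-Weinberg genotype frequencies."""
    q = 1.0 - p
    genotype_freqs = np.array([p * p, 2 * p * q, q * q])
    match_prob = np.sum(genotype_freqs ** 2)  # probability two random
    return 1.0 - match_prob                   # individuals match

freqs = np.full(30, 0.5)  # 30 DIP loci, assumed allele frequencies
combined_match = np.prod([1.0 - locus_pd(p) for p in freqs])
combined_pd = 1.0 - combined_match
```

Because per-locus match probabilities multiply across independent loci (which is why linkage equilibrium is checked first), even 30 moderately informative markers drive the combined PD extremely close to 1.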
Dynamical Analysis of an SEIT Epidemic Model with Application to Ebola Virus Transmission in Guinea.
Li, Zhiming; Teng, Zhidong; Feng, Xiaomei; Li, Yingke; Zhang, Huiguo
2015-01-01
In order to investigate the transmission mechanism of Ebola virus infection, we establish an SEIT (susceptible, exposed in the latent period, infectious, and treated/recovery) epidemic model. The basic reproduction number is defined. A mathematical analysis of the existence and stability of the disease-free equilibrium and endemic equilibrium is given. As an application of the model, we use the reported infection and death cases in Guinea to estimate the model parameters by the least squares method. With suitable parameter values, we obtain the estimated value of the basic reproduction number and analyze the sensitivity and uncertainty properties by partial rank correlation coefficients.
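The least-squares fitting step described above can be sketched as follows. This is a minimal illustration, not the authors' exact formulation: the ODE system, parameter values, and synthetic "case data" are assumptions, and only the transmission rate is fitted while the other rates are held fixed.

```python
# Hedged sketch: least-squares estimation of a transmission rate beta
# in a simple SEIT-type compartmental model fitted to case counts.
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import least_squares

def seit_rhs(y, t, beta, sigma, gamma):
    # S: susceptible, E: exposed (latent), I: infectious, T: treated/recovered
    S, E, I, T = y
    N = S + E + I + T
    dS = -beta * S * I / N
    dE = beta * S * I / N - sigma * E
    dI = sigma * E - gamma * I
    dT = gamma * I
    return [dS, dE, dI, dT]

def simulate_I(beta, sigma, gamma, y0, times):
    sol = odeint(seit_rhs, y0, times, args=(beta, sigma, gamma))
    return sol[:, 2]  # infectious compartment over time

# Synthetic "case data" generated from known parameters
times = np.linspace(0.0, 60.0, 30)
y0 = [9990.0, 5.0, 5.0, 0.0]
true_beta, sigma, gamma = 0.4, 0.2, 0.1
cases = simulate_I(true_beta, sigma, gamma, y0, times)

# Least-squares fit of beta (sigma and gamma held fixed for simplicity)
res = least_squares(
    lambda b: simulate_I(b[0], sigma, gamma, y0, times) - cases,
    x0=[0.2], bounds=(1e-3, 2.0))
beta_hat = res.x[0]
```

In a real application the residuals would be taken against reported case data, and the basic reproduction number would then be computed from the fitted rates (here it would be beta_hat/gamma for this simplified system).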
DuVal, Ashley; Gezan, Salvador A.; Mustiga, Guiliana; Stack, Conrad; Marelli, Jean-Philippe; Chaparro, José; Livingstone, Donald; Royaert, Stefan; Motamayor, Juan C.
2017-01-01
Breeding programs of cacao (Theobroma cacao L.) trees share the many challenges of breeding long-lived perennial crops, and genetic progress is further constrained by both the limited understanding of the inheritance of complex traits and the prevalence of technical issues, such as mislabeled individuals (off-types). To better understand the genetic architecture of cacao, in this study, 13 years of phenotypic data collected from four progeny trials in Bahia, Brazil were analyzed jointly in a multisite analysis. Three separate analyses (multisite, single site with and without off-types) were performed to estimate genetic parameters from statistical models fitted on nine important agronomic traits (yield, seed index, pod index, % healthy pods, % pods infected with witches' broom, % of pods with other losses, vegetative brooms, diameter, and tree height). Genetic parameters were estimated along with variance components and heritabilities from the multisite analysis, and a trial was fingerprinted with low-density SNP markers to determine the impact of off-types on estimations. Heritabilities ranged from 0.37 to 0.64 for yield and its components and from 0.03 to 0.16 for disease resistance traits. A weighted index was used to make selections for clonal evaluation, and breeding values were estimated for parental selection and estimation of genetic gain. The impact of off-types on breeding progress in cacao was assessed for the first time. Even when present at <5% of the total population, off-types altered selections by 48% and impacted heritability estimations for all nine of the traits analyzed, including a 41% difference in estimated heritability for yield. These results show that in a mixed model analysis, even a low level of pedigree error can significantly alter estimations of genetic parameters and selections in a breeding program. PMID:29250097
Aggregate and individual replication probability within an explicit model of the research process.
Miller, Jeff; Schwarz, Wolf
2011-09-01
We study a model of the research process in which the true effect size, the replication jitter due to changes in experimental procedure, and the statistical error of effect size measurement are all normally distributed random variables. Within this model, we analyze the probability of successfully replicating an initial experimental result by obtaining either a statistically significant result in the same direction or any effect in that direction. We analyze both the probability of successfully replicating a particular experimental effect (i.e., the individual replication probability) and the average probability of successful replication across different studies within some research context (i.e., the aggregate replication probability), and we identify the conditions under which the latter can be approximated using the formulas of Killeen (2005a, 2007). We show how both of these probabilities depend on parameters of the research context that would rarely be known in practice. In addition, we show that the statistical uncertainty associated with the size of an initial observed effect would often prevent accurate estimation of the desired individual replication probability even if these research context parameters were known exactly. We conclude that accurate estimates of replication probability are generally unattainable.
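The three normally distributed components of the model above (true effect size, replication jitter, and measurement error) make the aggregate replication probability easy to approximate by Monte Carlo. A minimal sketch, with illustrative parameter values that are assumptions rather than values from the paper:

```python
# Hedged sketch: aggregate probability that a replication produces an
# effect in the same direction as the initial result, under the
# three-level normal model (true effect, jitter, measurement error).
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
mu, tau = 0.3, 0.2      # mean and SD of true effect sizes in the context
jitter = 0.1            # SD of replication jitter between procedures
se = 0.15               # SD of measurement error per experiment

theta = rng.normal(mu, tau, n)              # true effect per study
d1 = theta + rng.normal(0.0, se, n)         # initial observed effect
theta2 = theta + rng.normal(0.0, jitter, n) # effect under replication
d2 = theta2 + rng.normal(0.0, se, n)        # replication observed effect

# Aggregate replication probability: same sign as the initial result
p_same_dir = np.mean(np.sign(d2) == np.sign(d1))
```

Conditioning the same simulation on a particular observed d1 (e.g., by binning) would give the individual replication probability, which, as the abstract notes, also depends on context parameters (mu, tau, jitter, se) that are rarely known in practice.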
Lee, Yu; Yu, Chanki; Lee, Sang Wook
2018-01-10
We present a sequential fitting-and-separating algorithm for surface reflectance components that separates individual dominant reflectance components and simultaneously estimates the corresponding bidirectional reflectance distribution function (BRDF) parameters from the separated reflectance values. We tackle the estimation of a Lafortune BRDF model, which combines a non-Lambertian diffuse reflection and multiple specular reflectance components, each with a different specular lobe. Our proposed method infers the appropriate number of BRDF lobes and their parameters by separating and estimating each of the reflectance components using an interval analysis-based branch-and-bound method in conjunction with iterative K-ordered scale estimation. The focus of this paper is the estimation of the Lafortune BRDF model; nevertheless, our proposed method can be applied to other analytical BRDF models such as the Cook-Torrance and Ward models. Experiments were carried out to validate the proposed method using isotropic materials from the Mitsubishi Electric Research Laboratories-Massachusetts Institute of Technology (MERL-MIT) BRDF database, and the results show that our method is superior to a conventional minimization algorithm.
Jacobson, Eiren K; Forney, Karin A; Barlow, Jay
2017-01-01
Passive acoustic monitoring is a promising approach for monitoring long-term trends in harbor porpoise (Phocoena phocoena) abundance. Before passive acoustic monitoring can be implemented to estimate harbor porpoise abundance, information about the detectability of harbor porpoise is needed to convert recorded numbers of echolocation clicks to harbor porpoise densities. In the present study, paired data from a grid of nine passive acoustic click detectors (C-PODs, Chelonia Ltd., United Kingdom) and three days of simultaneous aerial line-transect visual surveys were collected over a 370 km2 study area. The focus of the study was estimating the effective detection area of the passive acoustic sensors, which was defined as the product of the sound production rate of individual animals and the area within which those sounds are detected by the passive acoustic sensors. Visually estimated porpoise densities were used as informative priors in a Bayesian model to solve for the effective detection area for individual harbor porpoises. This model-based approach resulted in a posterior distribution of the effective detection area of individual harbor porpoises consistent with previously published values. This technique is a viable alternative for estimating the effective detection area of passive acoustic sensors when other experimental approaches are not feasible.
Edwards, Ryan W J; Celia, Michael A; Bandilla, Karl W; Doster, Florian; Kanno, Cynthia M
2015-08-04
Recent studies suggest the possibility of CO2 sequestration in depleted shale gas formations, motivated by large storage capacity estimates in these formations. Questions remain regarding the dynamic response and practicality of injection of large amounts of CO2 into shale gas wells. A two-component (CO2 and CH4) model of gas flow in a shale gas formation including adsorption effects provides the basis to investigate the dynamics of CO2 injection. History-matching of gas production data allows for formation parameter estimation. Application to three shale gas-producing regions shows that CO2 can only be injected at low rates into individual wells and that individual well capacity is relatively small, despite significant capacity variation between shale plays. The estimated total capacity of an average Marcellus Shale well in Pennsylvania is 0.5 million metric tonnes (Mt) of CO2, compared with 0.15 Mt in an average Barnett Shale well. Applying the individual well estimates to the total number of existing and permitted planned wells (as of March, 2015) in each play yields a current estimated capacity of 7200-9600 Mt in the Marcellus Shale in Pennsylvania and 2100-3100 Mt in the Barnett Shale.
Degeling, Koen; IJzerman, Maarten J; Koopman, Miriam; Koffijberg, Hendrik
2017-12-15
Parametric distributions based on individual patient data can be used to represent both stochastic and parameter uncertainty. Although general guidance is available on how parameter uncertainty should be accounted for in probabilistic sensitivity analysis, there is no comprehensive guidance on reflecting parameter uncertainty in the (correlated) parameters of distributions used to represent stochastic uncertainty in patient-level models. This study aims to provide this guidance by proposing appropriate methods and illustrating the impact of this uncertainty on modeling outcomes. Two approaches, 1) using non-parametric bootstrapping and 2) using multivariate Normal distributions, were applied in a simulation and case study. The approaches were compared based on point estimates and distributions of time-to-event and health economic outcomes. To assess the impact of sample size on the uncertainty in these outcomes, sample size was varied in the simulation study and subgroup analyses were performed for the case study. Accounting for parameter uncertainty in distributions that reflect stochastic uncertainty substantially increased the uncertainty surrounding health economic outcomes, illustrated by larger confidence ellipses surrounding the cost-effectiveness point estimates and different cost-effectiveness acceptability curves. Although both approaches performed similarly for larger sample sizes (i.e., n = 500), the second approach was more sensitive to extreme values for small sample sizes (i.e., n = 25), yielding infeasible modeling outcomes. Modelers should be aware that parameter uncertainty in distributions used to describe stochastic uncertainty needs to be reflected in probabilistic sensitivity analysis, as it could substantially impact the total amount of uncertainty surrounding health economic outcomes. If feasible, the bootstrap approach is recommended to account for this uncertainty.
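The recommended bootstrap approach can be sketched as follows: resample the individual patient times with replacement, refit the time-to-event distribution to each resample, and carry the resulting parameter sets into the probabilistic sensitivity analysis. The Weibull distribution, sample sizes, and parameter values below are illustrative assumptions, not taken from the study.

```python
# Hedged sketch of approach (1): non-parametric bootstrap of patient
# times-to-event, refitting a Weibull to each resample so parameter
# uncertainty is carried into the PSA alongside stochastic uncertainty.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic individual patient data (months to event)
patient_times = stats.weibull_min.rvs(c=1.5, scale=12.0, size=200,
                                      random_state=rng)

B = 200
boot_params = np.empty((B, 2))  # (shape, scale) per bootstrap resample
for b in range(B):
    resample = rng.choice(patient_times, size=patient_times.size,
                          replace=True)
    c_hat, _loc, scale_hat = stats.weibull_min.fit(resample, floc=0)
    boot_params[b] = (c_hat, scale_hat)

# In the PSA, one row of boot_params would be drawn per model iteration,
# so sampled event times reflect both sources of uncertainty.
shape_mean, scale_mean = boot_params.mean(axis=0)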
An Admittance Survey of Large Volcanoes on Venus: Implications for Volcano Growth
NASA Technical Reports Server (NTRS)
Brian, A. W.; Smrekar, S. E.; Stofan, E. R.
2004-01-01
Estimates of the thickness of the venusian crust and elastic lithosphere are important in determining the rheological and thermal properties of Venus. These estimates offer insights into what conditions are needed for certain features, such as large volcanoes and coronae, to form. Lithospheric properties for much of the large volcano population on Venus are not well known. Previous studies of elastic thickness (Te) have concentrated on individual or small groups of edifices, or have used volcano models and fixed values of Te to match observations of volcano morphologies. In addition, previous studies use different methods to estimate lithospheric parameters, making it difficult to compare their results. Following recent global studies of the admittance signatures exhibited by the venusian corona population, we performed a similar survey of large volcanoes in an effort to determine the range of lithospheric parameters shown by these features. This survey of the entire large volcano population used the same method throughout so that all estimates could be directly compared. By analysing a large number of edifices and comparing our results to observations of their morphology and models of volcano formation, we can help determine the controlling parameters that govern volcano growth on Venus.
Abad-Franch, Fernando; Ferraz, Gonçalo; Campos, Ciro; Palomeque, Francisco S.; Grijalva, Mario J.; Aguilar, H. Marcelo; Miles, Michael A.
2010-01-01
Background Failure to detect a disease agent or vector where it actually occurs constitutes a serious drawback in epidemiology. In the pervasive situation where no sampling technique is perfect, the explicit analytical treatment of detection failure becomes a key step in the estimation of epidemiological parameters. We illustrate this approach with a study of Attalea palm tree infestation by Rhodnius spp. (Triatominae), the most important vectors of Chagas disease (CD) in northern South America. Methodology/Principal Findings The probability of detecting triatomines in infested palms is estimated by repeatedly sampling each palm. This knowledge is used to derive an unbiased estimate of the biologically relevant probability of palm infestation. We combine maximum-likelihood analysis and information-theoretic model selection to test the relationships between environmental covariates and infestation of 298 Amazonian palm trees over three spatial scales: region within Amazonia, landscape, and individual palm. Palm infestation estimates are high (40–60%) across regions, and well above the observed infestation rate (24%). Detection probability is higher (∼0.55 on average) in the richest-soil region than elsewhere (∼0.08). Infestation estimates are similar in forest and rural areas, but lower in urban landscapes. Finally, individual palm covariates (accumulated organic matter and stem height) explain most of infestation rate variation. Conclusions/Significance Individual palm attributes appear as key drivers of infestation, suggesting that CD surveillance must incorporate local-scale knowledge and that peridomestic palm tree management might help lower transmission risk. Vector populations are probably denser in rich-soil sub-regions, where CD prevalence tends to be higher; this suggests a target for research on broad-scale risk mapping. 
Landscape-scale effects indicate that palm triatomine populations can endure deforestation in rural areas, but become rarer in heavily disturbed urban settings. Our methodological approach has wide application in infectious disease research; by improving eco-epidemiological parameter estimation, it can also significantly strengthen vector surveillance-control strategies. PMID:20209149
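The core estimation idea in the palm-infestation study above — repeated sampling of each palm so that detection failure can be separated from true absence — can be sketched as a simple joint maximum-likelihood problem. All numbers below are illustrative assumptions (only the 298-palm count comes from the abstract), and the real analysis additionally models covariates across spatial scales.

```python
# Hedged sketch: with K repeated samples per palm, infestation
# probability psi and detection probability p are jointly identifiable,
# because an all-negative history can arise either from a truly
# uninfested palm or from K consecutive detection failures.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
n_palms, K = 298, 3
psi_true, p_true = 0.5, 0.4
infested = rng.random(n_palms) < psi_true
detections = rng.binomial(K, p_true * infested)  # detections per palm

def neg_log_lik(params):
    psi, p = params
    ll = 0.0
    for d in detections:
        if d == 0:
            # never detected: uninfested, or infested but missed K times
            ll += np.log((1 - psi) + psi * (1 - p) ** K)
        else:
            # detected d times out of K (binomial coefficient dropped,
            # as it is constant in the parameters)
            ll += np.log(psi) + d * np.log(p) + (K - d) * np.log(1 - p)
    return -ll

fit = minimize(neg_log_lik, x0=[0.3, 0.3], bounds=[(0.01, 0.99)] * 2)
psi_hat, p_hat = fit.x
```

This is why the estimated infestation rate (40-60%) can sit well above the raw observed rate (24%): the naive rate confounds true absence with detection failure.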
Trunk density profile estimates from dual X-ray absorptiometry.
Wicke, Jason; Dumas, Geneviève A; Costigan, Patrick A
2008-01-01
Accurate body segment parameters are necessary to estimate joint loads when using biomechanical models. Geometric methods can provide individualized data for these models but the accuracy of the geometric methods depends on accurate segment density estimates. The trunk, which is important in many biomechanical models, has the largest variability in density along its length. Therefore, the objectives of this study were to: (1) develop a new method for modeling trunk density profiles based on dual X-ray absorptiometry (DXA) and (2) develop a trunk density function for college-aged females and males that can be used in geometric methods. To this end, the density profiles of 25 females and 24 males were determined by combining the measurements from a photogrammetric method and DXA readings. A discrete Fourier transformation was then used to develop the density functions for each sex. The individual density and average density profiles compare well with the literature. There were distinct differences between the profiles of two of the participants (one female and one male), and the average for their sex. It is believed that the variations in these two participants' density profiles were a result of the amount and distribution of fat they possessed. Further studies are needed to support this possibility. The new density functions eliminate the uniform density assumption associated with some geometric models thus providing more accurate trunk segment parameter estimates. In turn, more accurate moments and forces can be estimated for the kinetic analyses of certain human movements.
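The discrete Fourier transformation step mentioned above amounts to representing the measured density profile as a truncated Fourier series along the trunk's length. A minimal sketch, in which the sample profile and the number of retained harmonics are illustrative assumptions:

```python
# Hedged sketch: smoothing a trunk density profile into a low-order
# Fourier series, replacing the uniform-density assumption of some
# geometric models with a smooth position-dependent function.
import numpy as np

z = np.arange(50) / 50.0  # normalized position along the trunk (periodic grid)
# Synthetic density profile (g/cm^3): mean plus low-order variation
density = 1.05 - 0.08 * np.cos(2 * np.pi * z) + 0.03 * np.sin(4 * np.pi * z)

coeffs = np.fft.rfft(density)
n_harmonics = 3
coeffs[n_harmonics + 1:] = 0.0   # keep the mean and first few harmonics
density_model = np.fft.irfft(coeffs, n=z.size)

max_err = np.max(np.abs(density_model - density))
```

Because the synthetic profile here contains only low-order harmonics, the truncated series reproduces it essentially exactly; on real DXA-derived profiles the truncation instead acts as a smoother, and the retained coefficients define the sex-specific density function.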
Cotten, Cameron; Reed, Jennifer L
2013-01-30
Constraint-based modeling uses mass balances, flux capacity, and reaction directionality constraints to predict fluxes through metabolism. Although transcriptional regulation and thermodynamic constraints have been integrated into constraint-based modeling, kinetic rate laws have not been extensively used. In this study, an in vivo kinetic parameter estimation problem was formulated and solved using multi-omic data sets for Escherichia coli. To narrow the confidence intervals for kinetic parameters, a series of kinetic model simplifications were made, resulting in fewer kinetic parameters than the full kinetic model. These new parameter values are able to account for flux and concentration data from 20 different experimental conditions used in our training dataset. Concentration estimates from the simplified kinetic model were within one standard deviation for 92.7% of the 790 experimental measurements in the training set. Gibbs free energy changes of reaction were calculated to identify reactions that were often operating close to or far from equilibrium. In addition, enzymes whose activities were positively or negatively influenced by metabolite concentrations were also identified. The kinetic model was then used to calculate the maximum and minimum possible flux values for individual reactions from independent metabolite and enzyme concentration data that were not used to estimate parameter values. Incorporating these kinetically-derived flux limits into the constraint-based metabolic model improved predictions for uptake and secretion rates and intracellular fluxes in constraint-based models of central metabolism. This study has produced a method for in vivo kinetic parameter estimation and identified strategies and outcomes of kinetic model simplification. 
We also have illustrated how kinetic constraints can be used to improve constraint-based model predictions for intracellular fluxes and biomass yield and identify potential metabolic limitations through the integrated analysis of multi-omics datasets.
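One building block of the workflow above — estimating kinetic parameters from flux and concentration data, then using the fitted rate law to bound fluxes in a constraint-based model — can be sketched with a single reaction. The Michaelis-Menten form, the data, and the parameter values are illustrative assumptions; the study's model spans many reactions and multi-omic datasets.

```python
# Hedged sketch: fit a Michaelis-Menten rate law to paired
# concentration/flux measurements, then derive a kinetic flux bound
# for use as a constraint in a constraint-based metabolic model.
import numpy as np
from scipy.optimize import curve_fit

def mm_rate(s, vmax, km):
    # Michaelis-Menten flux as a function of substrate concentration
    return vmax * s / (km + s)

s_obs = np.array([0.1, 0.25, 0.5, 1.0, 2.0, 5.0, 10.0])  # mM, assumed
v_obs = mm_rate(s_obs, vmax=8.0, km=0.8)                 # synthetic fluxes

(vmax_hat, km_hat), _cov = curve_fit(mm_rate, s_obs, v_obs, p0=[1.0, 1.0])

# Kinetically derived upper bound: at an independently measured
# concentration, the flux cannot exceed the fitted rate-law value.
flux_upper = mm_rate(2.0, vmax_hat, km_hat)
```

Feeding bounds like flux_upper into the stoichiometric model is what tightens the feasible flux space and improves the uptake, secretion, and intracellular flux predictions described above.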
Derieppe, Marc; de Senneville, Baudouin Denis; Kuijf, Hugo; Moonen, Chrit; Bos, Clemens
2014-10-01
Previously, we demonstrated the feasibility of monitoring ultrasound-mediated uptake of a cell-impermeable model drug in real time with fibered confocal fluorescence microscopy. Here, we present a complete post-processing methodology, which corrects for cell displacements, to improve the accuracy of pharmacokinetic parameter estimation. Nucleus detection was performed based on the radial symmetry transform algorithm. Cell tracking used an iterative closest point approach. Pharmacokinetic parameters were calculated by fitting a two-compartment model to the time-intensity curves of individual cells. Cells were tracked successfully, improving time-intensity curve accuracy and pharmacokinetic parameter estimation. With tracking, 93% of the 370 nuclei showed a fluorescence signal variation that was well described by a two-compartment model. In addition, parameter distributions were narrower, thus increasing precision. Dedicated image analysis was implemented and enabled studying ultrasound-mediated model drug uptake kinetics in hundreds of cells per experiment, using fiber-based confocal fluorescence microscopy.
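The per-cell fitting step can be sketched with a mono-exponential uptake curve, the closed-form solution of a simple two-compartment exchange when the extracellular level is approximately constant. The functional form, time base, and parameter values are illustrative assumptions, not the authors' exact model.

```python
# Hedged sketch: fitting a simple two-compartment uptake curve to one
# tracked cell's time-intensity data after motion correction.
import numpy as np
from scipy.optimize import curve_fit

def uptake(t, f_inf, k):
    # intracellular fluorescence rising toward the extracellular level
    return f_inf * (1.0 - np.exp(-k * t))

t = np.linspace(0.0, 120.0, 40)          # seconds after ultrasound, assumed
rng = np.random.default_rng(3)
signal = uptake(t, 100.0, 0.05) + rng.normal(0.0, 2.0, t.size)  # synthetic

(f_inf_hat, k_hat), _cov = curve_fit(uptake, t, signal, p0=[50.0, 0.01])
```

Repeating this fit for every tracked nucleus yields per-cell (f_inf, k) distributions; the tracking matters because uncorrected cell displacement corrupts the time-intensity curve and hence both parameters.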
Automatic user customization for improving the performance of a self-paced brain interface system.
Fatourechi, Mehrdad; Bashashati, Ali; Birch, Gary E; Ward, Rabab K
2006-12-01
Customizing the parameter values of brain interface (BI) systems by a human expert has the advantage of being fast and computationally efficient. However, as the number of users and EEG channels grows, this process becomes increasingly time consuming and exhausting. Manual customization also introduces inaccuracies in the estimation of the parameter values. In this paper, the performance of a self-paced BI system whose design parameter values were automatically user customized using a genetic algorithm (GA) is studied. The GA automatically estimates the shapes of movement-related potentials (MRPs), whose features are then extracted to drive the BI. Offline analysis of the data of eight subjects revealed that automatic user customization improved the true positive (TP) rate of the system by an average of 6.68% over that whose customization was carried out by a human expert, i.e., by visually inspecting the MRP templates. On average, the best improvement in the TP rate (an average of 9.82%) was achieved for four individuals with spinal cord injury. In this case, the visual estimation of the parameter values of the MRP templates was very difficult because of the highly noisy nature of the EEG signals. For four able-bodied subjects, for whom the MRP templates were less noisy, the automatic user customization led to an average improvement of 3.58% in the TP rate. The results also show that the inter-subject variability of the TP rate is reduced compared to the case when user customization is carried out by a human expert. These findings provide some primary evidence that automatic user customization leads to beneficial results in the design of a self-paced BI for individuals with spinal cord injury.
Estimating detection and density of the Andean cat in the high Andes
Reppucci, J.; Gardner, B.; Lucherini, M.
2011-01-01
The Andean cat (Leopardus jacobita) is one of the most endangered, yet least known, felids. Although the Andean cat is considered at risk of extinction, rigorous quantitative population studies are lacking. Because physical observations of the Andean cat are difficult to make in the wild, we used a camera-trapping array to photo-capture individuals. The survey was conducted in northwestern Argentina at an elevation of approximately 4,200 m during October-December 2006 and April-June 2007. In each year we deployed 22 pairs of camera traps, which were strategically placed. To estimate detection probability and density we applied models for spatial capture-recapture using a Bayesian framework. Estimated densities were 0.07 and 0.12 individual/km2 for 2006 and 2007, respectively. Mean baseline detection probability was estimated at 0.07. By comparison, densities of the Pampas cat (Leopardus colocolo), another poorly known felid that shares its habitat with the Andean cat, were estimated at 0.74-0.79 individual/km2 in the same study area for 2006 and 2007, and its detection probability was estimated at 0.02. Despite having greater detectability, the Andean cat is rarer in the study region than the Pampas cat. Properly accounting for the detection probability is important in making reliable estimates of density, a key parameter in conservation and management decisions for any species. © 2011 American Society of Mammalogists.
Methaneethorn, Janthima; Panomvana, Duangchit; Vachirayonstien, Thaveechai
2017-09-26
Therapeutic drug monitoring is essential for both phenytoin and phenobarbital therapy given their narrow therapeutic indexes. Nevertheless, the measurement of either phenytoin or phenobarbital concentrations might not be available in some rural hospitals. Information assisting individualized phenytoin and phenobarbital combination therapy is important. This study's objective was to determine the relationship between the maximum rate of metabolism of phenytoin (Vmax) and phenobarbital clearance (CLPB), which can serve as a guide to individualized drug therapy. Data on phenytoin and phenobarbital concentrations of 19 epileptic patients concurrently receiving both drugs were obtained from medical records. Phenytoin and phenobarbital pharmacokinetic parameters were studied at steady-state conditions. The relationship between the elimination parameters of both drugs was determined using simple linear regression. A high correlation coefficient between Vmax and CLPB was found [r=0.744; p<0.001 for Vmax (mg/kg/day) vs. CLPB (L/kg/day)]. Such a relatively strong linear relationship between the elimination parameters of both drugs indicates that Vmax might be predicted from CLPB and vice versa. Regression equations were established for estimating Vmax from CLPB, and vice versa in patients treated with combination of phenytoin and phenobarbital. These proposed equations can be of use in aiding individualized drug therapy.
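The regression step is straightforward; the sketch below fits Vmax = a·CLPB + b on invented patient data (the paper's 19-patient values are not reproduced here) and uses the fitted line to predict Vmax from a new CLPB measurement.

```python
import numpy as np

# Hypothetical paired elimination parameters for patients on combination
# therapy: phenytoin Vmax (mg/kg/day) and phenobarbital clearance CLPB
# (L/kg/day). Values are invented for illustration only.
vmax = np.array([5.1, 6.0, 7.2, 5.8, 6.9, 8.1, 4.9, 7.5])
clpb = np.array([0.10, 0.12, 0.15, 0.11, 0.14, 0.17, 0.09, 0.16])

# Simple linear regression Vmax = slope * CLPB + intercept
slope, intercept = np.polyfit(clpb, vmax, 1)
r = np.corrcoef(clpb, vmax)[0, 1]

# The fitted line can then predict Vmax when only CLPB is measured
predicted_vmax = slope * 0.13 + intercept
print(round(r, 2), round(predicted_vmax, 1))
```

With real data, the same two lines of fitting would yield the regression equations the authors propose for individualizing therapy.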
Basu, Anirban
2014-01-01
This paper builds on the methods of local instrumental variables developed by Heckman and Vytlacil (1999, 2001, 2005) to estimate person-centered treatment (PeT) effects that are conditioned on the person’s observed characteristics and averaged over the potential conditional distribution of unobserved characteristics that lead them to their observed treatment choices. PeT effects are more individualized than conditional treatment effects from a randomized setting with the same observed characteristics. PeT effects can be easily aggregated to construct any of the mean treatment effect parameters and, more importantly, are well suited to comprehend individual-level treatment effect heterogeneity. The paper presents the theory behind PeT effects, and applies it to study the variation in individual-level comparative effects of prostate cancer treatments on overall survival and costs. PMID:25620844
Multimedia data from two probability-based exposure studies were investigated in terms of how missing data and measurement-error imprecision affected estimation of population parameters and associations. Missing data resulted mainly from individuals' refusing to participate in c...
A prototype national cattle evaluation for feed intake and efficiency of Angus cattle
USDA-ARS's Scientific Manuscript database
Recent development of technologies for measuring individual feed intake has made possible the collection of data suitable for breed-wide genetic evaluation. Goals of this research were to estimate genetic parameters for components of feed efficiency and develop a prototype system for conducting a ge...
Individual tree growth models for natural even-aged shortleaf pine
Chakra B. Budhathoki; Thomas B. Lynch; James M. Guldin
2006-01-01
Shortleaf pine (Pinus echinata Mill.) measurements were available from permanent plots established in even-aged stands of the Ouachita Mountains for studying growth. Annual basal area growth was modeled with a least-squares nonlinear regression method utilizing three measurements. The analysis showed that the parameter estimates were in agreement...
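As a hedged illustration of fitting a tree growth model, the sketch below fits a power-form relation growth = a·dbh^b to invented measurements by log-linear least squares; the paper's actual model form and its nonlinear least-squares procedure differ.

```python
import numpy as np

# Invented tree measurements: diameter at breast height (cm) and annual
# basal-area growth (arbitrary units). Not the Ouachita Mountains data.
dbh = np.array([10.0, 15.0, 20.0, 25.0, 30.0, 35.0])
growth = np.array([0.8, 1.1, 1.35, 1.55, 1.75, 1.9])

# Linearize growth = a * dbh**b with logs, then fit by least squares
b, log_a = np.polyfit(np.log(dbh), np.log(growth), 1)
a = np.exp(log_a)
print(round(a, 3), round(b, 3))
```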
Cuff-less PPG based continuous blood pressure monitoring: a smartphone based approach.
Gaurav, Aman; Maheedhar, Maram; Tiwari, Vijay N; Narayanan, Rangavittal
2016-08-01
Cuff-less estimation of systolic (SBP) and diastolic (DBP) blood pressure is an efficient approach for non-invasive and continuous monitoring of an individual's vitals. Although pulse transit time (PTT) based approaches have been successful in estimating systolic and diastolic blood pressures to a reasonable degree of accuracy, there is still scope for improvement. Moreover, the PTT approach requires data from sensors placed at two different locations, along with individual calibration of physiological parameters, to derive correct estimates of systolic and diastolic blood pressure (BP), and hence is not suitable for smartphone deployment. Heart rate variability (HRV) is a widely used non-invasive parameter for assessing the cardiovascular autonomic nervous system and is known to be indirectly associated with SBP and DBP. In this work, we propose a novel method to extract a comprehensive set of features by combining PPG signal based and HRV related features using a single PPG sensor. Further, these features are fed into a DBP feedback based combinatorial neural network model to arrive at a common weighted average output of DBP and subsequently SBP. Our results show that using this approach, an accuracy of ±6.8 mmHg for SBP and ±4.7 mmHg for DBP is achievable on 1,750,000 pulses extracted from a public database (comprising 3000 people). Since most smartphones are now equipped with a PPG sensor, mobile based cuff-less BP estimation will enable users to monitor their BP as a vital parameter on demand. This will open new avenues towards the development of pervasive and continuous BP monitoring systems, leading to early detection and prevention of cardiovascular diseases.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blaut, Arkadiusz; Babak, Stanislav; Krolak, Andrzej
We present data analysis methods used in the detection and estimation of parameters of gravitational-wave signals from the white dwarf binaries in the mock LISA data challenge. Our main focus is on the analysis of challenge 3.1, where the gravitational-wave signals from more than 6×10^7 Galactic binaries were added to the simulated Gaussian instrumental noise. The majority of the signals at low frequencies are not resolved individually. The confusion between the signals is strongly reduced at frequencies above 5 mHz. Our basic data analysis procedure is the maximum likelihood detection method. We filter the data through the template bank at the first step of the search, then we refine parameters using the Nelder-Mead algorithm, we remove the strongest signal found and we repeat the procedure. We reliably detect, and accurately estimate the parameters of, more than ten thousand signals from white dwarf binaries.
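The detect-refine-subtract loop can be illustrated with a toy problem: sinusoids in Gaussian noise stand in for the binary signals, an FFT peak search stands in for the template bank, and a per-frequency least-squares fit of amplitude and phase stands in for the Nelder-Mead refinement. Everything here is a simplification of the actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(2)
fs, n = 100.0, 2000
t = np.arange(n) / fs
# Two "binaries": sinusoids at 7 Hz and 19 Hz buried in noise
signal = (2.0 * np.sin(2 * np.pi * 7.0 * t)
          + 1.0 * np.sin(2 * np.pi * 19.0 * t + 0.5))
data = signal + rng.normal(0, 0.5, n)

def find_strongest(x):
    """Locate the strongest sinusoid via the periodogram, then fit its
    amplitude/phase by least squares at that frequency."""
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(n, 1 / fs)
    k = np.argmax(spec[1:]) + 1          # skip the DC bin
    f = freqs[k]
    basis = np.column_stack([np.sin(2 * np.pi * f * t),
                             np.cos(2 * np.pi * f * t)])
    coef, *_ = np.linalg.lstsq(basis, x, rcond=None)
    return f, basis @ coef

# Detect, subtract the strongest signal, repeat
found = []
residual = data.copy()
for _ in range(2):
    f, fit = find_strongest(residual)
    found.append(f)
    residual = residual - fit
print(sorted(round(float(f), 1) for f in found))
```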
Cella, Matteo; Bishara, Anthony J.; Medin, Evelina; Swan, Sarah; Reeder, Clare; Wykes, Til
2014-01-01
Objective: Converging research suggests that individuals with schizophrenia show a marked impairment in reinforcement learning, particularly in tasks requiring flexibility and adaptation. The problem has been associated with dopamine reward systems. This study explores, for the first time, the characteristics of this impairment and how it is affected by a behavioral intervention—cognitive remediation. Method: Using computational modelling, 3 reinforcement learning parameters based on the Wisconsin Card Sorting Test (WCST) trial-by-trial performance were estimated: R (reward sensitivity), P (punishment sensitivity), and D (choice consistency). In Study 1 the parameters were compared between a group of individuals with schizophrenia (n = 100) and a healthy control group (n = 50). In Study 2 the effect of cognitive remediation therapy (CRT) on these parameters was assessed in 2 groups of individuals with schizophrenia, one receiving CRT (n = 37) and the other receiving treatment as usual (TAU, n = 34). Results: In Study 1 individuals with schizophrenia showed impairment in the R and P parameters compared with healthy controls. Study 2 demonstrated that sensitivity to negative feedback (P) and reward (R) improved in the CRT group after therapy compared with the TAU group. R and P parameter change correlated with WCST outputs. Improvements in R and P after CRT were associated with working memory gains and reduction of negative symptoms, respectively. Conclusion: Schizophrenia reinforcement learning difficulties negatively influence performance in shift learning tasks. CRT can improve sensitivity to reward and punishment. Identifying parameters that show change may be useful in experimental medicine studies to identify cognitive domains susceptible to improvement. PMID:24214932
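A sketch of the style of trial-by-trial model involved, with separate reward (R) and punishment (P) sensitivities and a choice-consistency parameter (D). The update rules and parameter values below are illustrative assumptions, not the paper's fitted WCST model.

```python
import numpy as np

rng = np.random.default_rng(6)
R, P, D = 0.3, 0.2, 3.0        # reward/punishment sensitivity, consistency

w = np.ones(3) / 3.0           # attention weights: colour / shape / number
correct_dim = 0                # the currently rewarded sorting dimension

n_correct = 0
for trial in range(200):
    probs = w ** D / np.sum(w ** D)         # consistency sharpens choice
    choice = rng.choice(3, p=probs)
    if choice == correct_dim:
        n_correct += 1
        w[choice] += R * (1.0 - w[choice])  # strengthen rewarded dimension
    else:
        w[choice] -= P * w[choice]          # weaken punished dimension
    w = w / w.sum()
print(n_correct)
```

Fitting such a model to a patient's trial sequence (rather than simulating it, as here) is what yields per-individual estimates of R, P, and D.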
NASA Astrophysics Data System (ADS)
Ito, Shin-Ichi; Mitsukura, Yasue; Nakamura Miyamura, Hiroko; Saito, Takafumi; Fukumi, Minoru
EEG signals are characterized by unique, individual characteristics, yet little research has taken these individual characteristics into account when analyzing EEG signals. The EEG often has frequency components that describe most of its significant characteristics, and these frequency components differ in importance; we consider this difference in importance to reflect the individual characteristics. In this paper, we propose a new method for extracting an EEG characteristic vector using a latency structure model with individual characteristics (LSMIC). The LSMIC is a latency structure model, based on the normal distribution, that treats personal error as the individual characteristic. Real-coded genetic algorithms (RGA) are used to specify the personal error, which is an unknown parameter. We also propose an objective estimation method that plots the EEG characteristic vector in a visualization space. Finally, the performance of the proposed method is evaluated using a realistic simulation and applied to real EEG data. The experimental results show the effectiveness of the proposed method.
Pedigrees or markers: Which are better in estimating relatedness and inbreeding coefficient?
Wang, Jinliang
2016-02-01
Individual inbreeding coefficient (F) and pairwise relatedness (r) are fundamental parameters in population genetics and have important applications in diverse fields such as human medicine, forensics, plant and animal breeding, conservation and evolutionary biology. Traditionally, both parameters are calculated from pedigrees, but are now increasingly estimated from genetic marker data. Conceptually, a pedigree gives the expected F and r values, FP and rP, with the expectations being taken (hypothetically) over an infinite number of individuals with the same pedigree. In contrast, markers give the realised (actual) F and r values at the particular marker loci of the particular individuals, FM and rM. Both pedigree (FP, rP) and marker (FM, rM) estimates can be used as inferences of genomic inbreeding coefficients FG and genomic relatedness rG, which are the underlying quantities relevant to most applications (such as estimating inbreeding depression and heritability) of F and r. In the pre-genomic era, it was widely accepted that pedigrees are much better than markers in delineating FG and rG, and markers should better be used to validate, amend and construct pedigrees rather than to replace them. Is this still true in the genomic era when genome-wide dense SNPs are available? In this simulation study, I showed that genomic markers can yield much better estimates of FG and rG than pedigrees when they are numerous (say, 10^4 SNPs) under realistic situations (e.g. genome and population sizes). Pedigree estimates are especially poor for species with a small genome, where FG and rG are determined to a large extent by Mendelian segregations and may thus deviate substantially from their expectations (FP and rP). Simulations also confirmed that FM, when estimated from many SNPs, can be much more powerful than FP for detecting inbreeding depression in viability.
However, I argue that pedigrees cannot be replaced completely by genomic SNPs, because the former allows for the calculation of more complicated IBD coefficients (involving more than 2 individuals, more than one locus, and more than 2 genes at a locus) for which the latter may have reduced capacity or limited power, and because the former has social and other significance for remote relationships which have little genetic significance and cannot be inferred reliably from markers. Copyright © 2015 Elsevier Inc. All rights reserved.
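A minimal illustration of a marker-based inbreeding estimate: compare an individual's observed heterozygosity across simulated SNPs with the Hardy-Weinberg expectation given allele frequencies. This is one simple FM-style estimator; the study evaluates more sophisticated ones, and the simulation here is only a sketch.

```python
import numpy as np

rng = np.random.default_rng(3)
n_snps = 10_000
p = rng.uniform(0.1, 0.9, n_snps)   # assumed known allele frequencies

def simulate_genotypes(p, f):
    """Draw one individual's genotypes (0/1/2 reference-allele copies)
    with inbreeding coefficient f: with probability f the two alleles
    are identical by descent."""
    ibd = rng.random(p.size) < f
    a1 = rng.random(p.size) < p
    a2 = np.where(ibd, a1, rng.random(p.size) < p)
    return a1.astype(int) + a2.astype(int)

def estimate_f(geno, p):
    """F = 1 - (observed heterozygosity) / (expected heterozygosity)."""
    het_obs = np.mean(geno == 1)
    het_exp = np.mean(2 * p * (1 - p))
    return 1.0 - het_obs / het_exp

geno = simulate_genotypes(p, f=0.25)
print(round(estimate_f(geno, p), 2))
```

With 10^4 SNPs the sampling error of this estimator is small, which is consistent with the abstract's point that dense markers capture realized inbreeding well.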
Estimation of accuracy of earth-rotation parameters in different frequency bands
NASA Astrophysics Data System (ADS)
Vondrak, J.
1986-11-01
The accuracies of earth-rotation parameters as determined by five different observational techniques now available (i.e., optical astrometry /OA/, Doppler tracking of satellites /DTS/, satellite laser ranging /SLR/, very-long-baseline interferometry /VLBI/ and lunar laser ranging /LLR/) are estimated. The differences between the individual techniques in all possible combinations, separated by appropriate filters into three frequency bands, were used to estimate the accuracies of the techniques for periods from 0 to 200 days, from 200 to 1000 days and longer than 1000 days. It is shown that for polar motion the most accurate results are obtained with VLBI and SLR, especially in the short-period region; OA and DTS are less accurate, but with longer periods the differences in accuracy are less pronounced. The accuracies of UT1-UTC as determined by OA, VLBI and LLR are practically equivalent, the differences being less than 40 percent.
NASA Technical Reports Server (NTRS)
Choi, Sung H.; Salem, J. A.; Nemeth, N. N.
1998-01-01
High-temperature slow-crack-growth behaviour of hot-pressed silicon carbide was determined using both constant-stress-rate ("dynamic fatigue") and constant-stress ("static fatigue") testing in flexure at 1300 C in air. Slow crack growth was found to be a governing mechanism associated with failure of the material. Four estimation methods (the individual data, Weibull median, arithmetic mean, and median deviation methods) were used to determine the slow-crack-growth parameters. The four estimation methods were in good agreement for the constant-stress-rate testing, with a small variation in the slow-crack-growth parameter, n, ranging from 28 to 36. By contrast, the variation in n between the four estimation methods was significant in the constant-stress testing, with a somewhat wider range of n = 16 to 32.
Thurman, Steven M.; Davey, Pinakin Gunvant; McCray, Kaydee Lynn; Paronian, Violeta; Seitz, Aaron R.
2016-01-01
Contrast sensitivity (CS) is widely used as a measure of visual function in both basic research and clinical evaluation. There is conflicting evidence on the extent to which measuring the full contrast sensitivity function (CSF) offers more functionally relevant information than a single measurement from an optotype CS test, such as the Pelli–Robson chart. Here we examine the relationship between functional CSF parameters and other measures of visual function, and establish a framework for predicting individual CSFs with effectively a zero-parameter model that shifts a standard-shaped template CSF horizontally and vertically according to independent measurements of high contrast acuity and letter CS, respectively. This method was evaluated for three different CSF tests: a chart test (CSV-1000), a computerized sine-wave test (M&S Sine Test), and a recently developed adaptive test (quick CSF). Subjects were 43 individuals with healthy vision or impairment too mild to be considered low vision (acuity range of −0.3 to 0.34 logMAR). While each test demands a slightly different normative template, results show that individual subject CSFs can be predicted with roughly the same precision as test–retest repeatability, confirming that individuals predominantly differ in terms of peak CS and peak spatial frequency. In fact, these parameters were sufficiently related to empirical measurements of acuity and letter CS to permit accurate estimation of the entire CSF of any individual with a deterministic model (zero free parameters). These results demonstrate that in many cases, measuring the full CSF may provide little additional information beyond letter acuity and contrast sensitivity. PMID:28006065
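The shifted-template idea can be sketched directly: a fixed-shape log-parabola CSF is positioned vertically by measured letter contrast sensitivity and horizontally by measured acuity. The template bandwidth and the acuity-to-peak-frequency mapping below are invented for illustration; the paper derives test-specific normative templates.

```python
import numpy as np

BANDWIDTH = 1.0   # log-parabola half-width in octaves (assumed shape)

def csf(freq, peak_sens, peak_freq):
    """Log-parabola CSF: log10 sensitivity falls off quadratically in
    log2 spatial frequency away from the peak."""
    return peak_sens * 10 ** (-(np.log2(freq / peak_freq) / BANDWIDTH) ** 2)

# Hypothetical independent measurements for one observer
letter_cs = 1.65          # Pelli-Robson log CS -> peak sensitivity
acuity_logmar = 0.0       # 20/20; assume peak frequency 4 c/deg at 20/20
peak_sens = 10 ** letter_cs
peak_freq = 4.0 * 10 ** (-acuity_logmar)

freqs = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
pred = csf(freqs, peak_sens, peak_freq)
print(np.round(pred, 1))
```

The "zero free parameters" claim corresponds to the fact that, given the template, the whole curve is fixed once the two chart measurements are plugged in.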
Schlunssen, V; Sigsgaard, T; Schaumburg, I; Kromhout, H
2004-01-01
Background: Exposure-response analyses in occupational studies rely on the ability to distinguish workers with regard to exposures of interest. Aims: To evaluate different estimates of current average exposure in an exposure-response analysis on dust exposure and cross-shift decline in FEV1 among woodworkers. Methods: Personal dust samples (n = 2181) as well as data on lung function parameters were available for 1560 woodworkers from 54 furniture industries. The exposure to wood dust for each worker was calculated in eight different ways using individual measurements, group based exposure estimates, a weighted estimate of individual and group based exposure estimates, and predicted values from mixed models. Exposure-response relations on cross-shift changes in FEV1 and exposure estimates were explored. Results: A positive exposure-response relation between average dust exposure and cross-shift FEV1 was shown for non-smokers only and appeared to be most pronounced among pine workers. In general, the highest slope and standard error (SE) were revealed for grouping by a combination of task and factory size, the lowest slope and SE were revealed for estimates based on individual measurements, with the weighted estimate and the predicted values in between. Grouping by quintiles of average exposure for task and factory combinations revealed low slopes and high SE, despite a high contrast. Conclusion: For non-smokers, average dust exposure and cross-shift FEV1 were associated in an exposure dependent manner, especially among pine workers. This study confirms the consequences of using different exposure assessment strategies when studying exposure-response relations. It is possible to optimise exposure assessment by combining information from individual and group based exposure estimates, for instance by applying predicted values from mixed effects models. PMID:15377768
2015-01-01
The Mass, Metabolism and Length Explanation (MMLE) was advanced in 1984 to explain the relationship between metabolic rate and body mass for birds and mammals. This paper reports on a modernized version of MMLE. MMLE deterministically computes the absolute value of Basal Metabolic Rate (BMR) and body mass for individual animals. MMLE is thus distinct from other examinations of these topics that use species-averaged data to estimate the parameters in a statistically best fit power law relationship such as BMR = a(body mass)^b. Beginning with the proposition that BMR is proportional to the number of mitochondria in an animal, two primary equations are derived that compute BMR and body mass as functions of an individual animal’s characteristic length and sturdiness factor. The characteristic length is a measurable skeletal length associated with an animal’s means of propulsion. The sturdiness factor expresses how sturdy or gracile an animal is. Eight other parameters occur in the equations that vary little among animals in the same phylogenetic group. The present paper modernizes MMLE by explicitly treating Froude and Strouhal dynamic similarity of mammals’ skeletal musculature, revising the treatment of BMR and using new data to estimate numerical values for the parameters that occur in the equations. A mass and length data set with 575 entries from the orders Rodentia, Chiroptera, Artiodactyla, Carnivora, Perissodactyla and Proboscidea is used. A BMR and mass data set with 436 entries from the orders Rodentia, Chiroptera, Artiodactyla and Carnivora is also used. With the estimated parameter values MMLE can calculate characteristic length and sturdiness factor values so that every BMR and mass datum from the BMR and mass data set can be computed exactly. Furthermore MMLE can calculate characteristic length and sturdiness factor values so that every body mass and length datum from the mass and length data set can be computed exactly.
Whether or not MMLE can calculate a sturdiness factor value so that an individual animal’s BMR and body mass can be simultaneously computed given its characteristic length awaits analysis of a data set that simultaneously reports all three of these items for individual animals. However for many of the addressed MMLE homogeneous groups, MMLE can predict the exponent obtained by regression analysis of the BMR and mass data using the exponent obtained by regression analysis of the mass and length data. This argues that MMLE may be able to accurately simultaneously compute BMR and mass for an individual animal. PMID:26355655
DeMars, Craig A; Auger-Méthé, Marie; Schlägel, Ulrike E; Boutin, Stan
2013-01-01
Analyses of animal movement data have primarily focused on understanding patterns of space use and the behavioural processes driving them. Here, we analyzed animal movement data to infer components of individual fitness, specifically parturition and neonate survival. We predicted that parturition and neonate loss events could be identified by sudden and marked changes in female movement patterns. Using GPS radio-telemetry data from female woodland caribou (Rangifer tarandus caribou), we developed and tested two novel movement-based methods for inferring parturition and neonate survival. The first method estimated movement thresholds indicative of parturition and neonate loss from population-level data then applied these thresholds in a moving-window analysis on individual time-series data. The second method used an individual-based approach that discriminated among three a priori models representing the movement patterns of non-parturient females, females with surviving offspring, and females losing offspring. The models assumed that step lengths (the distance between successive GPS locations) were exponentially distributed and that abrupt changes in the scale parameter of the exponential distribution were indicative of parturition and offspring loss. Both methods predicted parturition with near certainty (>97% accuracy) and produced appropriate predictions of parturition dates. Prediction of neonate survival was affected by data quality for both methods; however, when using high quality data (i.e., with few missing GPS locations), the individual-based method performed better, predicting neonate survival status with an accuracy rate of 87%. Understanding ungulate population dynamics often requires estimates of parturition and neonate survival rates. With GPS radio-collars increasingly being used in research and management of ungulates, our movement-based methods represent a viable approach for estimating rates of both parameters. PMID:24324866
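The population-threshold method can be caricatured in a few lines: step lengths are modelled as exponential, and a moving window flags the first abrupt drop in the mean step length (the MLE of the exponential scale). Window size, threshold ratio, and movement scales below are invented, not the paper's estimated values.

```python
import numpy as np

rng = np.random.default_rng(4)
pre = rng.exponential(500.0, 120)    # metres between fixes, pre-calving
post = rng.exponential(50.0, 80)     # restricted movement with a neonate
steps = np.concatenate([pre, post])

def detect_drop(x, window=20, ratio=0.25):
    """Return the first index where the mean of the next `window` steps
    falls below `ratio` times the mean of the previous `window` steps."""
    for i in range(window, x.size - window):
        before = x[i - window:i].mean()   # exponential MLE = sample mean
        after = x[i:i + window].mean()
        if after < ratio * before:
            return i
    return None

event = detect_drop(steps)
print(event)
```

The paper's individual-based variant instead compares likelihoods of whole step-length time series under competing change-point models, but the signal being exploited is the same abrupt change in the exponential scale parameter.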
Sampling design considerations for demographic studies: a case of colonial seabirds
Kendall, William L.; Converse, Sarah J.; Doherty, Paul F.; Naughton, Maura B.; Anders, Angela; Hines, James E.; Flint, Elizabeth
2009-01-01
For the purposes of making many informed conservation decisions, the main goal for data collection is to assess population status and allow prediction of the consequences of candidate management actions. Reducing the bias and variance of estimates of population parameters reduces uncertainty in population status and projections, thereby reducing the overall uncertainty under which a population manager must make a decision. In capture-recapture studies, imperfect detection of individuals, unobservable life-history states, local movement outside study areas, and tag loss can cause bias or precision problems with estimates of population parameters. Furthermore, excessive disturbance to individuals during capture-recapture sampling may be of concern because disturbance may have demographic consequences. We address these problems using as an example a monitoring program for Black-footed Albatross (Phoebastria nigripes) and Laysan Albatross (Phoebastria immutabilis) nesting populations in the northwestern Hawaiian Islands. To mitigate these estimation problems, we describe a synergistic combination of sampling design and modeling approaches. Solutions include multiple capture periods per season and multistate, robust design statistical models, dead recoveries and incidental observations, telemetry and data loggers, buffer areas around study plots to neutralize the effect of local movements outside study plots, and double banding and statistical models that account for band loss. We also present a variation on the robust capture-recapture design and a corresponding statistical model that minimizes disturbance to individuals. For the albatross case study, this less invasive robust design was more time efficient and, when used in combination with a traditional robust design, reduced the standard error of detection probability by 14% with only two hours of additional effort in the field.
These field techniques and associated modeling approaches are applicable to studies of most taxa being marked and in some cases have individually been applied to studies of birds, fish, herpetofauna, and mammals.
Predicting the size of individual and group differences on speeded cognitive tasks.
Chen, Jing; Hale, Sandra; Myerson, Joel
2007-06-01
An a priori test of the difference engine model (Myerson, Hale, Zheng, Jenkins, & Widaman, 2003) was conducted using a large, diverse sample of individuals who performed three speeded verbal tasks and three speeded visuospatial tasks. Results demonstrated that, as predicted by the model, the group standard deviation (SD) on any task was proportional to the amount of processing required by that task. Both individual performances as well as those of fast and slow subgroups could be accurately predicted by the model using no free parameters, just an individual or subgroup's mean z-score and the values of theoretical constructs estimated from fits to the group SDs. Taken together, these results are consistent with post hoc analyses reported by Myerson et al. and provide even stronger supporting evidence. In particular, the ability to make quantitative predictions without using any free parameters provides the clearest demonstration to date of the power of an analytic approach on the basis of the difference engine.
Likelihoods for fixed rank nomination networks
Hoff, Peter; Fosdick, Bailey; Volfovsky, Alex; Stovel, Katherine
2014-01-01
Many studies that gather social network data use survey methods that lead to censored, missing, or otherwise incomplete information. For example, the popular fixed rank nomination (FRN) scheme, often used in studies of schools and businesses, asks study participants to nominate and rank at most a small number of contacts or friends, leaving the existence of other relations uncertain. However, most statistical models are formulated in terms of completely observed binary networks. Statistical analyses of FRN data with such models ignore the censored and ranked nature of the data and could potentially result in misleading statistical inference. To investigate this possibility, we compare Bayesian parameter estimates obtained from a likelihood for complete binary networks with those obtained from likelihoods that are derived from the FRN scheme, and therefore accommodate the ranked and censored nature of the data. We show analytically and via simulation that the binary likelihood can provide misleading inference, particularly for certain model parameters that relate network ties to characteristics of individuals and pairs of individuals. We also compare these different likelihoods in a data analysis of several adolescent social networks. For some of these networks, the parameter estimates from the binary and FRN likelihoods lead to different conclusions, indicating the importance of analyzing FRN data with a method that accounts for the FRN survey design. PMID:25110586
Kinetic modelling of anaerobic hydrolysis of solid wastes, including disintegration processes
DOE Office of Scientific and Technical Information (OSTI.GOV)
García-Gen, Santiago; Sousbie, Philippe; Rangaraj, Ganesh
2015-01-15
Highlights: • Fractionation of solid wastes into readily and slowly biodegradable fractions. • Kinetic coefficients estimation from mono-digestion batch assays. • Validation of kinetic coefficients with a co-digestion continuous experiment. • Simulation of batch and continuous experiments with an ADM1-based model. - Abstract: A methodology to estimate disintegration and hydrolysis kinetic parameters of solid wastes and validate an ADM1-based anaerobic co-digestion model is presented. Kinetic parameters of the model were calibrated from batch reactor experiments treating fruit and vegetable wastes (among other residues) individually, following a new protocol for batch tests. In addition, decoupled disintegration kinetics for the readily and slowly biodegradable fractions of solid wastes was considered. Calibrated parameters from batch assays of individual substrates were used to validate the model for a semi-continuous co-digestion operation treating 5 fruit and vegetable wastes simultaneously. The semi-continuous experiment was carried out in a lab-scale CSTR reactor for 15 weeks at an organic loading rate ranging between 2.0 and 4.7 g VS/L d. The model (built in Matlab/Simulink) fit the experimental results well in both batch and semi-continuous mode and served as a powerful tool to simulate the digestion or co-digestion of solid wastes.
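The decoupled first-order disintegration of the readily and slowly biodegradable fractions described above can be sketched as follows; the rate constants and fractionation below are illustrative placeholders, not the calibrated values from the study:

```python
import math

def remaining_substrate(s0_fast, s0_slow, k_fast, k_slow, t):
    """First-order decay of two decoupled substrate fractions (g VS/L).

    Each fraction disintegrates independently: dS/dt = -k * S,
    so S(t) = S0 * exp(-k * t).
    """
    return s0_fast * math.exp(-k_fast * t) + s0_slow * math.exp(-k_slow * t)

# Illustrative parameters (not the calibrated ADM1 values): a waste that is
# 60% readily biodegradable (k = 0.5 1/d) and 40% slowly biodegradable
# (k = 0.05 1/d).
total0 = 10.0
s_fast, s_slow = 0.6 * total0, 0.4 * total0
for day in (0, 5, 15):
    print(day, round(remaining_substrate(s_fast, s_slow, 0.5, 0.05, day), 2))
```

The slowly biodegradable fraction dominates the residual substrate after the first few days, which is why batch assays long enough to resolve both kinetics are needed for calibration.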
Accurate age estimation in small-scale societies
Smith, Daniel; Gerbault, Pascale; Dyble, Mark; Migliano, Andrea Bamberg; Thomas, Mark G.
2017-01-01
Precise estimation of age is essential in evolutionary anthropology, especially to infer population age structures and understand the evolution of human life history diversity. However, in small-scale societies, such as hunter-gatherer populations, time is often not referred to in calendar years, and accurate age estimation remains a challenge. We address this issue by proposing a Bayesian approach that accounts for age uncertainty inherent to fieldwork data. We developed a Gibbs sampling Markov chain Monte Carlo algorithm that produces posterior distributions of ages for each individual, based on a ranking order of individuals from youngest to oldest and age ranges for each individual. We first validate our method on 65 Agta foragers from the Philippines with known ages, and show that our method generates age estimations that are superior to previously published regression-based approaches. We then use data on 587 Agta collected during recent fieldwork to demonstrate how multiple partial age ranks coming from multiple camps of hunter-gatherers can be integrated. Finally, we exemplify how the distributions generated by our method can be used to estimate important demographic parameters in small-scale societies: here, age-specific fertility patterns. Our flexible Bayesian approach will be especially useful to improve cross-cultural life history datasets for small-scale societies for which reliable age records are difficult to acquire. PMID:28696282
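A minimal sketch of the kind of Gibbs sampler described above, assuming only a single youngest-to-oldest ranking and a plausible age range per individual (the published algorithm additionally integrates multiple partial rankings from multiple camps):

```python
import random
import statistics

def gibbs_ages(lo, hi, sweeps=2000, seed=1):
    """Gibbs sampler for ages constrained by per-person ranges (lo, hi)
    and a known youngest-to-oldest ranking (index order).

    Each full conditional is uniform on the interval jointly allowed by
    the person's own age range and the current ages of their rank
    neighbours. Returns per-person lists of posterior draws.
    """
    rng = random.Random(seed)
    n = len(lo)
    ages = [(a + b) / 2 for a, b in zip(lo, hi)]
    for i in range(1, n):                 # make start point respect the ranking
        ages[i] = max(ages[i], ages[i - 1])
    draws = [[] for _ in range(n)]
    for sweep in range(sweeps):
        for i in range(n):
            lower = max(lo[i], ages[i - 1] if i > 0 else lo[i])
            upper = min(hi[i], ages[i + 1] if i < n - 1 else hi[i])
            ages[i] = rng.uniform(lower, upper)
        if sweep >= sweeps // 2:          # discard first half as burn-in
            for i in range(n):
                draws[i].append(ages[i])
    return draws

# Three people ranked youngest to oldest, with overlapping age ranges.
draws = gibbs_ages(lo=[10, 12, 14], hi=[20, 22, 30])
print([round(statistics.mean(d), 1) for d in draws])
```

The ranking information narrows each posterior relative to the raw age range alone, which is the source of the method's gain over regression-based approaches.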
Accurate age estimation in small-scale societies.
Diekmann, Yoan; Smith, Daniel; Gerbault, Pascale; Dyble, Mark; Page, Abigail E; Chaudhary, Nikhil; Migliano, Andrea Bamberg; Thomas, Mark G
2017-08-01
Distinctive Correspondence Between Separable Visual Attention Functions and Intrinsic Brain Networks
Ruiz-Rizzo, Adriana L.; Neitzel, Julia; Müller, Hermann J.; Sorg, Christian; Finke, Kathrin
2018-01-01
Separable visual attention functions are assumed to rely on distinct but interacting neural mechanisms. Bundesen's “theory of visual attention” (TVA) allows the mathematical estimation of independent parameters that characterize individuals' visual attentional capacity (i.e., visual processing speed and visual short-term memory storage capacity) and selectivity functions (i.e., top-down control and spatial laterality). However, it is unclear whether these parameters distinctively map onto different brain networks obtained from intrinsic functional connectivity, which organizes slowly fluctuating ongoing brain activity. In our study, 31 demographically homogeneous healthy young participants performed whole- and partial-report tasks and underwent resting-state functional magnetic resonance imaging (rs-fMRI). Report accuracy was modeled using TVA to estimate, individually, the four TVA parameters. Networks encompassing cortical areas relevant for visual attention were derived from independent component analysis of rs-fMRI data: visual, executive control, right and left frontoparietal, and ventral and dorsal attention networks. Two TVA parameters were mapped on particular functional networks. First, participants with higher (vs. lower) visual processing speed showed lower functional connectivity within the ventral attention network. Second, participants with more (vs. less) efficient top-down control showed higher functional connectivity within the dorsal attention network and lower functional connectivity within the visual network. Additionally, higher performance was associated with higher functional connectivity between networks: specifically, between the ventral attention and right frontoparietal networks for visual processing speed, and between the visual and executive control networks for top-down control. The higher inter-network functional connectivity was related to lower intra-network connectivity. 
These results demonstrate that separable visual attention parameters that are assumed to constitute relatively stable traits correspond distinctly to the functional connectivity both within and between particular functional networks. This implies that individual differences in basic attention functions are represented by differences in the coherence of slowly fluctuating brain activity. PMID:29662444
Use of spatial capture-recapture modeling and DNA data to estimate densities of elusive animals
Kery, Marc; Gardner, Beth; Stoeckle, Tabea; Weber, Darius; Royle, J. Andrew
2011-01-01
Assessment of abundance, survival, recruitment rates, and density (i.e., population assessment) is especially challenging for elusive species most in need of protection (e.g., rare carnivores). Individual identification methods, such as DNA sampling, provide ways of studying such species efficiently and noninvasively. Additionally, statistical methods that correct for undetected animals and account for locations where animals are captured are available to efficiently estimate density and other demographic parameters. We collected hair samples of European wildcat (Felis silvestris) from cheek-rub lure sticks, extracted DNA from the samples, and identified each animal's genotype. To estimate the density of wildcats, we used Bayesian inference in a spatial capture-recapture model. We used WinBUGS to fit a model that accounted for differences in detection probability among individuals and seasons and between two lure arrays. We detected 21 individual wildcats (including possible hybrids) 47 times. Wildcat density was estimated at 0.29/km2 (SE 0.06), and 95% of the activity of wildcats was estimated to occur within 1.83 km from their home-range center. Lures located systematically were associated with a greater number of detections than lures placed in a cell on the basis of expert opinion. Detection probability of individual cats was greatest in late March. Our model is a generalized linear mixed model; hence, it can be easily extended, for instance, to incorporate trap- and individual-level covariates. We believe that the combined use of noninvasive sampling techniques and spatial capture-recapture models will improve population assessments, especially for rare and elusive animals.
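The reported 1.83-km 95% activity radius is consistent with a bivariate normal movement model, the usual assumption behind half-normal detection functions in spatial capture-recapture. A sketch of that relationship (the baseline detection probability p0 below is a made-up illustration, not an estimate from the study):

```python
import math

def halfnormal_detection(d, p0, sigma):
    """Half-normal detection model used in spatial capture-recapture:
    probability of detecting an individual whose home-range centre lies
    at distance d (km) from the detector."""
    return p0 * math.exp(-d * d / (2.0 * sigma * sigma))

def radius_for_activity(sigma, prob=0.95):
    """Radius containing `prob` of activity under a bivariate normal
    movement model: r = sigma * sqrt(-2 * ln(1 - prob))."""
    return sigma * math.sqrt(-2.0 * math.log(1.0 - prob))

# Back out the movement scale sigma from the reported 95% radius of 1.83 km.
sigma = 1.83 / math.sqrt(-2.0 * math.log(0.05))
print(round(sigma, 3), round(halfnormal_detection(1.0, 0.1, sigma), 4))
```

Detection probability at a lure thus falls off smoothly with distance from an animal's home-range centre, which is what lets the model convert detection locations into a density estimate.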
NASA Astrophysics Data System (ADS)
Anderson, Christian Carl
This Dissertation explores the physics underlying the propagation of ultrasonic waves in bone and in heart tissue through the use of Bayesian probability theory. Quantitative ultrasound is a noninvasive modality used for clinical detection, characterization, and evaluation of bone quality and cardiovascular disease. Approaches that extend the state of knowledge of the physics underpinning the interaction of ultrasound with inherently inhomogeneous and anisotropic tissue have the potential to enhance its clinical utility. Simulations of fast and slow compressional wave propagation in cancellous bone were carried out to demonstrate the plausibility of a proposed explanation for the widely reported anomalous negative dispersion in cancellous bone. The results showed that negative dispersion could arise from analysis that proceeded under the assumption that the data consist of only a single ultrasonic wave, when in fact two overlapping and interfering waves are present. The confounding effect of overlapping fast and slow waves was addressed by applying Bayesian parameter estimation to simulated data, to experimental data acquired on bone-mimicking phantoms, and to data acquired in vitro on cancellous bone. The Bayesian approach successfully estimated the properties of the individual fast and slow waves even when they strongly overlapped in the acquired data. The Bayesian parameter estimation technique was further applied to an investigation of the anisotropy of ultrasonic properties in cancellous bone. The degree to which fast and slow waves overlap is partially determined by the angle of insonation of ultrasound relative to the predominant direction of trabecular orientation. In the past, studies of anisotropy have been limited by interference between fast and slow waves over a portion of the range of insonation angles. 
Bayesian analysis estimated attenuation, velocity, and amplitude parameters over the entire range of insonation angles, allowing a more complete characterization of anisotropy. A novel piecewise linear model for the cyclic variation of ultrasonic backscatter from myocardium was proposed. Models of cyclic variation for 100 type 2 diabetes patients and 43 normal control subjects were constructed using Bayesian parameter estimation. Parameters determined from the model, specifically rise time and slew rate, were found to be more reliable in differentiating between subject groups than the previously employed magnitude parameter.
Fiero, Mallorie H; Hsu, Chiu-Hsieh; Bell, Melanie L
2017-11-20
We extend the pattern-mixture approach to handle missing continuous outcome data in longitudinal cluster randomized trials, which randomize groups of individuals to treatment arms, rather than the individuals themselves. Individuals who drop out at the same time point are grouped into the same dropout pattern. We approach extrapolation of the pattern-mixture model by applying multilevel multiple imputation, which imputes missing values while appropriately accounting for the hierarchical data structure found in cluster randomized trials. To assess parameters of interest under various missing data assumptions, imputed values are multiplied by a sensitivity parameter, k, which increases or decreases imputed values. Using simulated data, we show that estimates of parameters of interest can vary widely under differing missing data assumptions. We conduct a sensitivity analysis using real data from a cluster randomized trial by increasing k until the treatment effect inference changes. By performing a sensitivity analysis for missing data, researchers can assess whether certain missing data assumptions are reasonable for their cluster randomized trial. Copyright © 2017 John Wiley & Sons, Ltd.
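A single-level, single-imputation sketch of the sensitivity-parameter idea, in which imputed values are scaled by k and the treatment effect is re-estimated across a grid of k values (the paper itself uses multilevel multiple imputation to respect the cluster structure; the data below are hypothetical):

```python
import random
import statistics

def treatment_effect(outcomes, arms, k, seed=0):
    """Sensitivity-analysis sketch: impute each missing outcome (None) by
    drawing from the observed values in the same arm, multiply the imputed
    value by the sensitivity parameter k, then contrast arm means.

    Single-level, single-imputation simplification of the pattern-mixture
    approach described in the abstract.
    """
    rng = random.Random(seed)
    observed = {a: [y for y, b in zip(outcomes, arms) if b == a and y is not None]
                for a in (0, 1)}
    filled = {0: [], 1: []}
    for y, a in zip(outcomes, arms):
        filled[a].append(y if y is not None else k * rng.choice(observed[a]))
    return statistics.mean(filled[1]) - statistics.mean(filled[0])

outcomes = [5.1, 4.8, None, 5.0, 7.2, None, 6.9, 7.4]
arms     = [0,   0,   0,    0,   1,   1,    1,   1]
# Increase or decrease k to see how far the missing-data assumption must be
# stretched before the treatment-effect inference would change.
for k in (0.8, 1.0, 1.2):
    print(k, round(treatment_effect(outcomes, arms, k), 2))
```

k = 1 corresponds to a missing-at-random-style imputation; values away from 1 encode departures from that assumption.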
Estimation of stature by using lower limb dimensions in the Malaysian population.
Nor, Faridah Mohd; Abdullah, Nurliza; Mustapa, Al-Mizan; Qi Wen, Leong; Faisal, Nurulina Aimi; Ahmad Nazari, Dayang Anis Asyikin
2013-11-01
Estimation of stature is an important step in developing a biological profile for human identification. It may provide a valuable indicator for an unknown individual in a population. The aim of this study was to analyse the relationship between stature and lower limb dimensions in the Malaysian population. The sample comprised 100 corpses, 69 males and 31 females, aged between 20 and 90 years. The parameters measured were stature, thigh length, lower leg length, leg length, foot length, foot height and foot breadth. Results showed that the mean values in males were significantly higher than those in females (p < 0.05). There were significant correlations between lower limb dimensions and stature. Cross-validation of the equation on 100 individuals showed close approximation between known stature and estimated stature. It was concluded that lower limb dimensions were useful for estimation of stature, which should be validated in future studies. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
Hagar, Yolanda C; Harvey, Danielle J; Beckett, Laurel A
2016-08-30
We develop a multivariate cure survival model to estimate lifetime patterns of colorectal cancer screening. Screening data cover long periods of time, with sparse observations for each person. Some events may occur before the study begins or after the study ends, so the data are both left-censored and right-censored, and some individuals are never screened (the 'cured' population). We propose a multivariate parametric cure model that can be used with left-censored and right-censored data. Our model allows for the estimation of the time to screening as well as the average number of times individuals will be screened. We calculate likelihood functions based on the observations for each subject using a distribution that accounts for within-subject correlation and estimate parameters using Markov chain Monte Carlo methods. We apply our methods to the estimation of lifetime colorectal cancer screening behavior in the SEER-Medicare data set. Copyright © 2016 John Wiley & Sons, Ltd. Copyright © 2016 John Wiley & Sons, Ltd.
Bradford, Amanda L.; Forney, Karin A.; Oleson, Erin M.; Barlow, Jay
2014-01-01
For biological populations that form aggregations (or clusters) of individuals, cluster size is an important parameter in line-transect abundance estimation and should be accurately measured. Cluster size in cetaceans has traditionally been represented as the total number of individuals in a group, but group size may be underestimated if group members are spatially diffuse. Groups of false killer whales (Pseudorca crassidens) can comprise numerous subgroups that are dispersed over tens of kilometers, leading to a spatial mismatch between a detected group and the theoretical framework of line-transect analysis. Three stocks of false killer whales are found within the U.S. Exclusive Economic Zone of the Hawaiian Islands (Hawaiian EEZ): an insular main Hawaiian Islands stock, a pelagic stock, and a Northwestern Hawaiian Islands (NWHI) stock. A ship-based line-transect survey of the Hawaiian EEZ was conducted in the summer and fall of 2010, resulting in six systematic-effort visual sightings of pelagic (n = 5) and NWHI (n = 1) false killer whale groups. The maximum number and spatial extent of subgroups per sighting was 18 subgroups and 35 km, respectively. These sightings were combined with data from similar previous surveys and analyzed within the conventional line-transect estimation framework. The detection function, mean cluster size, and encounter rate were estimated separately to appropriately incorporate data collected using different methods. Unlike previous line-transect analyses of cetaceans, subgroups were treated as the analytical cluster instead of groups because subgroups better conform to the specifications of line-transect theory. Bootstrap values (n = 5,000) of the line-transect parameters were randomly combined to estimate the variance of stock-specific abundance estimates. Hawai’i pelagic and NWHI false killer whales were estimated to number 1,552 (CV = 0.66; 95% CI = 479–5,030) and 552 (CV = 1.09; 95% CI = 97–3,123) individuals, respectively. 
Subgroup structure is an important factor to consider in line-transect analyses of false killer whales and other species with complex grouping patterns. PMID:24587372
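The random recombination of bootstrapped line-transect components can be sketched as follows; all numbers are illustrative placeholders (not the survey's estimates), and the detection function is reduced to an effective strip width (ESW):

```python
import random
import statistics

def abundance(n_clusters, mean_cluster, esw_km, effort_km, area_km2):
    """Conventional line-transect estimator: density = n * E[s] / (2 * ESW * L),
    scaled by the study-area size to give abundance."""
    density = n_clusters * mean_cluster / (2.0 * esw_km * effort_km)
    return density * area_km2

# Draws of the three independently estimated components (encounter count,
# mean cluster size, ESW) are recombined at random to propagate their
# uncertainty into the abundance estimate, as in the abstract's bootstrap.
rng = random.Random(42)
boot = [abundance(n_clusters=rng.gauss(30, 6),
                  mean_cluster=rng.gauss(3.0, 0.5),
                  esw_km=rng.gauss(2.5, 0.4),
                  effort_km=9000.0,
                  area_km2=2.4e6) for _ in range(5000)]
est = statistics.mean(boot)
cv = statistics.stdev(boot) / est
print(round(est), round(cv, 2))
```

Treating subgroups rather than whole groups as the analytical cluster changes which counts and cluster sizes feed this estimator, but not the estimator's form.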
Survival of European mouflon (Artiodactyla: Bovidae) in Hawai'i based on tooth cementum lines
Hess, S.C.; Stephens, R.M.; Thompson, T.L.; Danner, R.M.; Kawakami, B.
2011-01-01
Reliable techniques for estimating age of ungulates are necessary to determine population parameters such as age structure and survival. Techniques that rely on dentition, horn, and facial patterns have limited utility for European mouflon sheep (Ovis gmelini musimon), but tooth cementum lines may offer a useful alternative. Cementum lines may not be reliable outside temperate regions, however, because lack of seasonality in diet may affect annulus formation. We evaluated the utility of tooth cementum lines for estimating age of mouflon in Hawai'i in comparison to dentition. Cementum lines were present in mouflon from Mauna Loa, island of Hawai'i, but were less distinct than in North American sheep. The two age-estimation methods provided similar estimates for individuals aged ≤3 yr by dentition (the maximum age estimable by dentition), with exact matches in 51% (18/35) of individuals, and an average difference of 0.8 yr (range 0-4). Estimates of age from cementum lines were higher than those from dentition in 40% (14/35) and lower in 9% (3/35) of individuals. Discrepancies in age estimates between techniques and between paired tooth samples estimated by cementum lines were related to certainty categories assigned by the clarity of cementum lines, reinforcing the importance of collecting a sufficient number of samples to compensate for samples of lower quality, which in our experience, comprised approximately 22% of teeth. Cementum lines appear to provide relatively accurate age estimates for mouflon in Hawai'i, allow estimating age beyond 3 yr, and they offer more precise estimates than tooth eruption patterns. After constructing an age distribution, we estimated annual survival with a log-linear model to be 0.596 (95% CI 0.554-0.642) for this heavily controlled population. © 2011 by University of Hawai'i Press.
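Estimating annual survival from an age distribution with a log-linear model can be sketched as below, under the standard assumption of a stable, stationary age distribution; the age-frequency counts are hypothetical:

```python
import math

def survival_from_age_distribution(counts):
    """Fit ln(count_x) = a + b*x by ordinary least squares to an age
    frequency distribution and return exp(b), the implied annual survival
    rate (assumes a stable, stationary age distribution)."""
    xs = list(range(len(counts)))
    ys = [math.log(c) for c in counts]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
    return math.exp(b)

# Hypothetical age-frequency data declining roughly 40% per year of age.
print(round(survival_from_age_distribution([100, 60, 36, 22, 13]), 3))
```

On a log scale the age distribution is a straight line whose slope is the log of annual survival, which is why cementum-based ages beyond 3 yr materially improve the fit.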
Bouchez, A; Goffinet, B
1990-02-01
Selection indices can be used to predict one trait from information available on several traits in order to improve the prediction accuracy. Plant or animal breeders are interested in selecting only the best individuals, and need to compare the efficiency of different trait combinations in order to choose the index ensuring the best prediction quality for individual values. As the usual tools for index evaluation do not remain unbiased in all cases, we propose a robust way of evaluation by means of an estimator of the mean-square error of prediction (EMSEP). This estimator remains valid even when parameters are not known, as usually assumed, but are estimated. EMSEP is applied to the choice of an indirect multitrait selection index at the F5 generation of a classical breeding scheme for soybeans. Best predictions for precocity are obtained by means of indices using only part of the available information.
Challenges of Developing Design Discharge Estimates with Uncertain Data and Information
NASA Astrophysics Data System (ADS)
Senarath, S. U. S.
2016-12-01
This study focuses on design discharge estimates obtained for gauged basins through flood flow frequency analysis. Bulletin 17B (B17B) guidelines are widely used in the USA for developing these design estimates, which are required for many water resources engineering design applications. The guidelines include options for treating outliers and historical data and for selecting distribution parameters, provided as a means of accounting for uncertain data and information, primarily in the flow record. This study evaluates the individual and cumulative effects of these options on design discharge estimates, using data from several gauges that are part of the United States Geological Survey's Hydro-Climatic Data Network. The results show that, despite the availability of rigorous and detailed guidelines for flood frequency analysis, design discharge estimates can still vary substantially from user to user, depending on the data and model parameter options each user chooses. The findings therefore have strong implications for water resources engineers and other professionals who use B17B-based design discharge estimates in their work.
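A stripped-down sketch of the core B17B computation, fitting a log-Pearson Type III distribution by the method of moments and using the Wilson-Hilferty frequency-factor approximation; it omits exactly the outlier tests, historical-data adjustments, and regional skew weighting whose options the study examines, and the peak flows are hypothetical:

```python
import math

def lp3_quantile(flows, z):
    """Log-Pearson Type III quantile by the method of moments on log10
    flows. K(z, g) uses the Wilson-Hilferty approximation; z is the
    standard normal deviate for the target exceedance probability."""
    logs = [math.log10(q) for q in flows]
    n = len(logs)
    mean = sum(logs) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in logs) / (n - 1))
    g = (n * sum((x - mean) ** 3 for x in logs)) / ((n - 1) * (n - 2) * sd ** 3)
    if abs(g) < 1e-9:
        k = z
    else:
        k = (2.0 / g) * ((1.0 + g * z / 6.0 - g * g / 36.0) ** 3 - 1.0)
    return 10.0 ** (mean + k * sd)

# Hypothetical annual peak flows (m^3/s); z = 2.326 gives the 1% annual
# exceedance ("100-year") design discharge.
peaks = [120, 95, 310, 150, 80, 260, 175, 140, 450, 110, 200, 90]
print(round(lp3_quantile(peaks, 2.326)))
```

Because the fitted skew g is sensitive to extreme observations, applying or skipping the guidelines' outlier and historical-data options shifts this quantile, which is the user-to-user variability the study quantifies.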
Measuring Biomass and Carbon Stock in Resprouting Woody Plants
Matula, Radim; Damborská, Lenka; Nečasová, Monika; Geršl, Milan; Šrámek, Martin
2015-01-01
Resprouting multi-stemmed woody plants form an important component of the woody vegetation in many ecosystems, but a clear methodology for reliable measurement of their size and quick, non-destructive estimation of their woody biomass and carbon stock is lacking. Our goal was to find a minimum number of sprouts, i.e., the most easily obtainable, and sprout parameters that should be measured for accurate sprout biomass and carbon stock estimates. Using data for 5 common temperate woody species, we modelled carbon stock and sprout biomass as a function of an increasing number of sprouts in an interaction with different sprout parameters. The mean basal diameter of only two to five of the thickest sprouts and the basal diameter and DBH of the thickest sprouts per stump proved to be accurate estimators for the total sprout biomass of the individual resprouters and the populations of resprouters, respectively. Carbon stock estimates were strongly correlated with biomass estimates, but relative carbon content varied among species. Our study demonstrated that the size of the resprouters can be easily measured, and their biomass and carbon stock estimated; therefore, resprouters can be simply incorporated into studies of woody vegetation. PMID:25719601
Defining Uncertainty and Error in Planktic Foraminiferal Oxygen Isotope Measurements
NASA Astrophysics Data System (ADS)
Fraass, A. J.; Lowery, C.
2016-12-01
Foraminifera are the backbone of paleoceanography, and planktic foraminifera are one of the leading tools for reconstructing water column structure. Currently, there are unconstrained variables when dealing with the reproducibility of oxygen isotope measurements. This study presents the first results from a simple model of foraminiferal calcification (Foraminiferal Isotope Reproducibility Model; FIRM), designed to estimate the precision and accuracy of oxygen isotope measurements. FIRM produces synthetic isotope data using parameters including location, depth habitat, season, number of individuals included in measurement, diagenesis, misidentification, size variation, and vital effects. Reproducibility is then tested using Monte Carlo simulations. The results from a series of experiments show that reproducibility is largely controlled by the number of individuals in each measurement, but also strongly a function of local oceanography if the number of individuals is held constant. Parameters like diagenesis or misidentification have an impact on both the precision and the accuracy of the data. Currently FIRM is a tool to estimate isotopic error values best employed in the Holocene. It is also a tool to explore the impact of myriad factors on the fidelity of paleoceanographic records. FIRM was constructed in the open-source computing environment R and is freely available via GitHub. We invite modification and expansion, and have planned inclusions for benthic foram reproducibility and stratigraphic uncertainty.
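The central Monte Carlo idea in FIRM, that reproducibility is largely controlled by the number of individuals per measurement, can be sketched as follows (the population values are illustrative, and the real model adds location, depth habitat, season, diagenesis, and misidentification effects):

```python
import random
import statistics

def measurement_sd(pop_mean, pop_sd, n_individuals, n_measurements=2000, seed=7):
    """Monte Carlo sketch of isotope-measurement reproducibility: each
    simulated measurement pools (averages) n_individuals foram shells drawn
    from a population of per-individual d18O values; the SD across repeated
    measurements quantifies reproducibility."""
    rng = random.Random(seed)
    measurements = [
        statistics.mean(rng.gauss(pop_mean, pop_sd) for _ in range(n_individuals))
        for _ in range(n_measurements)
    ]
    return statistics.stdev(measurements)

# Reproducibility improves roughly as 1/sqrt(n) with shells per measurement.
for n in (1, 5, 25):
    print(n, round(measurement_sd(pop_mean=-1.5, pop_sd=0.4, n_individuals=n), 3))
```

Local oceanography enters through the per-individual spread (pop_sd here), which is why reproducibility at fixed n still varies from site to site.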
Attitudes to Gun Control in an American Twin Sample: Sex Differences in the Causes of Variation.
Eaves, Lindon J; Silberg, Judy L
2017-10-01
The genetic and social causes of individual differences in attitudes to gun control are estimated in a sample of senior male and female twin pairs in the United States. Genetic and environmental parameters were estimated by weighted least squares applied to polychoric correlations for monozygotic (MZ) and dizygotic (DZ) twins of both sexes. The analysis suggests twin similarity for attitudes to gun control in men is entirely genetic while that in women is purely social. Although the volunteer sample is small, the analysis illustrates how the well-tested concepts and methods of genetic epidemiology may be a fertile resource for deepening our scientific understanding of biological and social pathways that affect individual risk to gun violence.
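For intuition, a Falconer-style decomposition of twin correlations into genetic, shared-environment, and unique-environment components, a simpler cousin of the weighted least-squares model fitting used in the paper; the correlations below are hypothetical, chosen only to mirror the qualitative pattern reported:

```python
def ace_from_twin_correlations(r_mz, r_dz):
    """Falconer-style ACE decomposition from monozygotic (MZ) and dizygotic
    (DZ) twin correlations: additive genetic a2 = 2*(rMZ - rDZ), shared
    environment c2 = 2*rDZ - rMZ, unique environment e2 = 1 - rMZ."""
    a2 = 2.0 * (r_mz - r_dz)
    c2 = 2.0 * r_dz - r_mz
    e2 = 1.0 - r_mz
    return a2, c2, e2

# Hypothetical polychoric correlations: a "male-like" pattern where rMZ is
# twice rDZ (similarity entirely genetic) and a "female-like" pattern where
# rMZ equals rDZ (similarity purely social).
print(ace_from_twin_correlations(0.6, 0.3))
print(ace_from_twin_correlations(0.5, 0.5))
```

rMZ = 2 rDZ yields zero shared-environment variance, while rMZ = rDZ yields zero genetic variance, which is how the contrasting male and female findings arise from the twin correlations.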
Hewitt, David A.; Janney, Eric C.; Hayes, Brian S.; Harris, Alta C.
2015-10-02
Despite relatively high survival in most years, we conclude that both species have experienced substantial decreases in the abundance of spawning adults because losses from mortality have not been balanced by recruitment of new individuals. Although capture-recapture data indicate substantial recruitment of new individuals into the spawning populations for SNS and river spawning LRS in some years, size data do not corroborate these estimates. As a result, the status of the endangered sucker populations in Upper Klamath Lake remains worrisome, especially for shortnose suckers. Our monitoring program provides a robust platform for estimating vital population parameters, evaluating the status of the populations, and assessing the effectiveness of conservation and recovery efforts.
Optimal sampling for radiotelemetry studies of spotted owl habitat and home range.
Andrew B. Carey; Scott P. Horton; Janice A. Reid
1989-01-01
Radiotelemetry studies of spotted owl (Strix occidentalis) ranges and habitat-use must be designed efficiently to estimate parameters needed for a sample of individuals sufficient to describe the population. Independent data are required by analytical methods and provide the greatest return of information per effort. We examined time series of...
NASA Technical Reports Server (NTRS)
Boorstyn, R. R.
1973-01-01
Research is reported dealing with problems of digital data transmission and computer communications networks. The results of four individual studies are presented which include: (1) signal processing with finite state machines, (2) signal parameter estimation from discrete-time observations, (3) digital filtering for radar signal processing applications, and (4) multiple server queues where all servers are not identical.
Generating multi-scale albedo look-up maps using MODIS BRDF/Albedo products and landsat imagery
USDA-ARS?s Scientific Manuscript database
Surface albedo determines radiative forcing and is a key parameter for driving Earth’s climate. Better characterization of surface albedo for individual land cover types can reduce the uncertainty in estimating changes to Earth’s radiation balance due to land cover change. This paper presents a mult...
Si, Jiwei; Li, Hongxia; Sun, Yan; Xu, Yanli; Sun, Yu
2016-01-01
The present study used the choice/no-choice method to investigate the effect of math anxiety on the strategy used in computational estimation and mental arithmetic tasks and to examine age-related differences in this regard. Fifty-seven fourth graders, 56 sixth graders, and 60 adults were randomly selected to participate in the experiment. Results showed the following: (1) High-anxious individuals were more likely to use a rounding-down strategy in the computational estimation task under the best-choice condition. Additionally, sixth-grade students and adults performed faster than fourth-grade students on the strategy execution parameter. Math anxiety affected response times (RTs) and the accuracy with which strategies were executed. (2) The execution of the partial-decomposition strategy was superior to that of the full-decomposition strategy on the mental arithmetic task. Low-math-anxious persons provided more accurate answers than did high-math-anxious participants under the no-choice condition. This difference was significant for sixth graders. With regard to the strategy selection parameter, the RTs for strategy selection varied with age.
EGSIEM combination service: combination of GRACE monthly K-band solutions on normal equation level
NASA Astrophysics Data System (ADS)
Meyer, Ulrich; Jean, Yoomin; Arnold, Daniel; Jäggi, Adrian
2017-04-01
The European Gravity Service for Improved Emergency Management (EGSIEM) project offers a scientific combination service, combining for the first time monthly GRACE gravity fields of different analysis centers (ACs) on normal equation (NEQ) level, thus correctly taking into account all correlations between the gravity field coefficients and the pre-eliminated orbit and instrument parameters. Optimal weights for the individual NEQs are commonly derived by variance component estimation (VCE), as is the case for the products of the International VLBI Service (IVS) or the DTRF2008 reference frame realisation, which are also derived by combination on NEQ level. But variance factors are based on post-fit residuals and depend strongly on observation sampling and noise modeling, both of which differ widely among the individual EGSIEM ACs. These variance factors therefore do not necessarily represent the true error levels of the estimated gravity field parameters, which are still governed by analysis noise. We present a combination approach where weights are derived on solution level, thereby taking the analysis noise into account.
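As a toy illustration of what "combining on normal-equation level" means, the sketch below stacks scalar normal equations N_i · x = b_i from three hypothetical ACs into one weighted system before solving. The reduction to a single coefficient, the numbers, and the equal weights are assumptions for illustration only, not the EGSIEM processing or its VCE/solution-level weighting.

```python
# Toy NEQ-level combination: each AC i contributes a normal equation
# N_i * x = b_i for a common parameter x (here a single scalar standing
# in for a gravity field coefficient). The combination solves
# (sum_i w_i N_i) * x = sum_i w_i b_i. All inputs are illustrative.

def combine_neq(neqs, rhs, weights):
    n = sum(w * ni for w, ni in zip(weights, neqs))
    b = sum(w * bi for w, bi in zip(weights, rhs))
    return b / n

# Three ACs whose individual solutions b_i / N_i are 2.0, 2.2, and 1.8;
# a better-conditioned NEQ (larger N_i) pulls the combination harder.
x = combine_neq([1.0, 4.0, 2.0], [2.0, 8.8, 3.6], [1.0, 1.0, 1.0])
```

Note that the combined x is not the plain average of the individual solutions: the second AC's larger normal-equation element gives it more influence even at equal weights, which is the point of combining before, rather than after, solving.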
A mathematical model of Staphylococcus aureus control in dairy herds.
Zadoks, R. N.; Allore, H. G.; Hagenaars, T. J.; Barkema, H. W.; Schukken, Y. H.
2002-01-01
An ordinary differential equation model was developed to simulate dynamics of Staphylococcus aureus mastitis. Data to estimate model parameters were obtained from an 18-month observational study in three commercial dairy herds. A deterministic simulation model was constructed to estimate values of the basic (R0) and effective (Rt) reproductive number in each herd, and to examine the effect of management on mastitis control. In all herds R0 was below the threshold value 1, indicating control of contagious transmission. Rt was higher than R0 because recovered individuals were more susceptible to infection than individuals without prior infection history. Disease dynamics in two herds were well described by the model. Treatment of subclinical mastitis and prevention of influx of infected individuals contributed to decrease of S. aureus prevalence. For one herd, the model failed to mimic field observations. Explanations for the discrepancy are given in a discussion of current knowledge and model assumptions. PMID:12403116
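The reproductive-number logic described above can be sketched with a minimal compartmental model. The SIS-type structure, parameter names, and values below are hypothetical stand-ins, not the published herd model (which, among other things, distinguishes recovered individuals with elevated susceptibility).

```python
# Minimal SIS-type sketch of contagious transmission in a herd.
# beta: transmission rate, gamma: recovery rate -- illustrative values,
# not the parameters estimated in the observational study.

def r0(beta, gamma):
    """Basic reproductive number: secondary infections per infectious
    cow introduced into a fully susceptible herd."""
    return beta / gamma

def simulate_sis(beta, gamma, n_cows, i0, dt=0.01, days=365):
    """Forward-Euler integration of dI/dt = beta*S*I/N - gamma*I."""
    i = float(i0)
    for _ in range(int(days / dt)):
        s = n_cows - i
        i += dt * (beta * s * i / n_cows - gamma * i)
    return i

basic_r = r0(0.02, 0.04)                       # 0.5 < 1: outbreak dies out
final = simulate_sis(0.02, 0.04, n_cows=100, i0=5)
```

With R0 below 1, as the abstract reports for all three herds, prevalence decays toward zero rather than sustaining itself.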
MIXREG: a computer program for mixed-effects regression analysis with autocorrelated errors.
Hedeker, D; Gibbons, R D
1996-05-01
MIXREG is a program that provides estimates for a mixed-effects regression model (MRM) for normally-distributed response data including autocorrelated errors. This model can be used for analysis of unbalanced longitudinal data, where individuals may be measured at a different number of timepoints, or even at different timepoints. Autocorrelated errors of a general form or following an AR(1), MA(1), or ARMA(1,1) form are allowable. This model can also be used for analysis of clustered data, where the mixed-effects model assumes data within clusters are dependent. The degree of dependency is estimated jointly with estimates of the usual model parameters, thus adjusting for clustering. MIXREG uses maximum marginal likelihood estimation, utilizing both the EM algorithm and a Fisher-scoring solution. For the scoring solution, the covariance matrix of the random effects is expressed in its Gaussian decomposition, and the diagonal matrix reparameterized using the exponential transformation. Estimation of the individual random effects is accomplished using an empirical Bayes approach. Examples illustrating usage and features of MIXREG are provided.
Improved arrival-date estimates of Arctic-breeding Dunlin (Calidris alpina arcticola)
Doll, Andrew C.; Lanctot, Richard B.; Stricker, Craig A.; Yezerinac, Stephen M.; Wunder, Michael B.
2015-01-01
The use of stable isotopes in animal ecology depends on accurate descriptions of isotope dynamics within individuals. The prevailing assumption that laboratory-derived isotopic parameters apply to free-living animals is largely untested. We used stable carbon isotopes (δ13C) in whole blood from migratory Dunlin (Calidris alpina arcticola) to estimate an in situ turnover rate and individual diet-switch dates. Our in situ results indicated that turnover rates were higher in free-living birds than in the results of an experimental study on captive Dunlin and in estimates derived from a theoretical allometric model. Diet-switch dates from all 3 methods were then used to estimate arrival dates to the Arctic; arrival dates calculated with the in situ turnover rate were later than those with the other turnover-rate estimates, substantially so in some cases. These later arrival dates matched dates when local snow conditions would have allowed Dunlin to settle, and agreed with anticipated arrival dates of Dunlin tracked with light-level geolocators. Our study presents a novel method for accurately estimating arrival dates for individuals of migratory species in which return dates are difficult to document. This may be particularly appropriate for species in which extrinsic tracking devices cannot easily be employed because of cost, body size, or behavioral constraints, and in habitats that do not allow individuals to be detected easily upon first arrival. Thus, this isotopic method offers an exciting alternative approach to better understand how species may be altering their arrival dates in response to changing climatic conditions.
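The diet-switch dating above rests on the standard exponential isotope-turnover model, which the sketch below inverts. The δ13C endpoints and the turnover rate used here are illustrative placeholders, not the Dunlin estimates from the study.

```python
import math

# Exponential turnover model commonly used for diet-switch dating:
# delta(t) = delta_new + (delta_init - delta_new) * exp(-lam * t).
# Endpoint values and the turnover rate lam are illustrative only.

def blood_delta(t, delta_init, delta_new, lam):
    """Predicted blood delta-13C at t days after a diet switch."""
    return delta_new + (delta_init - delta_new) * math.exp(-lam * t)

def days_since_switch(delta_obs, delta_init, delta_new, lam):
    """Invert the turnover curve: time elapsed since the diet switch."""
    frac = (delta_obs - delta_new) / (delta_init - delta_new)
    return -math.log(frac) / lam

# Example: switch from a marine diet (-19 permil) to a terrestrial Arctic
# diet (-26 permil), turnover 0.1/day, observed blood value -24 permil.
t = days_since_switch(-24.0, -19.0, -26.0, 0.1)
```

Given a bird's capture date, subtracting `t` yields the estimated diet-switch (and hence arrival) date; a faster in situ turnover rate shrinks `t` and pushes the inferred arrival later, which is the direction of the effect reported above.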
Hormuth, David A; Skinner, Jack T; Does, Mark D; Yankeelov, Thomas E
2014-05-01
Dynamic contrast enhanced magnetic resonance imaging (DCE-MRI) can quantitatively and qualitatively assess physiological characteristics of tissue. Quantitative DCE-MRI requires an estimate of the time rate of change of the concentration of the contrast agent in the blood plasma, the vascular input function (VIF). Measuring the VIF in small animals is notoriously difficult as it requires high temporal resolution images limiting the achievable number of slices, field-of-view, spatial resolution, and signal-to-noise. Alternatively, a population-averaged VIF could be used to mitigate the acquisition demands in studies aimed to investigate, for example, tumor vascular characteristics. Thus, the overall goal of this manuscript is to determine how the kinetic parameters estimated by a population based VIF differ from those estimated by an individual VIF. Eight rats bearing gliomas were imaged before, during, and after an injection of Gd-DTPA. K(trans), ve, and vp were extracted from signal-time curves of tumor tissue using both individual and population-averaged VIFs. Extended model voxel estimates of K(trans) and ve in all animals had concordance correlation coefficients (CCC) ranging from 0.69 to 0.98 and Pearson correlation coefficients (PCC) ranging from 0.70 to 0.99. Additionally, standard model estimates resulted in CCCs ranging from 0.81 to 0.99 and PCCs ranging from 0.98 to 1.00, supporting the use of a population based VIF if an individual VIF is not available.
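To make the link between a VIF and the fitted parameters concrete, here is a discrete-time sketch of the extended Tofts model, the model family named above, which maps plasma concentration Cp to tissue concentration Ct via K(trans), ve, and vp. The biexponential VIF shape and all numeric values are placeholders, not the study's population VIF or fitted estimates.

```python
import math

# Extended Tofts model, discretized:
# Ct(t_i) = vp*Cp(t_i) + Ktrans * sum_j Cp(t_j) * exp(-(Ktrans/ve)*(t_i - t_j)) * dt
# Setting vp = 0 recovers the standard Tofts model.

def tofts_ct(cp, dt, ktrans, ve, vp=0.0):
    kep = ktrans / ve                     # efflux rate constant
    ct = []
    for i in range(len(cp)):
        integral = sum(cp[j] * math.exp(-kep * (i - j) * dt)
                       for j in range(i + 1))
        ct.append(vp * cp[i] + ktrans * integral * dt)
    return ct

# Placeholder biexponential VIF sampled every 0.1 min.
dt = 0.1
cp = [3.0 * math.exp(-0.5 * k * dt) + 0.5 * math.exp(-0.01 * k * dt)
      for k in range(100)]
ct = tofts_ct(cp, dt, ktrans=0.25, ve=0.3, vp=0.02)
```

Fitting K(trans), ve, and vp then amounts to minimizing the misfit between this forward model and the measured tissue curve; swapping an individual Cp for a population-averaged Cp is exactly the substitution whose effect the study quantifies.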
IVS Tropospheric Parameters: Comparison with DORIS and GPS for CONT02
NASA Technical Reports Server (NTRS)
Schuh, Harald; Snajdrova, Kristyna; Boehm, Johannes; Willis, Pascal; Engelhardt, Gerald; Lanotte, Roberto; Tomasi, Paolo; Negusini, Monia; MacMillan, Daniel; Vereshchagina, Iraida
2004-01-01
In April 2002 the IVS (International VLBI Service for Geodesy and Astrometry) set up the Pilot Project - Tropospheric Parameters, and the Institute of Geodesy and Geophysics (IGG), Vienna, was put in charge of coordinating the project. Seven IVS Analysis Centers have joined the project and regularly submitted their estimates of tropospheric parameters (wet and total zenith delays, horizontal gradients) for all IVS-R1 and IVS-R4 sessions since January 1st, 2002. The individual submissions are combined by a two-step procedure to obtain stable, robust and highly accurate tropospheric parameter time series with one hour resolution (internal accuracy: 2-4 mm). Starting with July 2003, the combined tropospheric estimates became operational IVS products. In the second half of October 2002 the VLBI campaign CONT02 was observed with 8 stations participating around the globe. At four of them (Gilmore Creek, U.S.A.; Hartebeesthoek, South Africa; Kokee Park, U.S.A.; Ny-Alesund, Norway), total zenith delays from DORIS (Doppler Orbitography and Radiopositioning Integrated by Satellite) are also available, and these estimates are compared with those from the IGS (International GPS Service) and the IVS. The distance from the DORIS beacons to the co-located GPS and VLBI stations is around 2 km or less for the four sites mentioned above.
Tornøe, Christoffer W; Overgaard, Rune V; Agersø, Henrik; Nielsen, Henrik A; Madsen, Henrik; Jonsson, E Niclas
2005-08-01
The objective of the present analysis was to explore the use of stochastic differential equations (SDEs) in population pharmacokinetic/pharmacodynamic (PK/PD) modeling. The intra-individual variability in nonlinear mixed-effects models based on SDEs is decomposed into two types of noise: a measurement and a system noise term. The measurement noise represents uncorrelated error due to, for example, assay error while the system noise accounts for structural misspecifications, approximations of the dynamical model, and true random physiological fluctuations. Since the system noise accounts for model misspecifications, the SDEs provide a diagnostic tool for model appropriateness. The focus of the article is on the implementation of the Extended Kalman Filter (EKF) in NONMEM for parameter estimation in SDE models. Various applications of SDEs in population PK/PD modeling are illustrated through a systematic model development example using clinical PK data of the gonadotropin releasing hormone (GnRH) antagonist degarelix. The dynamic noise estimates were used to track variations in model parameters and systematically build an absorption model for subcutaneously administered degarelix. The EKF-based algorithm was successfully implemented in NONMEM for parameter estimation in population PK/PD models described by systems of SDEs. The example indicated that it was possible to pinpoint structural model deficiencies, and that valuable information may be obtained by tracking unexplained variations in parameters.
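The EKF machinery described above can be sketched in its simplest form: a scalar Kalman filter, which is the linear special case of the extended filter the authors implemented in NONMEM. The dynamics, noise intensities, and data below are illustrative assumptions, not the degarelix model.

```python
import math

# Scalar Kalman filter (linear special case of the EKF): state x decays
# as dx = -a*x dt plus system noise of intensity q; observations are
# y = x + measurement noise of variance r. All values are illustrative.

def kalman_filter(ys, a, q, r, x0=0.0, p0=1.0, dt=0.1):
    x, p = x0, p0
    estimates = []
    for y in ys:
        phi = math.exp(-a * dt)        # discrete-time transition
        x = phi * x                    # predict state
        p = phi * p * phi + q * dt     # predict variance (+ system noise)
        k = p / (p + r)                # Kalman gain
        x = x + k * (y - x)            # update with the new observation
        p = (1.0 - k) * p              # update variance
        estimates.append(x)
    return estimates, p

ys = [1.0, 0.9, 0.8, 0.85, 0.7]
xs, p_final = kalman_filter(ys, a=0.5, q=0.05, r=0.1)
```

The decomposition in the abstract maps onto `q` (system noise: model misspecification and true physiological fluctuation) versus `r` (measurement noise: assay error); tracking the one-step prediction residuals of such a filter is what lets unexplained parameter variation be diagnosed.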
Hogan, Thomas J
2012-05-01
The objective was to review recent economic evaluations of influenza vaccination by injection in the US, assess their evidence, and draw conclusions from their collective findings. The literature was searched for economic evaluations of influenza vaccination by injection in healthy working adults in the US published since 1995. Ten evaluations described in nine papers were identified. These were synopsized and their results evaluated, the basic structure of all evaluations was ascertained, and the sensitivity of outcomes to changes in parameter values was explored using a decision model. Areas in which to improve economic evaluations were noted. Eight of nine evaluations with credible economic outcomes were favourable to vaccination, representing a statistically significant result compared with the proportion of 50% that would be expected if vaccination and no vaccination were economically equivalent. Evaluations shared a basic structure, but differed considerably with respect to cost components, assumptions, methods, and parameter estimates. Sensitivity analysis indicated that changes in parameter values within the feasible range, individually or simultaneously, could reverse economic outcomes. Given these misgivings, the methods of estimating the influenza reduction ascribed to vaccination must be researched to confirm that they produce accurate and reliable estimates. Research is also needed to improve estimates of the costs per case of influenza illness and the costs of vaccination. Based on their assumptions, the reviewed papers collectively appear to support the economic benefits of influenza vaccination of healthy adults. Yet the underlying assumptions, methods and parameter estimates themselves warrant further research to confirm they are accurate, reliable and appropriate for economic evaluation purposes.
NASA Technical Reports Server (NTRS)
Diamante, J. M.; Englar, T. S., Jr.; Jazwinski, A. H.
1977-01-01
Estimation theory, which originated in guidance and control research, is applied to the analysis of air quality measurements and atmospheric dispersion models to provide reliable area-wide air quality estimates. A method for low dimensional modeling (in terms of the estimation state vector) of the instantaneous and time-average pollutant distributions is discussed. In particular, the fluctuating plume model of Gifford (1959) is extended to provide an expression for the instantaneous concentration due to an elevated point source. Individual models are also developed for all parameters in the instantaneous and the time-average plume equations, including the stochastic properties of the instantaneous fluctuating plume.
Refusal bias in HIV prevalence estimates from nationally representative seroprevalence surveys.
Reniers, Georges; Eaton, Jeffrey
2009-03-13
To assess the relationship between prior knowledge of one's HIV status and the likelihood of refusing HIV testing in population-based surveys and explore its potential for producing bias in HIV prevalence estimates. Using longitudinal survey data from Malawi, we estimate the relationship between prior knowledge of HIV-positive status and subsequent refusal of an HIV test. We use that parameter to develop a heuristic model of refusal bias that is applied to six Demographic and Health Surveys, in which refusal by HIV status is not observed. The model only adjusts for refusal bias conditional on a completed interview. Ecologically, HIV prevalence, prior testing rates and refusal for HIV testing are highly correlated. Malawian data further suggest that amongst individuals who know their status, HIV-positive individuals are 4.62 (95% confidence interval, 2.60-8.21) times more likely to refuse testing than HIV-negative ones. On the basis of that parameter and other inputs from the Demographic and Health Surveys, our model predicts downward bias in national HIV prevalence estimates ranging from 1.5% (95% confidence interval, 0.7-2.9) for Senegal to 13.3% (95% confidence interval, 7.2-19.6) for Malawi. In absolute terms, bias in HIV prevalence estimates is negligible for Senegal but 1.6 (95% confidence interval, 0.8-2.3) percentage points for Malawi. Downward bias is more severe in urban populations. Because refusal rates are higher in men, seroprevalence surveys also tend to overestimate the female-to-male ratio of infections. Prior knowledge of HIV status informs decisions to participate in seroprevalence surveys. Informed refusals may produce bias in estimates of HIV prevalence and the sex ratio of infections.
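The direction of the bias can be illustrated with a simplified version of such a refusal model: if HIV-positive individuals refuse at `rho` times the rate of HIV-negative ones, observed prevalence among testers understates true prevalence by a computable factor. This sketch is a simplification (it applies the relative risk to everyone, not only to those who know their status), and the refusal rate and observed prevalence below are illustrative, not the paper's survey inputs.

```python
# Simplified refusal-bias model. With true prevalence pi, refusal rate
# f among HIV-negatives, and refusal rate rho*f among HIV-positives,
# the prevalence observed among testers is
#   p_obs = pi*(1 - rho*f) / (pi*(1 - rho*f) + (1 - pi)*(1 - f)).
# Inverting on the odds scale recovers pi from p_obs.

def true_prevalence(p_obs, f_neg, rho):
    odds_obs = p_obs / (1.0 - p_obs)
    odds_true = odds_obs * (1.0 - f_neg) / (1.0 - rho * f_neg)
    return odds_true / (1.0 + odds_true)

# Illustrative inputs: observed prevalence 10%, 5% refusal among
# negatives, relative risk of refusal 4.62 (the point estimate above).
pi = true_prevalence(0.10, 0.05, 4.62)
bias = (pi - 0.10) / pi   # relative downward bias of the naive estimate
```

Even a modest 5% baseline refusal rate produces a double-digit relative understatement here, consistent in direction with the Malawi result reported above.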
Parameter and state estimation in a Neisseria meningitidis model: A study case of Niger
NASA Astrophysics Data System (ADS)
Bowong, S.; Mountaga, L.; Bah, A.; Tewa, J. J.; Kurths, J.
2016-12-01
Neisseria meningitidis (Nm) is a major cause of bacterial meningitis outbreaks in Africa and the Middle East. The availability of yearly reported meningitis cases in the African meningitis belt offers the opportunity to analyze the transmission dynamics and the impact of control strategies. In this paper, we propose a method for the estimation of state variables that are not accessible to measurements and an unknown parameter in a Nm model. We suppose that the yearly number of Nm induced mortality and the total population are known inputs, which can be obtained from data, and the yearly number of new Nm cases is the model output. We also suppose that the Nm transmission rate is an unknown parameter. We first show how the recruitment rate into the population can be estimated using real data of the total population and Nm induced mortality. Then, we use an auxiliary system called observer whose solutions converge exponentially to those of the original model. This observer does not use the unknown infection transmission rate but only uses the known inputs and the model output. This allows us to estimate unmeasured state variables such as the number of carriers that play an important role in the transmission of the infection and the total number of infected individuals within a human community. Finally, we also provide a simple method to estimate the unknown Nm transmission rate. In order to validate the estimation results, numerical simulations are conducted using real data of Niger.
Methods for estimating dispersal probabilities and related parameters using marked animals
Bennetts, R.E.; Nichols, J.D.; Pradel, R.; Lebreton, J.D.; Kitchens, W.M.; Clobert, Jean; Danchin, Etienne; Dhondt, Andre A.; Nichols, James D.
2001-01-01
Deriving valid inferences about the causes and consequences of dispersal from empirical studies depends largely on our ability to reliably estimate parameters associated with dispersal. Here, we present a review of the methods available for estimating dispersal and related parameters using marked individuals. We emphasize methods that place dispersal in a probabilistic framework. In this context, we define a dispersal event as a movement of a specified distance or from one predefined patch to another, the magnitude of the distance or the definition of a 'patch' depending on the ecological or evolutionary question(s) being addressed. We have organized the chapter based on four general classes of data for animals that are captured, marked, and released alive: (1) recovery data, in which animals are recovered dead at a subsequent time, (2) recapture/resighting data, in which animals are either recaptured or resighted alive on subsequent sampling occasions, (3) known-status data, in which marked animals are reobserved alive or dead at specified times with probability 1.0, and (4) combined data, in which data are of more than one type (e.g., live recapture and ring recovery). For each data type, we discuss the data required, the estimation techniques, and the types of questions that might be addressed from studies conducted at single and multiple sites.
A novel approach for estimating ingested dose associated with paracetamol overdose
Zurlinden, Todd J.; Heard, Kennon
2015-01-01
Aim: In cases of paracetamol (acetaminophen, APAP) overdose, an accurate estimate of tissue-specific paracetamol pharmacokinetics (PK) and ingested dose can offer health care providers important information for the individualized treatment and follow-up of affected patients. Here a novel methodology is presented to make such estimates using a standard serum paracetamol measurement and a computational framework. Methods: The core component of the computational framework was a physiologically-based pharmacokinetic (PBPK) model developed and evaluated using an extensive set of human PK data. Bayesian inference was used for parameter and dose estimation, allowing the incorporation of inter-study variability, and facilitating the calculation of uncertainty in model outputs. Results: Simulations of paracetamol time course concentrations in the blood were in close agreement with experimental data under a wide range of dosing conditions. Also, predictions of administered dose showed good agreement with a large collection of clinical and emergency setting PK data over a broad dose range. In addition to dose estimation, the platform was applied for the determination of optimal blood sampling times for dose reconstruction and quantitation of the potential role of paracetamol conjugate measurement on dose estimation. Conclusions: Current therapies for paracetamol overdose rely on a generic methodology involving the use of a clinical nomogram. By using the computational framework developed in this study, serum sample data, and the individual patient's anthropometric and physiological information, personalized serum and liver pharmacokinetic profiles and a dose estimate could be generated to help inform an individualized overdose treatment and follow-up plan. PMID:26441245
Evaluation of incremental reactivity and its uncertainty in Southern California.
Martien, Philip T; Harley, Robert A; Milford, Jana B; Russell, Armistead G
2003-04-15
The incremental reactivity (IR) and relative incremental reactivity (RIR) of carbon monoxide and 30 individual volatile organic compounds (VOC) were estimated for the South Coast Air Basin using two photochemical air quality models: a 3-D, grid-based model and a vertically resolved trajectory model. Both models include an extended version of the SAPRC99 chemical mechanism. For the 3-D modeling, the decoupled direct method (DDM-3D) was used to assess reactivities. The trajectory model was applied to estimate uncertainties in reactivities due to uncertainties in chemical rate parameters, deposition parameters, and emission rates using Monte Carlo analysis with Latin hypercube sampling. For most VOC, RIRs were found to be consistent in rankings with those produced by Carter using a box model. However, 3-D simulations show that coastal regions, upwind of most of the emissions, have comparatively low IR but higher RIR than predicted by box models for C4-C5 alkenes and carbonyls that initiate the production of HOx radicals. Biogenic VOC emissions were found to have a lower RIR than predicted by box model estimates, because emissions of these VOC were mostly downwind of the areas of primary ozone production. Uncertainties in RIR of individual VOC were found to be dominated by uncertainties in the rate parameters of their primary oxidation reactions. The coefficient of variation (COV) of most RIR values ranged from 20% to 30%, whereas the COV of absolute incremental reactivity ranged from about 30% to 40%. In general, uncertainty and variability both decreased when relative rather than absolute reactivity metrics were used.
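The Monte Carlo uncertainty propagation above relied on Latin hypercube sampling; a minimal implementation of that sampling scheme is sketched below. The parameter bounds are arbitrary placeholders, not the SAPRC99 rate, deposition, or emission uncertainty ranges used in the study.

```python
import random

# Latin hypercube sampling: each of the n equal-probability strata of
# every dimension receives exactly one sample, with strata randomly
# paired across dimensions. Parameter bounds here are placeholders.

def latin_hypercube(n_samples, bounds, rng=None):
    rng = rng or random.Random(0)
    dims = len(bounds)
    samples = [[0.0] * dims for _ in range(n_samples)]
    for d, (lo, hi) in enumerate(bounds):
        strata = list(range(n_samples))
        rng.shuffle(strata)                      # random stratum pairing
        for i, s in enumerate(strata):
            u = (s + rng.random()) / n_samples   # point within stratum s
            samples[i][d] = lo + u * (hi - lo)
    return samples

# 10 samples over two hypothetical uncertain parameters.
pts = latin_hypercube(10, [(0.5, 1.5), (0.0, 2.0)])
```

Compared with plain random sampling, this stratification covers each parameter's range evenly with far fewer model runs, which is why it is the standard pairing with Monte Carlo analysis of expensive air quality models.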
Jongerling, Joran; Laurenceau, Jean-Philippe; Hamaker, Ellen L
2015-01-01
In this article we consider a multilevel first-order autoregressive [AR(1)] model with random intercepts, random autoregression, and random innovation variance (i.e., the level 1 residual variance). Including random innovation variance is an important extension of the multilevel AR(1) model for two reasons. First, between-person differences in innovation variance are important from a substantive point of view, in that they capture differences in sensitivity and/or exposure to unmeasured internal and external factors that influence the process. Second, using simulation methods we show that modeling the innovation variance as fixed across individuals, when it should be modeled as a random effect, leads to biased parameter estimates. Additionally, we use simulation methods to compare maximum likelihood estimation to Bayesian estimation of the multilevel AR(1) model and investigate the trade-off between the number of individuals and the number of time points. We provide an empirical illustration by applying the extended multilevel AR(1) model to daily positive affect ratings from 89 married women over the course of 42 consecutive days.
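The generating model described above can be written down directly as a simulation. The population values below are illustrative assumptions; only the sample dimensions (89 persons, 42 days) echo the empirical illustration, and the lognormal form for the random innovation SD is one convenient choice, not necessarily the article's parameterization.

```python
import math
import random

# Multilevel AR(1) with random intercept (mu), random autoregression
# (phi), and random innovation variance, simulated per person:
#   y_t = mu + phi * (y_{t-1} - mu) + e_t,  e_t ~ N(0, innov_sd^2).
# Population values are illustrative.

def simulate_person(n_time, mu, phi, innov_sd, rng):
    y = [mu]
    for _ in range(n_time - 1):
        y.append(mu + phi * (y[-1] - mu) + rng.gauss(0.0, innov_sd))
    return y

def simulate_sample(n_persons=89, n_time=42, seed=1):
    rng = random.Random(seed)
    data = []
    for _ in range(n_persons):
        mu = rng.gauss(3.0, 0.5)                          # random intercept
        phi = min(max(rng.gauss(0.3, 0.1), -0.9), 0.9)    # random AR(1)
        innov_sd = math.exp(rng.gauss(-0.5, 0.3))         # random innovation SD
        data.append(simulate_person(n_time, mu, phi, innov_sd, rng))
    return data

data = simulate_sample()
```

Fitting a model that forces `innov_sd` to be identical across persons to data generated this way is exactly the misspecification whose biasing effect the simulation study demonstrates.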
Recommended Parameter Values for GENII Modeling of Radionuclides in Routine Air and Water Releases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Snyder, Sandra F.; Arimescu, Carmen; Napier, Bruce A.
The GENII v2 code is used to estimate dose to individuals or populations from the release of radioactive materials into air or water. Numerous parameter values are required as input to this code. User-defined parameters cover the spectrum from chemical data, meteorological data, and agricultural data to behavioral data. This document is a summary of parameter values that reflect conditions in the United States. Reasonable regional and age-dependent data are summarized. Data availability and quality vary. The set of parameters described addresses scenarios for chronic air emissions or chronic releases to public waterways. Considerations for the special tritium and carbon-14 models are briefly addressed. GENII v2.10.0 is the current software version that this document supports.
Cella, Matteo; Bishara, Anthony J; Medin, Evelina; Swan, Sarah; Reeder, Clare; Wykes, Til
2014-11-01
Converging research suggests that individuals with schizophrenia show a marked impairment in reinforcement learning, particularly in tasks requiring flexibility and adaptation. The problem has been associated with dopamine reward systems. This study explores, for the first time, the characteristics of this impairment and how it is affected by a behavioral intervention, cognitive remediation. Using computational modelling, 3 reinforcement learning parameters based on Wisconsin Card Sorting Test (WCST) trial-by-trial performance were estimated: R (reward sensitivity), P (punishment sensitivity), and D (choice consistency). In Study 1 the parameters were compared between a group of individuals with schizophrenia (n = 100) and a healthy control group (n = 50). In Study 2 the effect of cognitive remediation therapy (CRT) on these parameters was assessed in 2 groups of individuals with schizophrenia, one receiving CRT (n = 37) and the other receiving treatment as usual (TAU, n = 34). In Study 1, individuals with schizophrenia showed impairment in the R and P parameters compared with healthy controls. Study 2 demonstrated that sensitivity to negative feedback (P) and reward (R) improved in the CRT group after therapy compared with the TAU group. R and P parameter change correlated with WCST outputs. Improvements in R and P after CRT were associated with working memory gains and reduction of negative symptoms, respectively. Schizophrenia reinforcement learning difficulties negatively influence performance in shift learning tasks. CRT can improve sensitivity to reward and punishment. Identifying parameters that show change may be useful in experimental medicine studies to identify cognitive domains susceptible to improvement.
Carlsson, Kristin Cecilie; Hoem, Nils Ove; Glauser, Tracy; Vinks, Alexander A
2005-05-01
Population models can be important extensions of therapeutic drug monitoring (TDM), as they allow estimation of individual pharmacokinetic parameters based on a small number of measured drug concentrations. This study used a Bayesian approach to explore the utility of routinely collected and sparse TDM data (1 sample per patient) for carbamazepine (CBZ) monotherapy in developing a population pharmacokinetic (PPK) model for CBZ in pediatric patients that would allow prediction of CBZ concentrations for both immediate- and controlled-release formulations. Patient and TDM data were obtained from a pediatric neurology outpatient database. Data were analyzed using an iterative 2-stage Bayesian algorithm and a nonparametric adaptive grid algorithm. Models were compared by final log likelihood, mean error (ME) as a measure of bias, and root mean squared error (RMSE) as a measure of precision. Fifty-seven entries with data on CBZ monotherapy were identified from the database and used in the analysis (36 from males, 21 from females; mean [SD] age, 9.1 [4.4] years [range, 2-21 years]). Preliminary models estimating clearance (Cl) or the elimination rate constant (K(el)) gave good prediction of serum concentrations compared with measured serum concentrations, but estimates of Cl and K(el) were highly correlated with estimates of volume of distribution (V(d)). Different covariate models were then tested. The selected model had zero-order input and had age and body weight as covariates. Cl (L/h) was calculated as K(el) . V(d), where K(el) = [K(i) - (K(s) . age)] and V(d) = [V(i) + (V(s) . body weight)]. Median parameter estimates were V(i) (intercept) = 11.5 L (fixed); V(s) (slope) = 0.3957 L/kg (range, 0.01200-1.5730); K(i) (intercept) = 0.173 h(-1) (fixed); and K(s) (slope) = 0.004487 h(-1) . y(-1) (range, 0.0001800-0.02969). 
The fit was good for estimates of steady-state serum concentrations based on prior values (population median estimates) (R = 0.468; R(2) = 0.219) but was even better for predictions based on individual Bayesian posterior values (R(2) = 0.991), with little bias (ME = -0.079) and good precision (RMSE = 0.055). Based on the findings of this study, sparse TDM data can be used for PPK modeling of CBZ clearance in children with epilepsy, and these models can be used to predict Cl at steady state in pediatric patients. However, to estimate additional pharmacokinetic model parameters (eg, the absorption rate constant and V(d)), it would be necessary to combine sparse TDM data with additional well-timed samples. This would allow development of more informative PPK models that could be used as part of Bayesian dose-individualization strategies.
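The bias (ME) and precision (RMSE) measures used above to compare models are straightforward to compute; a minimal sketch:

```python
import math

def mean_error(predicted, observed):
    """Bias measure (ME): average signed difference between model
    predictions and measured concentrations."""
    return sum(p - o for p, o in zip(predicted, observed)) / len(observed)

def root_mean_squared_error(predicted, observed):
    """Precision measure (RMSE): typical magnitude of the prediction error."""
    return math.sqrt(
        sum((p - o) ** 2 for p, o in zip(predicted, observed)) / len(observed)
    )
```

A model can be unbiased (ME near zero) yet imprecise (large RMSE), which is why both are reported.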
Luo, Laiping; Zhai, Qiuping; Su, Yanjun; Ma, Qin; Kelly, Maggi; Guo, Qinghua
2018-05-14
Crown base height (CBH) is an essential tree biophysical parameter for many applications in forest management, forest fuel treatment, wildfire modeling, ecosystem modeling and global climate change studies. Accurate and automatic estimation of CBH for individual trees is still a challenging task. Airborne light detection and ranging (LiDAR) provides reliable and promising data for estimating CBH. Various methods have been developed to calculate CBH indirectly using regression-based means from airborne LiDAR data and field measurements. However, little attention has been paid to directly calculating CBH at the individual tree scale in mixed-species forests without field measurements. In this study, we propose a new method for directly estimating individual-tree CBH from airborne LiDAR data. Our method involves two main strategies: 1) removing noise and understory vegetation for each tree; and 2) estimating CBH by generating a percentile ranking profile for each tree and using a spline curve to identify its inflection points. These two strategies give our method the advantages of requiring no field measurements and of being efficient and effective in mixed-species forests. The proposed method was applied to a mixed conifer forest in the Sierra Nevada, California, and was validated against field measurements. The results showed that our method can directly estimate CBH at the individual tree level with a root-mean-squared error of 1.62 m, a coefficient of determination of 0.88 and a relative bias of 3.36%. Furthermore, we systematically analyzed the accuracies among different height groups and tree species by comparing with field measurements. Our results implied that taller trees had relatively higher uncertainties than shorter trees. Our findings also show that the accuracy of CBH estimation was highest for black oak trees, with an RMSE of 0.52 m. The conifer species results were also good, with uniformly high R² ranging from 0.82 to 0.93.
In general, our method demonstrated high accuracy for individual-tree CBH estimation and strong potential for applications in mixed-species forests over large areas.
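The profile-plus-inflection idea above can be sketched as follows. This is a simplified illustration under stated assumptions: it smooths the sorted height profile with a moving average and finds a sign change of the discrete second derivative, rather than fitting an actual spline as the method described does; the window size and return value are assumptions.

```python
def estimate_cbh(point_heights, window=5):
    """Sketch: sort return heights into a percentile ranking profile,
    smooth it, and take the first convex-to-concave inflection as the
    crown base height."""
    z = sorted(point_heights)
    n = len(z)
    half = window // 2
    # moving-average smoothing of the height-vs-percentile profile
    smooth = [sum(z[max(0, i - half):min(n, i + half + 1)]) /
              len(z[max(0, i - half):min(n, i + half + 1)]) for i in range(n)]
    # discrete second derivative along the profile
    d2 = [smooth[i - 1] - 2 * smooth[i] + smooth[i + 1] for i in range(1, n - 1)]
    for i in range(1, len(d2)):
        if d2[i - 1] > 0 >= d2[i]:  # curvature sign change = inflection
            return smooth[i + 1]
    return z[0]  # fallback: no inflection found
```

On a point cloud with a few low returns and a dense crown above, the detected inflection falls in the transition zone between the two.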
DOE Office of Scientific and Technical Information (OSTI.GOV)
Santhanagopalan, Shriram; White, Ralph E.
Rotating ring disc electrode (RRDE) experiments are a classic tool for investigating the kinetics of electrochemical reactions. Several standardized methods exist for extracting transport parameters and reaction rate constants from RRDE measurements. In this work, we compare some approximate solutions to the convective diffusion equation popularly used in the literature with a rigorous numerical solution of the Nernst-Planck equations coupled to the three-dimensional flow problem. In light of these computational advancements, we explore design aspects of the RRDE that help improve the sensitivity of our parameter estimation procedure to experimental data. We use oxygen reduction in acidic media, involving three charge transfer reactions and a chemical reaction, as an example, and identify ways to isolate reaction currents for the individual processes in order to accurately estimate the exchange current densities.
[Modern principles of the geriatric analysis in medicine].
Volobuev, A N; Zaharova, N O; Romanchuk, N P; Romanov, D V; Romanchuk, P I; Adyshirin-Zade, K A
2016-01-01
The proposed methodological principles of geriatric analysis in medicine make it possible to plan economic parameters of social protection of the population and the necessary amount of medical care financing, and to define the structure of qualified medical personnel training. It is shown that personal health and cognitive longevity depend on an adequate systemic geriatric analysis and on monitoring of biological parameters over time, which allows the efficiency of combined individual treatment to be estimated. The geriatric analysis, and in particular its genetic-mathematical component, aims at a reliable and objective estimation of life expectancy in the country and in the region by accounting for the influence of mutagenic factors both on a person's genes during life and on the population as a whole.
Tutorial: Asteroseismic Stellar Modelling with AIMS
NASA Astrophysics Data System (ADS)
Lund, Mikkel N.; Reese, Daniel R.
The goal of aims (Asteroseismic Inference on a Massive Scale) is to estimate stellar parameters and credible intervals/error bars in a Bayesian manner from a set of asteroseismic frequency data and so-called classical constraints. To achieve reliable parameter estimates and computational efficiency, it searches through a grid of pre-computed models using an MCMC algorithm—interpolation within the grid of models is performed by first tessellating the grid using a Delaunay triangulation and then doing a linear barycentric interpolation on matching simplexes. Inputs for the modelling consist of individual frequencies from peak-bagging, which can be complemented with classical spectroscopic constraints. aims is mostly written in Python with a modular structure to facilitate contributions from the community. Only a few computationally intensive parts have been rewritten in Fortran in order to speed up calculations.
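The linear barycentric interpolation step described above can be illustrated for a single simplex. This is a sketch of the general technique, not AIMS's implementation: in practice the grid is first tessellated (e.g. with a Delaunay triangulation) and the simplex containing the query point is located, steps omitted here.

```python
def barycentric_interpolate(triangle, values, point):
    """Interpolate a precomputed model output inside one 2-D simplex:
    weight each vertex's value by the query point's barycentric
    coordinates, which sum to 1 and are linear in the point."""
    (x1, y1), (x2, y2), (x3, y3) = triangle
    px, py = point
    det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    w1 = ((y2 - y3) * (px - x3) + (x3 - x2) * (py - y3)) / det
    w2 = ((y3 - y1) * (px - x3) + (x1 - x3) * (py - y3)) / det
    w3 = 1.0 - w1 - w2
    return w1 * values[0] + w2 * values[1] + w3 * values[2]
```

Because the weights are linear, the interpolant is exact at the grid vertices and continuous across simplex boundaries, which is what makes the MCMC search over a precomputed grid feasible.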
Mueller, Christina J; White, Corey N; Kuchinke, Lars
2017-11-27
The goal of this study was to replicate findings of diffusion model parameters capturing emotion effects in a lexical decision task and to investigate whether these findings extend to other tasks of implicit emotion processing. Additionally, we were interested in the stability of diffusion model parameters across emotional stimuli and tasks for individual subjects. Responses to words in a lexical decision task were compared with responses to faces in a gender categorization task for stimuli of the emotion categories happy, neutral and fear. Main effects of emotion as well as the stability of emerging response style patterns, as evident in diffusion model parameters across these tasks, were analyzed. Based on earlier findings, drift rates were assumed to be more similar in response to stimuli of the same emotion category than to stimuli of a different emotion category. Results showed that the emotion effects of the two tasks differed, with a processing advantage for happy followed by neutral and fear-related words in the lexical decision task and a processing advantage for neutral followed by happy and fearful faces in the gender categorization task. Both emotion effects were captured in the estimated drift rate parameters, and in the case of the lexical decision task also in the non-decision time parameters. A principal component analysis showed that, contrary to our hypothesis, drift rates were more similar within a specific task context than within a specific emotion category. Individual response patterns of subjects across tasks were evident in significant correlations among diffusion model parameters, including response styles, non-decision times and information accumulation.
Social stress reactivity alters reward and punishment learning
Frank, Michael J.; Allen, John J. B.
2011-01-01
To examine how stress affects cognitive functioning, individual differences in trait vulnerability (punishment sensitivity) and state reactivity (negative affect) to social evaluative threat were examined during concurrent reinforcement learning. Lower trait-level punishment sensitivity predicted better reward learning and poorer punishment learning; the opposite pattern was found in more punishment sensitive individuals. Increasing state-level negative affect was directly related to punishment learning accuracy in highly punishment sensitive individuals, but these measures were inversely related in less sensitive individuals. Combined electrophysiological measurement, performance accuracy and computational estimations of learning parameters suggest that trait and state vulnerability to stress alter cortico-striatal functioning during reinforcement learning, possibly mediated via medio-frontal cortical systems. PMID:20453038
Mortality and the business cycle: Evidence from individual and aggregated data.
van den Berg, Gerard J; Gerdtham, Ulf G; von Hinke, Stephanie; Lindeboom, Maarten; Lissdaniels, Johannes; Sundquist, Jan; Sundquist, Kristina
2017-12-01
There has been much interest recently in the relationship between economic conditions and mortality, with some studies showing that mortality is pro-cyclical, while others find the opposite. Some suggest that the aggregation level of analysis (e.g. individual vs. regional) matters. We use both individual and aggregated data on a sample of 20-64 year-old Swedish men from 1993 to 2007. Our results show that the association between the business cycle and mortality does not depend on the level of analysis: the sign and magnitude of the parameter estimates are similar at the individual level and the aggregate (county) level; both showing pro-cyclical mortality. Copyright © 2017 Elsevier B.V. All rights reserved.
Social stress reactivity alters reward and punishment learning.
Cavanagh, James F; Frank, Michael J; Allen, John J B
2011-06-01
To examine how stress affects cognitive functioning, individual differences in trait vulnerability (punishment sensitivity) and state reactivity (negative affect) to social evaluative threat were examined during concurrent reinforcement learning. Lower trait-level punishment sensitivity predicted better reward learning and poorer punishment learning; the opposite pattern was found in more punishment sensitive individuals. Increasing state-level negative affect was directly related to punishment learning accuracy in highly punishment sensitive individuals, but these measures were inversely related in less sensitive individuals. Combined electrophysiological measurement, performance accuracy and computational estimations of learning parameters suggest that trait and state vulnerability to stress alter cortico-striatal functioning during reinforcement learning, possibly mediated via medio-frontal cortical systems.
Measuring global monopole velocities, one by one
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lopez-Eiguren, Asier; Urrestilla, Jon; Achúcarro, Ana, E-mail: asier.lopez@ehu.eus, E-mail: jon.urrestilla@ehu.eus, E-mail: achucar@lorentz.leidenuniv.nl
We present an estimation of the average velocity of a network of global monopoles in a cosmological setting using large numerical simulations. In order to obtain the value of the velocity, we improve some already known methods and present a new one. This new method estimates individual global monopole velocities in a network by detecting each monopole's position in the lattice and following the path described by each one of them. Using our new estimate we can settle an open question previously posed in the literature: velocity-dependent one-scale (VOS) models for global monopoles predict two branches of scaling solutions, one with monopoles moving at subluminal speeds and one with monopoles moving at luminal speeds. Previous attempts to estimate monopole velocities had large uncertainties and were not able to settle that question. Our simulations find no evidence of a luminal branch. We also estimate the values of the parameters of the VOS model. With our new method we can also study the microphysics of the complicated dynamics of individual monopoles. Finally, we use our large simulation volume to compare the results from the different estimator methods, as well as to assess the validity of the numerical approximations made.
Space Use and Movement of a Neotropical Top Predator: The Endangered Jaguar
Stabach, Jared A.; Fleming, Chris H.; Calabrese, Justin M.; De Paula, Rogério C.; Ferraz, Kátia M. P. M.; Kantek, Daniel L. Z.; Miyazaki, Selma S.; Pereira, Thadeu D. C.; Araujo, Gediendson R.; Paviolo, Agustin; De Angelo, Carlos; Di Bitetti, Mario S.; Cruz, Paula; Lima, Fernando; Cullen, Laury; Sana, Denis A.; Ramalho, Emiliano E.; Carvalho, Marina M.; Soares, Fábio H. S.; Zimbres, Barbara; Silva, Marina X.; Moraes, Marcela D. F.; Vogliotti, Alexandre; May, Joares A.; Haberfeld, Mario; Rampim, Lilian; Sartorello, Leonardo; Ribeiro, Milton C.; Leimgruber, Peter
2016-01-01
Accurately estimating home range and understanding movement behavior can provide important information on ecological processes. Advances in data collection and analysis have improved our ability to estimate home range and movement parameters, both of which have the potential to impact species conservation. Fitting continuous-time movement model to data and incorporating the autocorrelated kernel density estimator (AKDE), we investigated range residency of forty-four jaguars fit with GPS collars across five biomes in Brazil and Argentina. We assessed home range and movement parameters of range resident animals and compared AKDE estimates with kernel density estimates (KDE). We accounted for differential space use and movement among individuals, sex, region, and habitat quality. Thirty-three (80%) of collared jaguars were range resident. Home range estimates using AKDE were 1.02 to 4.80 times larger than KDE estimates that did not consider autocorrelation. Males exhibited larger home ranges, more directional movement paths, and a trend towards larger distances traveled per day. Jaguars with the largest home ranges occupied the Atlantic Forest, a biome with high levels of deforestation and high human population density. Our results fill a gap in the knowledge of the species’ ecology with an aim towards better conservation of this endangered/critically endangered carnivore—the top predator in the Neotropics. PMID:28030568
González, R C; Alvarez, D; López, A M; Alvarez, J C
2009-12-01
It has been reported that spatio-temporal gait parameters can be estimated using an accelerometer to calculate the vertical displacement of the body's centre of gravity. This method has the potential to produce realistic ambulatory estimations of those parameters during unconstrained walking. In this work, we want to evaluate the crude estimations of mean step length so obtained, for their possible application in the construction of an ambulatory walking distance measurement device. Two methods have been tested with a set of volunteers in 20 m excursions. Experimental results show that estimations of walking distance can be obtained with sufficient accuracy and precision for most practical applications (errors of 3.66 +/- 6.24 and 0.96 +/- 5.55%), the main difficulty being inter-individual variability (biggest deviations of 19.70 and 15.09% for each estimator). Also, the results indicate that an inverted pendulum model for the displacement during the single stance phase, and a constant displacement per step during double stance, constitute a valid model for the travelled distance with no need of further adjustments. It allows us to explain the main part of the erroneous distance estimations in different subjects as caused by fundamental limitations of the simple inverted pendulum approach.
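The inverted-pendulum model described above has a closed form: during single stance, a step of length 2·sqrt(2lh − h²) follows from the leg length l and the vertical excursion h of the centre of gravity, and double stance contributes a constant displacement per step. A sketch (the parameterization of the double-stance constant is an assumption consistent with the model as described):

```python
import math

def step_length(leg_length_m, vertical_disp_m, double_stance_const_m=0.0):
    """Inverted-pendulum step length: 2*sqrt(2*l*h - h^2) for single
    stance, plus a constant displacement per step for double stance."""
    l, h = leg_length_m, vertical_disp_m
    return 2.0 * math.sqrt(2.0 * l * h - h * h) + double_stance_const_m

def walked_distance(leg_length_m, vertical_disps_m, double_stance_const_m=0.0):
    """Total travelled distance: sum of per-step estimates, one vertical
    excursion value (from the accelerometer) per detected step."""
    return sum(step_length(leg_length_m, h, double_stance_const_m)
               for h in vertical_disps_m)
```

For example, a 0.9 m leg with a 5 cm vertical excursion gives a single-stance step of about 0.59 m.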
Space Use and Movement of a Neotropical Top Predator: The Endangered Jaguar.
Morato, Ronaldo G; Stabach, Jared A; Fleming, Chris H; Calabrese, Justin M; De Paula, Rogério C; Ferraz, Kátia M P M; Kantek, Daniel L Z; Miyazaki, Selma S; Pereira, Thadeu D C; Araujo, Gediendson R; Paviolo, Agustin; De Angelo, Carlos; Di Bitetti, Mario S; Cruz, Paula; Lima, Fernando; Cullen, Laury; Sana, Denis A; Ramalho, Emiliano E; Carvalho, Marina M; Soares, Fábio H S; Zimbres, Barbara; Silva, Marina X; Moraes, Marcela D F; Vogliotti, Alexandre; May, Joares A; Haberfeld, Mario; Rampim, Lilian; Sartorello, Leonardo; Ribeiro, Milton C; Leimgruber, Peter
2016-01-01
Accurately estimating home range and understanding movement behavior can provide important information on ecological processes. Advances in data collection and analysis have improved our ability to estimate home range and movement parameters, both of which have the potential to impact species conservation. Fitting continuous-time movement model to data and incorporating the autocorrelated kernel density estimator (AKDE), we investigated range residency of forty-four jaguars fit with GPS collars across five biomes in Brazil and Argentina. We assessed home range and movement parameters of range resident animals and compared AKDE estimates with kernel density estimates (KDE). We accounted for differential space use and movement among individuals, sex, region, and habitat quality. Thirty-three (80%) of collared jaguars were range resident. Home range estimates using AKDE were 1.02 to 4.80 times larger than KDE estimates that did not consider autocorrelation. Males exhibited larger home ranges, more directional movement paths, and a trend towards larger distances traveled per day. Jaguars with the largest home ranges occupied the Atlantic Forest, a biome with high levels of deforestation and high human population density. Our results fill a gap in the knowledge of the species' ecology with an aim towards better conservation of this endangered/critically endangered carnivore-the top predator in the Neotropics.
APOSTLE: 11 TRANSIT OBSERVATIONS OF TrES-3b
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kundurthy, P.; Becker, A. C.; Agol, E.
2013-02-10
The Apache Point Survey of Transit Lightcurves of Exoplanets (APOSTLE) observed 11 transits of TrES-3b over two years in order to constrain system parameters and look for transit timing and depth variations. We describe an updated analysis protocol for APOSTLE data, including the reduction pipeline, transit model, and Markov Chain Monte Carlo analyzer. Our estimates of the system parameters for TrES-3b are consistent with previous estimates to within the 2σ confidence level. We improved the errors (by 10%-30%) on system parameters such as the orbital inclination (i_orb), impact parameter (b), and stellar density (ρ*) compared to previous measurements. The near-grazing nature of the system, and incomplete sampling of some transits, limited our ability to place reliable uncertainties on individual transit depths and hence we do not report strong evidence for variability. Our analysis of the transit timing data shows no evidence for transit timing variations and our timing measurements are able to rule out super-Earth and gas giant companions in low-order mean motion resonance with TrES-3b.
An algorithm for intelligent sorting of CT-related dose parameters.
Cook, Tessa S; Zimmerman, Stefan L; Steingall, Scott R; Boonn, William W; Kim, Woojin
2012-02-01
Imaging centers nationwide are seeking innovative means to record and monitor computed tomography (CT)-related radiation dose in light of multiple instances of patient overexposure to medical radiation. As a solution, we have developed RADIANCE, an automated pipeline for extraction, archival, and reporting of CT-related dose parameters. Estimation of whole-body effective dose from CT dose length product (DLP)--an indirect estimate of radiation dose--requires anatomy-specific conversion factors that cannot be applied to total DLP, but instead necessitate individual anatomy-based DLPs. A challenge exists because the total DLP reported on a dose sheet often includes multiple separate examinations (e.g., chest CT followed by abdominopelvic CT). Furthermore, the individual reported series DLPs may not be clearly or consistently labeled. For example, "arterial" could refer to the arterial phase of the triple liver CT or the arterial phase of a CT angiogram. To address this problem, we have designed an intelligent algorithm to parse dose sheets for multi-series CT examinations and correctly separate the total DLP into its anatomic components. The algorithm uses information from the departmental PACS to determine how many distinct CT examinations were concurrently performed. Then, it matches the number of distinct accession numbers to the series that were acquired and anatomically matches individual series DLPs to their appropriate CT examinations. This algorithm allows for more accurate dose analytics, but there remain instances where automatic sorting is not feasible. To ultimately improve radiology patient care, we must standardize series names and exam names to unequivocally sort exams by anatomy and correctly estimate whole-body effective dose.
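The reason anatomy-matched sorting matters is that effective dose is computed per region: each anatomic DLP is multiplied by its own conversion factor before summing. A minimal sketch; the k-factor values below are illustrative adult coefficients of the kind tabulated in AAPM Report 96 and should be verified against the actual reference before any clinical use.

```python
# Illustrative anatomy-specific DLP-to-effective-dose factors (mSv per mGy*cm).
K_FACTORS = {"head": 0.0021, "chest": 0.014, "abdomen_pelvis": 0.015}

def effective_dose(series_dlps):
    """Sum anatomy-matched contributions: each series DLP (mGy*cm) times
    its region's k-factor. Applying a single factor to the total DLP is
    wrong when the dose sheet spans multiple regions, which is why the
    sorting algorithm must first split the total into per-anatomy DLPs."""
    return sum(dlp * K_FACTORS[region] for region, dlp in series_dlps)

# e.g. a chest CT followed by an abdominopelvic CT on one dose sheet:
dose_msv = effective_dose([("chest", 400.0), ("abdomen_pelvis", 600.0)])
```

Here the two examinations contribute 5.6 mSv and 9.0 mSv, whereas a single factor applied to the total DLP of 1000 mGy·cm would misestimate both.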
An algorithm for intelligent sorting of CT-related dose parameters
NASA Astrophysics Data System (ADS)
Cook, Tessa S.; Zimmerman, Stefan L.; Steingal, Scott; Boonn, William W.; Kim, Woojin
2011-03-01
Imaging centers nationwide are seeking innovative means to record and monitor CT-related radiation dose in light of multiple instances of patient over-exposure to medical radiation. As a solution, we have developed RADIANCE, an automated pipeline for extraction, archival and reporting of CT-related dose parameters. Estimation of whole-body effective dose from CT dose-length product (DLP)-an indirect estimate of radiation dose-requires anatomy-specific conversion factors that cannot be applied to total DLP, but instead necessitate individual anatomy-based DLPs. A challenge exists because the total DLP reported on a dose sheet often includes multiple separate examinations (e.g., chest CT followed by abdominopelvic CT). Furthermore, the individual reported series DLPs may not be clearly or consistently labeled. For example, Arterial could refer to the arterial phase of the triple liver CT or the arterial phase of a CT angiogram. To address this problem, we have designed an intelligent algorithm to parse dose sheets for multi-series CT examinations and correctly separate the total DLP into its anatomic components. The algorithm uses information from the departmental PACS to determine how many distinct CT examinations were concurrently performed. Then, it matches the number of distinct accession numbers to the series that were acquired, and anatomically matches individual series DLPs to their appropriate CT examinations. This algorithm allows for more accurate dose analytics, but there remain instances where automatic sorting is not feasible. To ultimately improve radiology patient care, we must standardize series names and exam names to unequivocally sort exams by anatomy and correctly estimate whole-body effective dose.
Lerner, Zachary F; DeMers, Matthew S; Delp, Scott L; Browning, Raymond C
2015-02-26
Understanding degeneration of biological and prosthetic knee joints requires knowledge of the in-vivo loading environment during activities of daily living. Musculoskeletal models can estimate medial/lateral tibiofemoral compartment contact forces, yet anthropometric differences between individuals make accurate predictions challenging. We developed a full-body OpenSim musculoskeletal model with a knee joint that incorporates subject-specific tibiofemoral alignment (i.e. knee varus-valgus) and geometry (i.e. contact locations). We tested the accuracy of our model and determined the importance of these subject-specific parameters by comparing estimated to measured medial and lateral contact forces during walking in an individual with an instrumented knee replacement and post-operative genu valgum (6°). The errors in the predictions of the first peak medial and lateral contact force were 12.4% and 11.9%, respectively, for a model with subject-specific tibiofemoral alignment and contact locations determined through radiographic analysis, vs. 63.1% and 42.0%, respectively, for a model with generic parameters. We found that each degree of tibiofemoral alignment deviation altered the first peak medial compartment contact force by 51N (r(2)=0.99), while each millimeter of medial-lateral translation of the compartment contact point locations altered the first peak medial compartment contact force by 41N (r(2)=0.99). The model, available at www.simtk.org/home/med-lat-knee/, enables the specification of subject-specific joint alignment and compartment contact locations to more accurately estimate medial and lateral tibiofemoral contact forces in individuals with non-neutral alignment. Copyright © 2015 Elsevier Ltd. All rights reserved.
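The sensitivities reported above are linear, so their effect on the first peak medial contact force can be written directly. Treating the two effects as additive is our assumption for illustration; the study reports each sensitivity separately.

```python
def first_peak_medial_force_change(alignment_deg=0.0, contact_shift_mm=0.0):
    """Change in first-peak medial compartment contact force (N) using the
    reported linear sensitivities: ~51 N per degree of varus-valgus
    alignment deviation and ~41 N per millimetre of medial-lateral
    contact point translation (both r^2 = 0.99 in the study)."""
    return 51.0 * alignment_deg + 41.0 * contact_shift_mm
```

For a subject with 2° of alignment deviation, the alignment term alone shifts the predicted peak by roughly 100 N, which explains why generic-parameter models erred by over 60%.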
Lerner, Zachary F.; DeMers, Matthew S.; Delp, Scott L.; Browning, Raymond C.
2015-01-01
Understanding degeneration of biological and prosthetic knee joints requires knowledge of the in-vivo loading environment during activities of daily living. Musculoskeletal models can estimate medial/lateral tibiofemoral compartment contact forces, yet anthropometric differences between individuals make accurate predictions challenging. We developed a full-body OpenSim musculoskeletal model with a knee joint that incorporates subject-specific tibiofemoral alignment (i.e. knee varus-valgus) and geometry (i.e. contact locations). We tested the accuracy of our model and determined the importance of these subject-specific parameters by comparing estimated to measured medial and lateral contact forces during walking in an individual with an instrumented knee replacement and post-operative genu valgum (6°). The errors in the predictions of the first peak medial and lateral contact force were 12.4% and 11.9%, respectively, for a model with subject-specific tibiofemoral alignment and contact locations determined via radiographic analysis, vs. 63.1% and 42.0%, respectively, for a model with generic parameters. We found that each degree of tibiofemoral alignment deviation altered the first peak medial compartment contact force by 51N (r2=0.99), while each millimeter of medial-lateral translation of the compartment contact point locations altered the first peak medial compartment contact force by 41N (r2=0.99). The model, available at www.simtk.org/home/med-lat-knee/, enables the specification of subject-specific joint alignment and compartment contact locations to more accurately estimate medial and lateral tibiofemoral contact forces in individuals with non-neutral alignment. PMID:25595425
Liang, Li-Jung; Weiss, Robert E; Redelings, Benjamin; Suchard, Marc A
2009-10-01
Statistical analyses of phylogenetic data culminate in uncertain estimates of underlying model parameters. Lack of additional data hinders the ability to reduce this uncertainty, as the original phylogenetic dataset is often complete, containing the entire gene or genome information available for the given set of taxa. Informative priors in a Bayesian analysis can reduce posterior uncertainty; however, publicly available phylogenetic software specifies vague priors for model parameters by default. We build objective and informative priors using hierarchical random effect models that combine additional datasets whose parameters are not of direct interest but are similar to the analysis of interest. We propose principled statistical methods that permit more precise parameter estimates in phylogenetic analyses by creating informative priors for parameters of interest. Using additional sequence datasets from our lab or public databases, we construct a fully Bayesian semiparametric hierarchical model to combine datasets. A dynamic iteratively reweighted Markov chain Monte Carlo algorithm conveniently recycles posterior samples from the individual analyses. We demonstrate the value of our approach by examining the insertion-deletion (indel) process in the enolase gene across the Tree of Life using the phylogenetic software BALI-PHY; we incorporate prior information about indels from 82 curated alignments downloaded from the BAliBASE database.
Reppas-Chrysovitsinos, Efstathios; Sobek, Anna; MacLeod, Matthew
2016-06-15
Polymeric materials flowing through the technosphere are repositories of organic chemicals throughout their life cycle. Equilibrium partition ratios of organic chemicals between these materials and air (KMA) or water (KMW) are required for models of fate and transport, high-throughput exposure assessment and passive sampling. KMA and KMW have been measured for a growing number of chemical/material combinations, but significant data gaps still exist. We assembled a database of 363 KMA and 910 KMW measurements for 446 individual compounds and nearly 40 individual polymers and biopolymers, collected from 29 studies. We used the EPI Suite and ABSOLV software packages to estimate physicochemical properties of the compounds and we employed an empirical correlation based on Trouton's rule to adjust the measured KMA and KMW values to a standard reference temperature of 298 K. Then, we used a thermodynamic triangle with Henry's law constant to calculate a complete set of 1273 KMA and KMW values. Using simple linear regression, we developed a suite of single parameter linear free energy relationship (spLFER) models to estimate KMA from the EPI Suite-estimated octanol-air partition ratio (KOA) and KMW from the EPI Suite-estimated octanol-water (KOW) partition ratio. Similarly, using multiple linear regression, we developed a set of polyparameter linear free energy relationship (ppLFER) models to estimate KMA and KMW from ABSOLV-estimated Abraham solvation parameters. We explored the two LFER approaches to investigate (1) their performance in estimating partition ratios, and (2) uncertainties associated with treating all different polymers as a single "bulk" polymeric material compartment. The models we have developed are suitable for screening assessments of the tendency for organic chemicals to be emitted from materials, and for use in multimedia models of the fate of organic chemicals in the indoor environment. 
In screening applications we recommend that KMA and KMW be modeled as 0.06 × KOA and 0.06 × KOW, respectively, with an uncertainty range of a factor of 15.
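The screening recommendation above translates directly into code. This sketch takes log10 KOA as input (as EPI Suite typically reports it) and returns the central estimate bracketed by the stated factor-of-15 uncertainty; the function name and return convention are illustrative.

```python
def screening_kma(log10_koa):
    """Screening estimate of the material-air partition ratio:
    K_MA ~= 0.06 * K_OA, with a factor-of-15 uncertainty range
    either side of the central value."""
    kma = 0.06 * 10.0 ** log10_koa
    return kma / 15.0, kma, kma * 15.0  # (lower, central, upper)
```

The same form applies to KMW with KOW in place of KOA.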
Nonlinear dynamics applied to the study of cardiovascular effects of stress
NASA Astrophysics Data System (ADS)
Anishchenko, T. G.; Igosheva, N. B.
1998-03-01
We study cardiovascular responses to emotional stress in humans and rats using traditional physiological parameters and methods of nonlinear dynamics. We found that emotional stress results in significant changes in the degree of chaos of ECG and blood pressure signals, estimated using a normalized entropy. We demonstrate that the normalized entropy is a more sensitive indicator of stress-induced changes in the cardiovascular system than traditional physiological parameters. Using the normalized entropy, we discovered significant individual differences in cardiovascular stress-reactivity that were impossible to obtain by traditional physiological methods.
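One common way to build such a measure is to normalize the Shannon entropy of the signal's amplitude histogram so it lies in [0, 1]. This is a sketch of that general kind of normalized-entropy estimator, not the authors' exact definition; the bin count is an assumed parameter.

```python
import math

def normalized_entropy(signal, n_bins=16):
    """Shannon entropy of the amplitude histogram, divided by log(n_bins)
    so a constant signal scores 0 and a uniformly spread one scores 1."""
    lo, hi = min(signal), max(signal)
    width = (hi - lo) / n_bins or 1.0  # guard against a constant signal
    counts = [0] * n_bins
    for x in signal:
        counts[min(int((x - lo) / width), n_bins - 1)] += 1
    n = len(signal)
    h = -sum(c / n * math.log(c / n) for c in counts if c)
    return h / math.log(n_bins)
```

Applied to successive windows of an ECG or blood pressure trace, a stress-induced change in this index can appear even when mean rate and pressure barely move.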
Hladikova, K; Ruzickova, I; Klucova, P; Wanner, J
2002-01-01
This paper examines how the physicochemical characteristics of the solids are related to foam formation and describes how the foaming potential of full-scale plants can be assessed. The relations among activated sludge and biological foam hydrophobicity, scum index, aeration tank cover and filamentous population are evaluated. Individual parameter comparison reveals that the scumming intensity can be estimated only on the assumption that foam is already established. None of the above-mentioned characteristics can be reliably used to predict foaming episodes at wastewater treatment plants.
Zhao, Wei; Cella, Massimo; Della Pasqua, Oscar; Burger, David; Jacqz-Aigrain, Evelyne
2012-01-01
AIMS To develop a population pharmacokinetic model for abacavir in HIV-infected infants and toddlers that describes both once and twice daily pharmacokinetic profiles, identify covariates that explain variability, and propose optimal time points to optimize the area under the concentration–time curve (AUC) targeted dosage and individualize therapy. METHODS The pharmacokinetics of abacavir was described with plasma concentrations from 23 patients using nonlinear mixed-effects modelling (NONMEM) software. A two-compartment model with first-order absorption and elimination was developed. The final model was validated using bootstrap, visual predictive check and normalized prediction distribution errors. The Bayesian estimator was validated using the cross-validation and simulation–estimation method. RESULTS The typical population pharmacokinetic parameters and relative standard errors (RSE) were apparent systemic clearance (CL) 13.4 l h−1 (RSE 6.3%), apparent central volume of distribution 4.94 l (RSE 28.7%), apparent peripheral volume of distribution 8.12 l (RSE 14.2%), apparent intercompartment clearance 1.25 l h−1 (RSE 16.9%) and absorption rate constant 0.758 h−1 (RSE 5.8%). The covariate analysis identified weight as the individual factor influencing the apparent oral clearance: CL = 13.4 × (weight/12)^1.14. The maximum a posteriori probability Bayesian estimator, based on three concentrations measured at 0, 1 or 2, and 3 h after drug intake, allowed prediction of individual AUC0–t. CONCLUSIONS The population pharmacokinetic model developed for abacavir in HIV-infected infants and toddlers accurately described both once and twice daily pharmacokinetic profiles. The maximum a posteriori probability Bayesian estimator of AUC0–t was developed from the final model and can be used routinely to optimize individual dosing. PMID:21988586
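The reported covariate relationship is easy to evaluate directly; the sketch below simply restates CL = 13.4 × (weight/12)^1.14 in code (the example weight is illustrative):

```python
def abacavir_clearance(weight_kg):
    """Apparent oral clearance (l/h) from the reported covariate model
    CL = 13.4 * (weight/12)**1.14, with weight in kg and 12 kg as the
    population reference weight."""
    return 13.4 * (weight_kg / 12.0) ** 1.14

# At the reference weight the model returns the typical value of 13.4 l/h;
# heavier children are predicted to clear the drug faster.
cl_typical = abacavir_clearance(12.0)
cl_heavier = abacavir_clearance(20.0)
```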
Associations between feelings of social anxiety and emotion perception.
Lynn, Spencer K; Bui, Eric; Hoeppner, Susanne S; O'Day, Emily B; Palitz, Sophie A; Barrett, Lisa Feldman; Simon, Naomi M
2018-06-01
Abnormally biased perceptual judgment is a feature of many psychiatric disorders. For example, individuals with social anxiety disorder are biased to recall or interpret social events negatively. Cognitive behavioral therapy addresses such bias by teaching patients, via verbal instruction, to become aware of and change pathological misjudgment. The present study examined whether targeting verbal instruction at specific decision parameters that influence perceptual judgment can change anger perception. We used a signal detection framework to decompose anger perception into three decision parameters (base rate of encountering anger vs. no anger, payoff for correct vs. incorrect categorization of face stimuli, and perceptual similarity of angry vs. not-angry facial expressions). We created brief verbal instructions that emphasized each parameter separately. Participants with social anxiety disorder, participants with generalized anxiety disorder, and healthy controls were assigned to one of the three instruction conditions. We compared anger perception pre- vs. post-instruction. Base rate and payoff instructions affected response bias over and above practice effects, across the three groups. There was no interaction with diagnosis. The ability to target specific decision parameters that underlie perceptual judgment suggests that cognitive behavioral therapy might be improved by tailoring it to patients' individual parameter "estimation" deficits. Copyright © 2017 Elsevier Ltd. All rights reserved.
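The decomposition above rests on standard signal detection theory. A minimal sketch of the usual sensitivity (d') and response-bias (criterion c) calculations, assuming the equal-variance Gaussian model rather than the study's exact fitting procedure:

```python
from statistics import NormalDist

def sdt_measures(hit_rate, false_alarm_rate):
    """Equal-variance Gaussian signal detection measures: sensitivity d'
    and response bias (criterion) c from hit and false-alarm rates.
    Positive c indicates a conservative bias (fewer 'angry' responses)."""
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    d_prime = z(hit_rate) - z(false_alarm_rate)
    criterion = -0.5 * (z(hit_rate) + z(false_alarm_rate))
    return d_prime, criterion

# Symmetric performance (hit rate = 1 - false-alarm rate) gives c = 0.
d, c = sdt_measures(0.8, 0.2)
```

Base-rate and payoff manipulations in the study shift the criterion c; perceptual similarity of the stimulus categories limits d'.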
NASA Astrophysics Data System (ADS)
Pathiraja, S. D.; Moradkhani, H.; Marshall, L. A.; Sharma, A.; Geenens, G.
2016-12-01
Effective combination of model simulations and observations through Data Assimilation (DA) depends heavily on uncertainty characterisation. Many traditional methods for quantifying model uncertainty in DA require some level of subjectivity (by way of tuning parameters or by assuming Gaussian statistics). Furthermore, the focus is typically on estimating only the first and second moments. We propose a data-driven methodology to estimate the full distributional form of model uncertainty, i.e. the transition density p(x_t | x_t-1). All sources of uncertainty associated with the model simulations are considered collectively, without needing to devise stochastic perturbations for individual components (such as model input, parameter and structural uncertainty). A training period is used to derive the distribution of errors in observed variables conditioned on hidden states. Errors in hidden states are estimated from the conditional distribution of observed variables using non-linear optimization. The theory behind the framework and case study applications are discussed in detail. Results demonstrate improved predictions and more realistic uncertainty bounds compared to a standard perturbation approach.
Stature estimation from the lengths of the growing foot-a study on North Indian adolescents.
Krishan, Kewal; Kanchan, Tanuj; Passi, Neelam; DiMaggio, John A
2012-12-01
Stature estimation is considered one of the basic parameters of the investigation process for unknown and commingled human remains in medico-legal casework. Race, age and sex are the other parameters which help in this process. Stature estimation is of the utmost importance as it completes the biological profile of a person along with the other three parameters of identification. The present research is intended to formulate standards for stature estimation from foot dimensions in adolescent males from North India and to study the pattern of foot growth during the growing years. 154 male adolescents from the northern part of India were included in the study. Besides stature, five anthropometric measurements, the lengths of the foot from each toe (T1, T2, T3, T4 and T5, respectively) to pternion, were taken on each foot. The data were analyzed statistically using Student's t-test, Pearson's correlation, and linear and multiple regression analysis for estimation of stature and growth of the foot during ages 13-18 years. Correlation coefficients between stature and all the foot measurements were found to be highly significant and positive. Linear regression models and multiple regression models (with age as a co-variable) were derived for estimation of stature from the different measurements of the foot. Multiple regression models (with age as a co-variable) estimate stature with greater accuracy than the linear regression models for the 13-18 years age group. The study shows the growth pattern of the feet in North Indian adolescents and indicates that anthropometric measurements of the foot and its segments are valuable in estimation of stature in growing individuals of that population. Copyright © 2012 Elsevier Ltd. All rights reserved.
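As an illustration of the kind of linear regression model derived in such studies, the sketch below fits a closed-form OLS line to hypothetical (foot length, stature) pairs; the data and resulting coefficients are invented for demonstration and are not the study's standards:

```python
def fit_linear(xs, ys):
    """Closed-form ordinary least-squares fit of y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b  # intercept, slope

# Hypothetical (foot length cm, stature cm) pairs, for illustration only.
foot = [22.0, 23.5, 24.0, 25.5, 26.0]
stature = [150.0, 156.0, 159.0, 165.0, 168.0]
a, b = fit_linear(foot, stature)

# Stature estimate for a new foot length of 24.5 cm
predicted = a + b * 24.5
```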
A Vernacular for Linear Latent Growth Models
ERIC Educational Resources Information Center
Hancock, Gregory R.; Choi, Jaehwa
2006-01-01
In its most basic form, latent growth modeling (latent curve analysis) allows an assessment of individuals' change in a measured variable X over time. For simple linear models, as with other growth models, parameter estimates associated with the a construct (amount of X at a chosen temporal reference point) and b construct (growth in X per unit…
ERIC Educational Resources Information Center
Xu, Zeyu; Nichols, Austin
2010-01-01
The gold standard in making causal inference on program effects is a randomized trial. Most randomization designs in education randomize classrooms or schools rather than individual students. Such "clustered randomization" designs have one principal drawback: They tend to have limited statistical power or precision. This study aims to…
Taper models for commercial tree species in the northeastern United States
James A. Westfall; Charles T. Scott
2010-01-01
A new taper model was developed based on the switching taper model of Valentine and Gregoire; the most substantial changes were reformulation to incorporate estimated join points and modification of a switching function. Random-effects parameters were included that account for within-tree correlations and allow for customized calibration to each individual tree. The...
Accounting for ethnicity in recreation demand: a flexible count data approach
J. Michael Bowker; V.R. Leeworthy
1998-01-01
The authors examine ethnicity and individual trip-taking behavior associated with natural-resource-based recreation in the Florida Keys. Bowker and Leeworthy estimate trip demand using the travel cost method. They then extend this model with a varying parameter adaptation to test the congruency of demand and economic value across white and Hispanic user subgroups...
Iterative integral parameter identification of a respiratory mechanics model.
Schranz, Christoph; Docherty, Paul D; Chiew, Yeong Shiong; Möller, Knut; Chase, J Geoffrey
2012-07-18
Patient-specific respiratory mechanics models can support the evaluation of optimal lung protective ventilator settings during ventilation therapy. Clinical application requires that the individual's model parameter values must be identified with information available at the bedside. Multiple linear regression or gradient-based parameter identification methods are highly sensitive to noise and initial parameter estimates. Thus, they are difficult to apply at the bedside to support therapeutic decisions. An iterative integral parameter identification method is applied to a second order respiratory mechanics model. The method is compared to the commonly used regression methods and error-mapping approaches using simulated and clinical data. The clinical potential of the method was evaluated on data from 13 Acute Respiratory Distress Syndrome (ARDS) patients. The iterative integral method converged to error minima 350 times faster than the Simplex Search Method using simulation data sets and 50 times faster using clinical data sets. Established regression methods reported erroneous results due to sensitivity to noise. In contrast, the iterative integral method was effective independent of initial parameter estimations, and converged successfully in each case tested. These investigations reveal that the iterative integral method is beneficial with respect to computing time, operator independence and robustness, and thus applicable at the bedside for this clinical application.
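The integral reformulation that underlies such methods can be illustrated on a first-order toy system (not the authors' second-order respiratory model): integrating dx/dt = -a*x + b*u from 0 to t gives x(t) - x(0) = -a*X(t) + b*U(t), where X and U are running integrals, so parameter identification becomes a linear problem in smoothed quantities instead of noisy derivatives. All numbers below are illustrative:

```python
def simulate(a, b, u, dt, n):
    """Forward-Euler simulation of dx/dt = -a*x + b*u with x(0) = 0."""
    x = [0.0]
    for _ in range(n):
        x.append(x[-1] + dt * (-a * x[-1] + b * u))
    return x

def cumtrapz(x, dt):
    """Running trapezoidal integral of a sampled signal."""
    out = [0.0]
    for i in range(1, len(x)):
        out.append(out[-1] + 0.5 * dt * (x[i] + x[i - 1]))
    return out

a_true, b_true, u, dt, n = 2.0, 1.0, 1.0, 0.001, 2000
x = simulate(a_true, b_true, u, dt, n)
X = cumtrapz(x, dt)                      # integral of the state
U = [u * i * dt for i in range(len(x))]  # integral of the constant input

# Sample the integral relation x[k] = -a*X[k] + b*U[k] at two instants
# and solve the resulting 2x2 linear system by Cramer's rule.
i, j = n // 2, n
det = -X[i] * U[j] + X[j] * U[i]
a_est = (x[i] * U[j] - x[j] * U[i]) / det
b_est = (X[j] * x[i] - X[i] * x[j]) / det
```

In practice, least squares over many sample instants replaces the two-point solve, which is what makes the approach robust to measurement noise.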
Korjus, Kristjan; Hebart, Martin N.; Vicente, Raul
2016-01-01
Supervised machine learning methods typically require splitting data into multiple chunks for training, validating, and finally testing classifiers. For finding the best parameters of a classifier, training and validation are usually carried out with cross-validation. This is followed by application of the classifier with optimized parameters to a separate test set for estimating the classifier’s generalization performance. With limited data, this separation of test data creates a difficult trade-off between having more statistical power in estimating generalization performance versus choosing better parameters and fitting a better model. We propose a novel approach that we term “Cross-validation and cross-testing” improving this trade-off by re-using test data without biasing classifier performance. The novel approach is validated using simulated data and electrophysiological recordings in humans and rodents. The results demonstrate that the approach has a higher probability of discovering significant results than the standard approach of cross-validation and testing, while maintaining the nominal alpha level. In contrast to nested cross-validation, which is maximally efficient in re-using data, the proposed approach additionally maintains the interpretability of individual parameters. Taken together, we suggest an addition to currently used machine learning approaches which may be particularly useful in cases where model weights do not require interpretation, but parameters do. PMID:27564393
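For reference, the baseline the authors improve on selects parameters by k-fold cross-validation before applying the classifier to a held-out test set. A minimal, stdlib-only sketch of the fold construction (shuffled index splitting is an assumption; the paper does not prescribe one):

```python
import random

def k_fold_indices(n, k, seed=0):
    """Yield (train, validation) index lists for standard k-fold
    cross-validation over n samples."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    for i in range(k):
        val = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, val

# 5-fold split of 10 samples: each sample appears in exactly one
# validation fold, and every (train, val) pair covers all samples.
splits = list(k_fold_indices(10, 5))
```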
Jackson, Charlotte; Mangtani, Punam; Fine, Paul; Vynnycky, Emilia
2014-01-01
Background Changes in children’s contact patterns between termtime and school holidays affect the transmission of several respiratory-spread infections. Transmission of varicella zoster virus (VZV), the causative agent of chickenpox, has also been linked to the school calendar in several settings, but temporal changes in the proportion of young children attending childcare centres may have influenced this relationship. Methods We used two modelling methods (a simple difference equations model and a Time series Susceptible Infectious Recovered (TSIR) model) to estimate fortnightly values of a contact parameter (the per capita rate of effective contact between two specific individuals), using GP consultation data for chickenpox in England and Wales from 1967–2008. Results The estimated contact parameters were 22–31% lower during the summer holiday than during termtime. The relationship between the contact parameter and the school calendar did not change markedly over the years analysed. Conclusions In England and Wales, reductions in contact between children during the school summer holiday lead to a reduction in the transmission of VZV. These estimates are relevant for predicting how closing schools and nurseries may affect an outbreak of an emerging respiratory-spread pathogen. PMID:24932994
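A much-simplified version of the difference-equation idea can be sketched as follows; the mass-action form and the numbers are illustrative assumptions, not the paper's fitted models:

```python
def contact_parameter(cases, susceptibles):
    """Crude per-fortnight contact-parameter estimates from a simple
    mass-action difference equation I[t+1] = beta[t] * S[t] * I[t],
    rearranged as beta[t] = I[t+1] / (S[t] * I[t]). A much-simplified
    stand-in for the paper's models, for illustration only."""
    return [cases[t + 1] / (susceptibles[t] * cases[t])
            for t in range(len(cases) - 1)]

# Illustrative numbers: effective contact drops in a holiday fortnight.
incidence = [100, 120, 90]
susceptible = [10000, 9880]
betas = contact_parameter(incidence, susceptible)
```

In the paper, the analogous quantity is estimated fortnight by fortnight across four decades of GP consultation data, which is what reveals the 22-31% summer-holiday reduction.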
NASA Technical Reports Server (NTRS)
Taylor, Brian R.; Ratnayake, Nalin A.
2010-01-01
As part of an effort to improve emissions, noise, and performance of next generation aircraft, it is expected that future aircraft will make use of distributed, multi-objective control effectors in a closed-loop flight control system. Correlation challenges associated with parameter estimation will arise with this expected aircraft configuration. Research presented in this paper focuses on addressing the correlation problem with an appropriate input design technique and validating this technique through simulation and flight test of the X-48B aircraft. The X-48B aircraft is an 8.5 percent-scale hybrid wing body aircraft demonstrator designed by The Boeing Company (Chicago, Illinois, USA), built by Cranfield Aerospace Limited (Cranfield, Bedford, United Kingdom) and flight tested at the National Aeronautics and Space Administration Dryden Flight Research Center (Edwards, California, USA). Based on data from flight test maneuvers performed at Dryden Flight Research Center, aerodynamic parameter estimation was performed using linear regression and output error techniques. An input design technique that uses temporal separation for de-correlation of control surfaces is proposed, and simulation and flight test results are compared with the aerodynamic database. This paper will present a method to determine individual control surface aerodynamic derivatives.
Weissman-Miller, Deborah
2013-11-02
Point estimation is particularly important in predicting weight loss in individuals or small groups. In this analysis, a new health response function is based on a model of human response over time to estimate long-term health outcomes from a change point in short-term linear regression. This estimation capability is addressed for small groups and single-subject designs in pilot studies for clinical trials and for medical and therapeutic clinical practice. The estimations are based on a change point given by parameters derived from short-term participant data in ordinary least squares (OLS) regression. The development of the change point in initial OLS data and the point estimations are given in a new semiparametric ratio estimator (SPRE) model. The new response function is taken as a ratio of two-parameter Weibull distributions times a prior outcome value that steps estimated outcomes forward in time, where the shape and scale parameters are estimated at the change point. The Weibull distributions used in this ratio are derived from a Kelvin model in mechanics, taken here to represent human beings. A distinct feature of the SPRE model in this article is that initial treatment response for a small group or a single subject is reflected in long-term response to treatment. The model is applied to weight loss in obesity in a secondary analysis of data from a classic weight loss study, selected because of the dramatic increase in obesity in the United States over the past 20 years. A very small relative error between estimated and test data is shown for obesity treatment with the weight loss medication phentermine or placebo for the test dataset. An application of SPRE in clinical medicine or occupational therapy is to estimate long-term weight loss for a single subject or a small group near the beginning of treatment.
Meyer, Karin; Kirkpatrick, Mark
2005-01-01
Principal component analysis is a widely used 'dimension reduction' technique, albeit generally applied at the phenotypic level. It is shown that we can estimate genetic principal components directly through a simple reparameterisation of the usual linear mixed model. This is applicable to any analysis fitting multiple, correlated genetic effects, whether effects for individual traits or sets of random regression coefficients to model trajectories. Depending on the magnitude of genetic correlation, a subset of the principal components generally suffices to capture the bulk of genetic variation. Corresponding estimates of genetic covariance matrices are more parsimonious, have reduced rank and are smoothed, with the number of parameters required to model the dispersion structure reduced from k(k + 1)/2 to m(2k - m + 1)/2 for k effects and m principal components. Estimation of these parameters, the largest eigenvalues and pertaining eigenvectors of the genetic covariance matrix, via restricted maximum likelihood using derivatives of the likelihood, is described. It is shown that reduced rank estimation can substantially reduce the computational requirements of multivariate analyses. An application to the analysis of eight traits recorded via live ultrasound scanning of beef cattle is given. PMID:15588566
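The parameter-count reduction quoted above is straightforward to compute; a small sketch (the choice of m = 3 components for the eight-trait example is illustrative):

```python
def full_rank_params(k):
    """Parameters in an unstructured k x k genetic covariance matrix."""
    return k * (k + 1) // 2

def reduced_rank_params(k, m):
    """Parameters when only the m leading genetic principal components
    (eigenvalues plus eigenvectors) are fitted: m*(2k - m + 1)/2."""
    return m * (2 * k - m + 1) // 2

# For the paper's eight traits, keeping (say) three components:
full = full_rank_params(8)           # 8*9/2 = 36
reduced = reduced_rank_params(8, 3)  # 3*14/2 = 21
```

Setting m = k recovers the full count, confirming the formulas are consistent.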
Ecological risk assessment in a large river-reservoir. 5: Aerial insectivorous wildlife
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baron, L.A.; Sample, B.E.; Suter, G.W. II
Risks to aerial insectivores (e.g., rough-winged swallows, little brown bats, and endangered gray bats) were assessed for the remedial investigation of the Clinch River/Poplar Creek (CR/PC) system. Adult mayflies and sediment were collected from three locations and analyzed for contaminants. Sediment-to-mayfly contaminant uptake factors were generated from these data and used to estimate contaminant concentrations in mayflies from 13 additional locations. Contaminants of potential ecological concern (COPECs) were identified by comparing exposure estimates generated using point estimates of parameter values to NOAELs. To incorporate the variation in exposure parameters and to provide a better estimate of the potential exposure, the exposure model was recalculated using Monte Carlo methods. The potential for adverse effects was estimated based on the comparison of the exposure distribution and the LOAEL. The results of this assessment suggested that population-level effects on rough-winged swallows and little brown bats are unlikely. However, because gray bats are endangered, effects on individuals may be significant from foraging in limited subreaches of the CR/PC system. This assessment illustrates the advantage of an iterative approach to ecological risk assessments, using fewer conservative assumptions and more realistic modeling of exposure.
Estimating the biophysical properties of neurons with intracellular calcium dynamics.
Ye, Jingxin; Rozdeba, Paul J; Morone, Uriel I; Daou, Arij; Abarbanel, Henry D I
2014-06-01
We investigate the dynamics of a conductance-based neuron model coupled to a model of intracellular calcium uptake and release by the endoplasmic reticulum. The intracellular calcium dynamics occur on a time scale that is orders of magnitude slower than voltage spiking behavior. Coupling these mechanisms sets the stage for the appearance of chaotic dynamics, which we observe within certain ranges of model parameter values. We then explore the question of whether one can, using observed voltage data alone, estimate the states and parameters of the voltage plus calcium (V+Ca) dynamics model. We find the answer is negative. Indeed, we show that voltage plus another observed quantity must be known to allow the estimation to be accurate. We show that observing both the voltage time course V(t) and the intracellular Ca time course will permit accurate estimation, and from the estimated model state, accurate prediction after observations are completed. This sets the stage for how one will be able to use a more detailed model of V+Ca dynamics in neuron activity in the analysis of experimental data on individual neurons as well as functional networks in which the nodes (neurons) have these biophysical properties.
2011-01-01
Background Electronic patient records are generally coded using extensive sets of codes, but the significance of the utilisation of individual codes may be unclear. Item response theory (IRT) models are used to characterise the psychometric properties of items included in tests and questionnaires. This study asked whether the properties of medical codes in electronic patient records may be characterised through the application of item response theory models. Methods Data were provided by a cohort of 47,845 participants from 414 family practices in the UK General Practice Research Database (GPRD) with a first stroke between 1997 and 2006. Each eligible stroke code, out of a set of 202 OXMIS and Read codes, was coded as either recorded or not recorded for each participant. A two-parameter IRT model was fitted using marginal maximum likelihood estimation. Estimated parameters from the model were considered to characterise each code with respect to the latent trait of stroke diagnosis. The location parameter is referred to as a calibration parameter, while the slope parameter is referred to as a discrimination parameter. Results There were 79,874 stroke code occurrences available for analysis. Utilisation of codes varied between family practices, with intraclass correlation coefficients of up to 0.25 for the most frequently used codes. IRT analyses were restricted to 110 Read codes. Calibration and discrimination parameters were estimated for 77 (70%) codes that were endorsed for 1,942 stroke patients. Parameters were not estimated for the remaining more frequently used codes. Discrimination parameter values ranged from 0.67 to 2.78, while calibration parameter values ranged from 4.47 to 11.58. The two-parameter model gave a better fit to the data than either the one- or three-parameter models. However, high chi-square values for about a fifth of the stroke codes were suggestive of poor item fit.
Conclusion The application of item response theory models to coded electronic patient records might potentially contribute to identifying medical codes that offer poor discrimination or low calibration. This might indicate the need for improved coding sets or a requirement for improved clinical coding practice. However, in this study estimates were only obtained for a small proportion of participants and there was some evidence of poor model fit. There was also evidence of variation in the utilisation of codes between family practices raising the possibility that, in practice, properties of codes may vary for different coders. PMID:22176509
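The two-parameter model referred to above is the standard two-parameter logistic (2PL) IRT model. A minimal sketch, using the extreme discrimination values reported in the Results (the latent-trait value theta is illustrative):

```python
import math

def two_pl_probability(theta, discrimination, calibration):
    """Two-parameter logistic IRT model: probability that a code is
    recorded for a patient at latent-trait level theta, given the code's
    discrimination (slope) and calibration (location) parameters."""
    return 1.0 / (1.0 + math.exp(-discrimination * (theta - calibration)))

# A code with discrimination 2.78 separates patients around its
# calibration point far more sharply than one with 0.67 (the extremes
# reported above); calibration 4.47 and theta = 5.0 are illustrative.
p_sharp = two_pl_probability(5.0, 2.78, 4.47)
p_flat = two_pl_probability(5.0, 0.67, 4.47)
```

At theta equal to the calibration value, both codes are recorded with probability exactly 0.5, which is what makes the location parameter interpretable as a threshold.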
Iglesias, Juan Eugenio; Sabuncu, Mert Rory; Van Leemput, Koen
2013-10-01
Many segmentation algorithms in medical image analysis use Bayesian modeling to augment local image appearance with prior anatomical knowledge. Such methods often contain a large number of free parameters that are first estimated and then kept fixed during the actual segmentation process. However, a faithful Bayesian analysis would marginalize over such parameters, accounting for their uncertainty by considering all possible values they may take. Here we propose to incorporate this uncertainty into Bayesian segmentation methods in order to improve the inference process. In particular, we approximate the required marginalization over model parameters using computationally efficient Markov chain Monte Carlo techniques. We illustrate the proposed approach using a recently developed Bayesian method for the segmentation of hippocampal subfields in brain MRI scans, showing a significant improvement in an Alzheimer's disease classification task. As an additional benefit, the technique also allows one to compute informative "error bars" on the volume estimates of individual structures. Copyright © 2013 Elsevier B.V. All rights reserved.
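The Markov chain Monte Carlo machinery used for the approximate marginalization can be illustrated with a generic random-walk Metropolis sampler on a toy one-parameter target; the actual segmentation model is far more elaborate, so everything below is a sketch:

```python
import math
import random

def metropolis(log_target, x0, steps, scale, seed=1):
    """Random-walk Metropolis sampler: draws samples whose long-run
    distribution follows exp(log_target). Averaging a quantity over
    these draws approximates marginalizing over the parameter."""
    rng = random.Random(seed)
    x, lt = x0, log_target(x0)
    samples = []
    for _ in range(steps):
        proposal = x + rng.gauss(0.0, scale)
        lt_prop = log_target(proposal)
        # Accept with probability min(1, target(prop)/target(x))
        if math.log(rng.random()) < lt_prop - lt:
            x, lt = proposal, lt_prop
        samples.append(x)
    return samples

# Toy target: a standard-normal posterior over one free parameter.
draws = metropolis(lambda t: -0.5 * t * t, 0.0, 20000, 1.0)

# A marginalized prediction is a Monte Carlo average over parameter draws.
posterior_mean = sum(draws) / len(draws)
```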
Estimation of population trajectories from count data
Link, W.A.; Sauer, J.R.
1997-01-01
Monitoring of changes in animal population size is rarely possible through complete censuses; frequently, the only feasible means of monitoring changes in population size is to use counts of animals obtained by skilled observers as indices to abundance. Analysis of changes in population size can be severely biased if factors related to the acquisition of data are not adequately controlled for. In particular we identify two types of observer effects: these correspond to baseline differences in observer competence, and to changes through time in the ability of individual observers. We present a family of models for count data in which the first of these observer effects is treated as a nuisance parameter. Conditioning on totals of negative binomial counts yields a Dirichlet compound multinomial vector for each observer. Quasi-likelihood is used to estimate parameters related to population trajectory and other parameters of interest; model selection is carried out on the basis of Akaike's information criterion. An example is presented using data on Wood thrush from the North American Breeding Bird Survey.
Improvements in clathrate modelling: I. The H2O-CO2 system with various salts
NASA Astrophysics Data System (ADS)
Bakker, Ronald J.; Dubessy, Jean; Cathelineau, Michel
1996-05-01
The formation of clathrates in fluid inclusions during microthermometric measurements is typical for most natural fluid systems which include a mixture of H2O, gases, and electrolytes. A general model is proposed which gives a complete description of the CO2 clathrate stability field between 253-293 K and 0-200 MPa, and which can be applied to NaCl-, KCl-, and CaCl2-bearing systems. The basic concept of the model is the equality of the chemical potential of H2O in coexisting phases, after classical clathrate modelling. None of the original clathrate models had used a complete set of the most accurate values for the many parameters involved. The lack of well-defined standard conditions and of a thorough error analysis resulted in inaccurate estimation of clathrate stability conditions. According to our modifications, which include the use of the most accurate parameters available, the semi-empirical model for the binary H2O-CO2 system is improved by the estimation of numerically optimised Kihara parameters σ = 365.9 pm and ɛ/k = 174.44 K at low pressures, and σ = 363.92 pm and ɛ/k = 174.46 K at high pressures. Including the error indications of individual parameters involved in clathrate modelling, a range of 365.08-366.52 pm and 171.3-177.8 K allows a 2% accuracy in the modelled CO2 clathrate formation pressure at selected temperatures below Q2 conditions. A combination of the osmotic coefficient for binary salt-H2O systems and Henry's constant for gas-H2O systems is sufficiently accurate to estimate the activity of H2O in aqueous solutions and the stability conditions of clathrate in electrolyte-bearing systems. The available data on salt-bearing systems is inconsistent, but our improved clathrate stability model is able to reproduce average values. The proposed modifications in clathrate modelling can be used to perform more accurate estimations of bulk density and composition of individual fluid inclusions from clathrate melting temperatures.
Our model is included in several computer programs which can be applied to fluid inclusion studies.
Genomic Quantitative Genetics to Study Evolution in the Wild.
Gienapp, Phillip; Fior, Simone; Guillaume, Frédéric; Lasky, Jesse R; Sork, Victoria L; Csilléry, Katalin
2017-12-01
Quantitative genetic theory provides a means of estimating the evolutionary potential of natural populations. However, this approach was previously only feasible in systems where the genetic relatedness between individuals could be inferred from pedigrees or experimental crosses. The genomic revolution opened up the possibility of obtaining the realized proportion of genome shared among individuals in natural populations of virtually any species, promising more accurate estimates of quantitative genetic parameters. Such a 'genomic' quantitative genetics approach relies on fewer assumptions, offers greater methodological flexibility, and is thus expected to greatly enhance our understanding of evolution in natural populations, for example, in the context of adaptation to environmental change, eco-evolutionary dynamics, and biodiversity conservation. Copyright © 2017 Elsevier Ltd. All rights reserved.
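The realized proportion of genome shared among individuals is commonly estimated from SNP data with a genomic relationship matrix. A minimal sketch of VanRaden's first method in Python (the function name and toy genotypes are ours, not from the paper):

```python
import numpy as np

def genomic_relatedness(genotypes):
    """VanRaden (method 1) genomic relationship matrix from an
    (individuals x SNPs) array of 0/1/2 minor-allele counts."""
    p = genotypes.mean(axis=0) / 2.0          # sample allele frequencies
    z = genotypes - 2.0 * p                   # centre each SNP column
    denom = 2.0 * np.sum(p * (1.0 - p))       # expected genotypic variance
    return z @ z.T / denom

rng = np.random.default_rng(0)
geno = rng.integers(0, 3, size=(5, 1000))     # 5 individuals, 1000 SNPs
K = genomic_relatedness(geno)
```

The diagonal of K measures each individual's inbreeding relative to the sample, and off-diagonal entries replace pedigree-based kinship coefficients in mixed-model analyses.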
Section-Based Tree Species Identification Using Airborne LIDAR Point Cloud
NASA Astrophysics Data System (ADS)
Yao, C.; Zhang, X.; Liu, H.
2017-09-01
The application of LiDAR data in forestry initially focused on mapping forest communities, primarily for large-scale forest management and planning. With smaller-footprint, higher-sampling-density LiDAR data now available, detecting individual overstory trees, estimating crown parameters, and identifying tree species have been demonstrated to be practicable. This paper proposes a section-based protocol for tree species identification, taking the palm tree as an example. The section-based method detects objects through profiles along different directions, basically along the X-axis or Y-axis, and improves the utilization of spatial information to generate accurate results. First, tree points are separated from man-made-object points by decision-tree-based rules, and a Crown Height Model (CHM) is created by subtracting the Digital Terrain Model (DTM) from the Digital Surface Model (DSM). Key points are then calculated and extracted to locate individual trees, and species-related tree parameters such as crown height, crown radius, and cross point are estimated. Finally, these parameters are used to identify the tree species. Compared to species information measured on the ground, the proportion of correctly identified trees across all plots reached 90.65%. The identification results demonstrate the ability to distinguish palm trees using LiDAR point clouds. Furthermore, with more prior knowledge, the section-based method enables trees to be classified into different classes.
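The CHM construction and treetop-location steps can be sketched with a naive local-maximum filter; the 2 m height threshold and the toy rasters below are assumptions for illustration, not values from the paper:

```python
import numpy as np

def crown_height_model(dsm, dtm):
    """CHM = DSM - DTM, clipped at zero."""
    return np.maximum(dsm - dtm, 0.0)

def local_maxima(chm, min_height=2.0):
    """Naive treetop detection: interior cells that are the unique
    maximum of their 3x3 neighbourhood and above a height threshold."""
    tops = []
    for i in range(1, chm.shape[0] - 1):
        for j in range(1, chm.shape[1] - 1):
            window = chm[i - 1:i + 2, j - 1:j + 2]
            if (chm[i, j] >= min_height and chm[i, j] == window.max()
                    and (window == chm[i, j]).sum() == 1):
                tops.append((i, j))
    return tops

dsm = np.zeros((5, 5)); dsm[2, 2] = 12.0     # one 12 m return
dtm = np.full((5, 5), 2.0)                   # flat 2 m terrain
chm = crown_height_model(dsm, dtm)           # 10 m peak at (2, 2)
treetops = local_maxima(chm)
```

Real pipelines smooth the CHM and use variable-radius windows, but the DSM-minus-DTM subtraction and peak detection are the core of the step described above.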
Estimating Lion Abundance using N-mixture Models for Social Species
Belant, Jerrold L.; Bled, Florent; Wilton, Clay M.; Fyumagwa, Robert; Mwampeta, Stanslaus B.; Beyer, Dean E.
2016-01-01
Declining populations of large carnivores worldwide, and the complexities of managing human-carnivore conflicts, require accurate population estimates of large carnivores to promote their long-term persistence through well-informed management. We used N-mixture models to estimate lion (Panthera leo) abundance from call-in and track surveys in southeastern Serengeti National Park, Tanzania. Because of potential habituation to broadcasted calls and social behavior, we developed a hierarchical observation process within the N-mixture model conditioning lion detectability on their group response to call-ins and individual detection probabilities. We estimated 270 lions (95% credible interval = 170–551) using call-ins but were unable to estimate lion abundance from track data. We found a weak negative relationship between predicted track density and predicted lion abundance from the call-in surveys. Luminosity was negatively correlated with individual detection probability during call-in surveys. Lion abundance and track density were influenced by landcover, but the directions of the corresponding effects were undetermined. N-mixture models allowed us to incorporate multiple parameters (e.g., landcover, luminosity, observer effect) influencing lion abundance and probability of detection directly into abundance estimates. We suggest that N-mixture models employing a hierarchical observation process can be used to estimate abundance of other social, herding, and grouping species. PMID:27786283
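The basic binomial N-mixture likelihood (Royle's formulation, without the authors' hierarchical group-response extension) marginalizes each site's latent abundance over a Poisson prior. A minimal grid-search sketch with toy counts (all data and parameter grids invented):

```python
import math

def site_loglik(counts, lam, p, n_max=60):
    """Log-likelihood of repeated counts under a binomial N-mixture:
    N_i ~ Poisson(lam), y_ij | N_i ~ Binomial(N_i, p)."""
    total = 0.0
    for site_counts in counts:
        site_prob = 0.0
        for n in range(max(site_counts), n_max):
            log_pois = n * math.log(lam) - lam - math.lgamma(n + 1)
            pr = math.exp(log_pois)
            for y in site_counts:
                pr *= math.comb(n, y) * p ** y * (1 - p) ** (n - y)
            site_prob += pr
        total += math.log(site_prob)
    return total

counts = [[3, 2, 4], [1, 2, 1], [5, 4, 4]]   # 3 sites x 3 visits (toy data)
lam_hat, p_hat = max(
    ((lam, p) for lam in range(1, 20) for p in (0.2, 0.4, 0.6, 0.8)),
    key=lambda pair: site_loglik(counts, *pair))
```

In practice the maximization is done with a proper optimizer and covariates enter through link functions on lam and p, but the marginalized likelihood above is the shared core.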
2014-01-01
Background This paper describes the “EMG Driven Force Estimator (EMGD-FE)”, a Matlab® graphical user interface (GUI) application that estimates skeletal muscle forces from electromyography (EMG) signals. Muscle forces are obtained by numerically integrating a system of ordinary differential equations (ODEs) that simulates Hill-type muscle dynamics and that utilises EMG signals as input. In the current version, the GUI can estimate the forces of lower limb muscles executing isometric contractions. Muscles from other parts of the body can be tested as well, although no default values for model parameters are provided. To achieve accurate evaluations, EMG collection is performed simultaneously with torque measurement from a dynamometer. The computer application guides the user, step-by-step, to pre-process the raw EMG signals, create inputs for the muscle model, numerically integrate the ODEs and analyse the results. Results An example of the application’s functions is presented using the quadriceps femoris muscle. Individual muscle force estimations for the four components as well the knee isometric torque are shown. Conclusions The proposed GUI can estimate individual muscle forces from EMG signals of skeletal muscles. The estimation accuracy depends on several factors, including signal collection and modelling hypothesis issues. PMID:24708668
Menegaldo, Luciano Luporini; de Oliveira, Liliam Fernandes; Minato, Kin K
2014-04-04
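A rough illustration of the Hill-type pipeline the GUI implements: first-order excitation-to-activation dynamics driven by processed EMG, Euler-integrated and scaled by a maximal isometric force. The time constants and force value below are illustrative assumptions, not values from EMGD-FE:

```python
def activation_dynamics(u, a, tau_act=0.01, tau_deact=0.04):
    """First-order excitation-to-activation ODE (illustrative constants)."""
    tau = tau_act if u > a else tau_deact
    return (u - a) / tau

def simulate_isometric_force(excitation, f_max=1000.0, dt=0.001):
    """Euler-integrate activation and scale by maximal isometric force."""
    a, forces = 0.0, []
    for u in excitation:
        a += dt * activation_dynamics(u, a)
        a = min(max(a, 0.0), 1.0)
        forces.append(a * f_max)
    return forces

trace = simulate_isometric_force([1.0] * 500)   # 0.5 s of full excitation
```

A full Hill-type model would also pass activation through force-length and force-velocity relations and a series elastic element; this sketch shows only the ODE-integration step the abstract describes.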
García-Grajales, Jesús; Silva, Alejandra Buenrostro
2014-03-01
Population ecology of Crocodylus acutus (Reptilia: Crocodylidae) in Palmasola lagoon, Oaxaca, Mexico. Abundance and population structure are important parameters for evaluating and comparing the conservation status of a population over time in a given area. This study describes the population abundance and structure of Crocodylus acutus in Palmasola lagoon, Oaxaca. The field work consisted of night surveys during the new moon phase, between 21:00 and 24:00h, conducted during the dry and wet seasons, counting the number of individuals to obtain population estimates. Recorded encounter rates ranged from 32 to 109.3 ind./km over 40 surveys with an average duration of 18 minutes each. The estimated population size using Messel's model ranged from 32.7 to 93 individuals. In both seasons there was a marked dominance of subadults, followed by juveniles and, to a lesser extent, adults, as well as undetermined individuals (i.e., of unknown body size/length). There was also a significant association of juveniles with mangrove areas (26.1%); subadults used superficial water (22.7%) and mangrove areas (15.7%), while adults were observed on superficial water (9.7%). This information contributes to our understanding of the population ecology of C. acutus in the Palmasola lagoon, where the estimated population size seems higher than in other reports for the country.
Developing population models with data from marked individuals
Hae Yeong Ryu,; Kevin T. Shoemaker,; Eva Kneip,; Anna Pidgeon,; Patricia Heglund,; Brooke Bateman,; Thogmartin, Wayne E.; Reşit Akçakaya,
2016-01-01
Population viability analysis (PVA) is a powerful tool for biodiversity assessments, but its use has been limited because of the requirements for fully specified population models such as demographic structure, density-dependence, environmental stochasticity, and specification of uncertainties. Developing a fully specified population model from commonly available data sources – notably, mark–recapture studies – remains complicated due to lack of practical methods for estimating fecundity, true survival (as opposed to apparent survival), natural temporal variability in both survival and fecundity, density-dependence in the demographic parameters, and uncertainty in model parameters. We present a general method that estimates all the key parameters required to specify a stochastic, matrix-based population model, constructed using a long-term mark–recapture dataset. Unlike standard mark–recapture analyses, our approach provides estimates of true survival rates and fecundities, their respective natural temporal variabilities, and density-dependence functions, making it possible to construct a population model for long-term projection of population dynamics. Furthermore, our method includes a formal quantification of parameter uncertainty for global (multivariate) sensitivity analysis. We apply this approach to 9 bird species and demonstrate the feasibility of using data from the Monitoring Avian Productivity and Survivorship (MAPS) program. Bias-correction factors for raw estimates of survival and fecundity derived from mark–recapture data (apparent survival and juvenile:adult ratio, respectively) were non-negligible, and corrected parameters were generally more biologically reasonable than their uncorrected counterparts. Our method allows the development of fully specified stochastic population models using a single, widely available data source, substantially reducing the barriers that have until now limited the widespread application of PVA. 
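A fully specified stochastic population model of the kind described can be sketched as a stage-structured projection with randomly perturbed vital rates. All rates below are invented for illustration, including the assumed halving of juvenile survival:

```python
import random

def project(n0, mean_fec, mean_surv, sd, years, seed=1):
    """Two-stage (juvenile/adult) stochastic matrix projection with
    normally perturbed vital rates, clipped to valid ranges."""
    rnd = random.Random(seed)
    juv, ad = n0
    traj = [juv + ad]
    for _ in range(years):
        f = max(rnd.gauss(mean_fec, sd), 0.0)              # fecundity
        s = min(max(rnd.gauss(mean_surv, sd), 0.0), 1.0)   # adult survival
        # juvenile survival assumed to be half adult survival (illustrative)
        juv, ad = ad * f, juv * s * 0.5 + ad * s
        traj.append(juv + ad)
    return traj

traj = project((50.0, 50.0), mean_fec=1.2, mean_surv=0.7, sd=0.05, years=20)
```

Replicating such trajectories across many seeds yields extinction-risk and abundance distributions, which is the projection step a PVA performs once survival, fecundity, and their temporal variances have been estimated from mark-recapture data.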
This method is expected to greatly enhance our understanding of the processes underlying population dynamics and our ability to analyze viability and project trends for species of conservation concern.
Technical note: Bayesian calibration of dynamic ruminant nutrition models.
Reed, K F; Arhonditsis, G B; France, J; Kebreab, E
2016-08-01
Mechanistic models of ruminant digestion and metabolism have advanced our understanding of the processes underlying ruminant animal physiology. Deterministic modeling practices ignore the inherent variation within and among individual animals and thus have no way to assess how sources of error influence model outputs. We introduce Bayesian calibration of mathematical models to address the need for robust mechanistic modeling tools that can accommodate error analysis by remaining within the bounds of data-based parameter estimation. For the purpose of prediction, the Bayesian approach generates a posterior predictive distribution that represents the current estimate of the value of the response variable, taking into account both the uncertainty about the parameters and model residual variability. Predictions are expressed as probability distributions, thereby conveying significantly more information than point estimates in regard to uncertainty. Our study illustrates some of the technical advantages of Bayesian calibration and discusses the future perspectives in the context of animal nutrition modeling. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
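Bayesian calibration of even a one-parameter model illustrates the idea: a random-walk Metropolis sampler returns a posterior distribution for the parameter rather than a point estimate. A minimal sketch with a flat prior, known residual variance, and toy data (all values ours):

```python
import math, random

def metropolis(x, y, n_iter=5000, sigma=1.0, step=0.2, seed=0):
    """Random-walk Metropolis for the slope theta in y = theta*x + noise,
    with a flat prior and known residual sd (illustrative only)."""
    rnd = random.Random(seed)
    def loglik(theta):
        return -sum((yi - theta * xi) ** 2
                    for xi, yi in zip(x, y)) / (2 * sigma ** 2)
    theta = 0.0
    ll = loglik(theta)
    samples = []
    for _ in range(n_iter):
        prop = theta + rnd.gauss(0.0, step)
        ll_prop = loglik(prop)
        if math.log(rnd.random()) < ll_prop - ll:   # accept uphill, some downhill
            theta, ll = prop, ll_prop
        samples.append(theta)
    return samples

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 8.0, 9.8]            # roughly y = 2x (toy data)
post = metropolis(x, y)
post_mean = sum(post[2500:]) / len(post[2500:])
```

Drawing new residuals around predictions made with each posterior sample would give the posterior predictive distribution the abstract refers to, combining parameter uncertainty with residual variability.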
Unified Computational Methods for Regression Analysis of Zero-Inflated and Bound-Inflated Data
Yang, Yan; Simpson, Douglas
2010-01-01
Bounded data with excess observations at the boundary are common in many areas of application. Various individual cases of inflated mixture models have been studied in the literature for bound-inflated data, yet the computational methods have been developed separately for each type of model. In this article we use a common framework for computing these models, and expand the range of models for both discrete and semi-continuous data with point inflation at the lower boundary. The quasi-Newton and EM algorithms are adapted and compared for estimation of model parameters. The numerical Hessian and generalized Louis method are investigated as means for computing standard errors after optimization. Correlated data are included in this framework via generalized estimating equations. The estimation of parameters and effectiveness of standard errors are demonstrated through simulation and in the analysis of data from an ultrasound bioeffect study. The unified approach enables reliable computation for a wide class of inflated mixture models and comparison of competing models. PMID:20228950
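For the zero-inflated Poisson member of this model class, the EM algorithm alternates between computing each zero's posterior probability of being structural (E-step) and updating the inflation and rate parameters (M-step). A minimal sketch with synthetic data (function name and data ours):

```python
import math

def zip_em(y, n_iter=200):
    """EM for a zero-inflated Poisson mixture; returns (pi, lam),
    the structural-zero probability and the Poisson rate."""
    pi, lam = 0.5, max(sum(y) / len(y), 0.1)
    for _ in range(n_iter):
        # E-step: posterior probability each observed zero is structural
        z = [pi / (pi + (1 - pi) * math.exp(-lam)) if yi == 0 else 0.0
             for yi in y]
        # M-step: update mixture weight and Poisson mean
        pi = sum(z) / len(y)
        lam = (sum((1 - zi) * yi for zi, yi in zip(z, y))
               / sum(1 - zi for zi in z))
    return pi, lam

# synthetic data: 50 structural zeros plus 50 positive counts
y = [0] * 50 + [1, 2, 3, 4, 3, 2, 3, 4, 2, 3] * 5
pi_hat, lam_hat = zip_em(y)
```

The quasi-Newton alternative mentioned in the abstract would instead maximize the marginal log-likelihood directly; both approaches target the same mixture parameters.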
Programmer's manual for MMLE3, a general FORTRAN program for maximum likelihood parameter estimation
NASA Technical Reports Server (NTRS)
Maine, R. E.
1981-01-01
The MMLE3 is a maximum likelihood parameter estimation program capable of handling general bilinear dynamic equations of arbitrary order with measurement noise and/or state noise (process noise). The basic MMLE3 program is quite general and, therefore, applicable to a wide variety of problems. The basic program can interact with a set of user written problem specific routines to simplify the use of the program on specific systems. A set of user routines for the aircraft stability and control derivative estimation problem is provided with the program. The implementation of the program on specific computer systems is discussed. The structure of the program is diagrammed, and the function and operation of individual routines is described. Complete listings and reference maps of the routines are included on microfiche as a supplement. Four test cases are discussed; listings of the input cards and program output for the test cases are included on microfiche as a supplement.
A spatial mark–resight model augmented with telemetry data
Sollmann, Rachel; Gardner, Beth; Parsons, Arielle W.; Stocking, Jessica J.; McClintock, Brett T.; Simons, Theodore R.; Pollock, Kenneth H.; O’Connell, Allan F.
2013-01-01
Abundance and population density are fundamental pieces of information for population ecology and species conservation, but they are difficult to estimate for rare and elusive species. Mark-resight models are popular for estimating population abundance because they are less invasive and expensive than traditional mark-recapture. However, density estimation using mark-resight is difficult because the area sampled must be explicitly defined, historically using ad hoc approaches. We develop a spatial mark-resight model for estimating population density that combines spatial resighting data and telemetry data. Incorporating telemetry data allows us to inform model parameters related to movement and individual location. The model presented here will have widespread utility in future applications, especially for species that are not naturally marked.
Kinetic modelling of anaerobic hydrolysis of solid wastes, including disintegration processes.
García-Gen, Santiago; Sousbie, Philippe; Rangaraj, Ganesh; Lema, Juan M; Rodríguez, Jorge; Steyer, Jean-Philippe; Torrijos, Michel
2015-01-01
A methodology to estimate disintegration and hydrolysis kinetic parameters of solid wastes and validate an ADM1-based anaerobic co-digestion model is presented. Kinetic parameters of the model were calibrated from batch reactor experiments treating individually fruit and vegetable wastes (among other residues) following a new protocol for batch tests. In addition, decoupled disintegration kinetics for readily and slowly biodegradable fractions of solid wastes was considered. Calibrated parameters from batch assays of individual substrates were used to validate the model for a semi-continuous co-digestion operation treating simultaneously 5 fruit and vegetable wastes. The semi-continuous experiment was carried out in a lab-scale CSTR reactor for 15 weeks at organic loading rate ranging between 2.0 and 4.7 gVS/Ld. The model (built in Matlab/Simulink) fit to a large extent the experimental results in both batch and semi-continuous mode and served as a powerful tool to simulate the digestion or co-digestion of solid wastes. Copyright © 2014 Elsevier Ltd. All rights reserved.
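First-order hydrolysis kinetics of the kind calibrated from such batch tests imply a cumulative methane curve B(t) = B0(1 - e^(-kt)). A grid-search least-squares fit of k, shown here on synthetic data (the B0 and k values are invented, not from the study):

```python
import math

def first_order_methane(b0, k, t):
    """Cumulative methane under first-order hydrolysis:
    B(t) = B0 * (1 - exp(-k*t))."""
    return b0 * (1.0 - math.exp(-k * t))

def fit_k(times, obs, b0):
    """Grid-search least-squares estimate of the hydrolysis constant k (1/d)."""
    grid = [i / 100.0 for i in range(1, 101)]
    return min(grid, key=lambda k: sum(
        (first_order_methane(b0, k, t) - y) ** 2
        for t, y in zip(times, obs)))

times = [1, 2, 4, 8, 16]                                    # days
obs = [first_order_methane(400.0, 0.25, t) for t in times]  # synthetic data
k_hat = fit_k(times, obs, b0=400.0)
```

The decoupled-disintegration extension in the paper would fit separate k values for the readily and slowly biodegradable fractions, summing two such curves.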
Theory of Visual Attention (TVA) applied to mice in the 5-choice serial reaction time task.
Fitzpatrick, C M; Caballero-Puntiverio, M; Gether, U; Habekost, T; Bundesen, C; Vangkilde, S; Woldbye, D P D; Andreasen, J T; Petersen, A
2017-03-01
The 5-choice serial reaction time task (5-CSRTT) is widely used to measure rodent attentional functions. In humans, many attention studies in healthy and clinical populations have used testing based on Bundesen's Theory of Visual Attention (TVA) to estimate visual processing speeds and other parameters of attentional capacity. We aimed to bridge these research fields by modifying the 5-CSRTT's design and by mathematically modelling data to derive attentional parameters analogous to human TVA-based measures. C57BL/6 mice were tested in two 1-h sessions on consecutive days with a version of the 5-CSRTT where stimulus duration (SD) probe length was varied based on information from previous TVA studies. Thereafter, a scopolamine hydrobromide (HBr; 0.125 or 0.25 mg/kg) pharmacological challenge was undertaken, using a Latin square design. Mean score values were modelled using a new three-parameter version of TVA to obtain estimates of visual processing speeds, visual thresholds and motor response baselines in each mouse. The parameter estimates for each animal were reliable across sessions, showing that the data were stable enough to support analysis on an individual level. Scopolamine HBr dose-dependently reduced 5-CSRTT attentional performance while also increasing reward collection latency at the highest dose. Upon TVA modelling, scopolamine HBr significantly reduced visual processing speed at both doses, while having less pronounced effects on visual thresholds and motor response baselines. This study shows for the first time how 5-CSRTT performance in mice can be mathematically modelled to yield estimates of attentional capacity that are directly comparable to estimates from human studies.
Laser photogrammetry improves size and demographic estimates for whale sharks
Richardson, Anthony J.; Prebble, Clare E.M.; Marshall, Andrea D.; Bennett, Michael B.; Weeks, Scarla J.; Cliff, Geremy; Wintner, Sabine P.; Pierce, Simon J.
2015-01-01
Whale sharks Rhincodon typus are globally threatened, but a lack of biological and demographic information hampers an accurate assessment of their vulnerability to further decline or capacity to recover. We used laser photogrammetry at two aggregation sites to obtain more accurate size estimates of free-swimming whale sharks compared to visual estimates, allowing improved estimates of biological parameters. Individual whale sharks ranged from 432–917 cm total length (TL) (mean ± SD = 673 ± 118.8 cm, N = 122) in southern Mozambique and from 420–990 cm TL (mean ± SD = 641 ± 133 cm, N = 46) in Tanzania. By combining measurements of stranded individuals with photogrammetry measurements of free-swimming sharks, we calculated length at 50% maturity for males in Mozambique at 916 cm TL. Repeat measurements of individual whale sharks measured over periods from 347–1,068 days yielded implausible growth rates, suggesting that the growth increment over this period was not large enough to be detected using laser photogrammetry, and that the method is best applied to estimating growth rates over longer (decadal) time periods. The sex ratio of both populations was biased towards males (74% in Mozambique, 89% in Tanzania), the majority of which were immature (98% in Mozambique, 94% in Tanzania). The population structure for these two aggregations was similar to most other documented whale shark aggregations around the world. Information on small (<400 cm) whale sharks, mature individuals, and females in this region is lacking, but necessary to inform conservation initiatives for this globally threatened species. PMID:25870776
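Length at 50% maturity is conventionally read off a logistic maturity ogive. The sketch below uses the 916 cm L50 reported above for Mozambican males, with an assumed, purely hypothetical slope parameter:

```python
import math

def maturity_prob(length_cm, l50=916.0, slope=40.0):
    """Logistic maturity ogive: P(mature | length).
    l50 is the reported male L50 for Mozambique; the slope
    (ogive steepness, in cm) is a hypothetical illustration."""
    return 1.0 / (1.0 + math.exp(-(length_cm - l50) / slope))
```

By construction the curve passes through 0.5 at L50; fitting it in practice means estimating both parameters from binary maturity observations against measured lengths.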
External dose assessment in the Ukraine following the Chernobyl accident
NASA Astrophysics Data System (ADS)
Frazier, Remi Jordan Lesartre
While the physiological effects of radiation exposure have been well characterized in general, it remains unclear what the relationship is between large-scale radiological events and psychosocial behavior outcomes in individuals or populations. To investigate this, the National Science Foundation funded a research project in 2008 at the University of Colorado in collaboration with Colorado State University to expand the knowledge of complex interactions between radiation exposure, perception of risk, and psychosocial behavior outcomes by modeling outcomes for a representative sample of the population of the Ukraine which had been exposed to radiocontaminant materials released by the reactor accident at Chernobyl on 26 April 1986. In service of this project, a methodology (based substantially on previously published models specific to the Chernobyl disaster and the Ukrainian population) was developed for daily cumulative effective external dose and dose rate assessment for individuals in the Ukraine as a result of the Chernobyl disaster. A software platform was designed and produced to estimate effective external dose and dose rate for individuals based on their age, occupation, and location of residence on each day between 26 April 1986 and 31 December 2009. A methodology was developed to transform published 137Cs soil deposition contour maps from the Comprehensive Atlas of Caesium Deposition on Europe after the Chernobyl Accident into a geospatial database to access these data as a radiological source term. Cumulative effective external dose and dose rate were computed for each individual in a 703-member cohort of Ukrainians randomly selected to be representative of the population of the country as a whole. Error was estimated for the resulting individual dose and dose rate values with Monte Carlo simulations.
Distributions of input parameters for the dose assessment methodology were compared to computed dose and dose rate estimates to determine which parameters were driving the computed results. The mean external effective dose for all individuals in the cohort due to exposure to radiocontamination from the Chernobyl accident between 26 April 1986 and 31 December 2009 was found to be 1.2 mSv; the geometric mean was 0.84 mSv with a geometric standard deviation of 2.1. The mean value is well below the mean external effective dose expected due to typical background radiation (which in the United States over this time period would be 12.0 mSv). Sensitivity analysis suggests that the greatest driver of the distribution of individual dose estimates is lack of specific information about the daily behavior of each individual, specifically the portion of time each individual spent indoors (and shielded from radionuclides deposited on the soil) versus outdoors (and unshielded).
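The reported geometric mean (0.84 mSv) and geometric standard deviation (2.1) define a lognormal dose distribution that a Monte Carlo simulation can draw from directly. A minimal sketch (sample size and seed are arbitrary choices of ours):

```python
import math, random, statistics

def simulate_doses(gm_msv=0.84, gsd=2.1, n=10000, seed=42):
    """Draw lognormal individual doses matching the cohort's geometric
    mean (mSv) and geometric standard deviation from the text."""
    rnd = random.Random(seed)
    mu, sigma = math.log(gm_msv), math.log(gsd)
    return [math.exp(rnd.gauss(mu, sigma)) for _ in range(n)]

doses = simulate_doses()
gm_hat = math.exp(statistics.fmean(math.log(d) for d in doses))
```

Sampling each input parameter of the dose model this way, and recomputing the dose per draw, is the error-propagation procedure the text describes.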
Restoration of acidic mine spoils with sewage sludge: II measurement of solids applied
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stucky, D.J.; Zoeller, A.L.
1980-01-01
Sewage sludge was incorporated in acidic strip mine spoils at rates equivalent to 0, 224, 336, and 448 dry metric tons (dmt)/ha and placed in pots in a greenhouse. Spoil parameters were determined 48 hours after sludge incorporation, Time Planting (P), and five months after orchardgrass (Dactylis glomerata L.) was planted, Time Harvest (H), in the pots. Parameters measured were: pH, organic matter content (OM), cation exchange capacity (CEC), electrical conductivity (EC), and yield. Values for each parameter were significantly different at the two sampling times. Correlation coefficients were calculated for all parameters versus rates of applied sewage sludge and for all parameters versus each other. Multiple regressions were performed, stepwise, for all parameters versus rates of applied sewage sludge. Equations to predict amounts of sewage sludge incorporated in spoils were derived for individual and multiple parameters. Generally, measurements made at Time P achieved the highest correlation coefficient and multiple correlation coefficient values; therefore, the authors concluded that data from Time P had the greatest predictive value. The most important measured value for predicting the rate of applied sewage sludge was pH, and some additional accuracy was obtained by including CEC in the equation. This experiment indicated that soil properties can be used to estimate the amounts of sewage sludge solids required to reclaim acidic mine spoils and to estimate the quantities incorporated.
Dudaniec, Rachael Y; Worthington Wilmer, Jessica; Hanson, Jeffrey O; Warren, Matthew; Bell, Sarah; Rhodes, Jonathan R
2016-01-01
Landscape genetics lacks explicit methods for dealing with the uncertainty in landscape resistance estimation, which is particularly problematic when sample sizes of individuals are small. Unless uncertainty can be quantified, valuable but small data sets may be rendered unusable for conservation purposes. We offer a method to quantify uncertainty in landscape resistance estimates using multimodel inference as an improvement over single model-based inference. We illustrate the approach empirically using co-occurring, woodland-preferring Australian marsupials within a common study area: two arboreal gliders (Petaurus breviceps, and Petaurus norfolcensis) and one ground-dwelling antechinus (Antechinus flavipes). First, we use maximum-likelihood and a bootstrap procedure to identify the best-supported isolation-by-resistance model out of 56 models defined by linear and non-linear resistance functions. We then quantify uncertainty in resistance estimates by examining parameter selection probabilities from the bootstrapped data. The selection probabilities provide estimates of uncertainty in the parameters that drive the relationships between landscape features and resistance. We then validate our method for quantifying uncertainty using simulated genetic and landscape data showing that for most parameter combinations it provides sensible estimates of uncertainty. We conclude that small data sets can be informative in landscape genetic analyses provided uncertainty can be explicitly quantified. Being explicit about uncertainty in landscape genetic models will make results more interpretable and useful for conservation decision-making, where dealing with uncertainty is critical. © 2015 John Wiley & Sons Ltd.
O’Donnell, Katherine M.; Thompson, Frank R.; Semlitsch, Raymond D.
2015-01-01
Detectability of individual animals is highly variable and nearly always < 1; imperfect detection must be accounted for to reliably estimate population sizes and trends. Hierarchical models can simultaneously estimate abundance and effective detection probability, but there are several different mechanisms that cause variation in detectability. Neglecting temporary emigration can lead to biased population estimates because availability and conditional detection probability are confounded. In this study, we extend previous hierarchical binomial mixture models to account for multiple sources of variation in detectability. The state process of the hierarchical model describes ecological mechanisms that generate spatial and temporal patterns in abundance, while the observation model accounts for the imperfect nature of counting individuals due to temporary emigration and false absences. We illustrate our model’s potential advantages, including the allowance of temporary emigration between sampling periods, with a case study of southern red-backed salamanders Plethodon serratus. We fit our model and a standard binomial mixture model to counts of terrestrial salamanders surveyed at 40 sites during 3–5 surveys each spring and fall 2010–2012. Our models generated similar parameter estimates to standard binomial mixture models. Aspect was the best predictor of salamander abundance in our case study; abundance increased as aspect became more northeasterly. Increased time-since-rainfall strongly decreased salamander surface activity (i.e. availability for sampling), while higher amounts of woody cover objects and rocks increased conditional detection probability (i.e. probability of capture, given an animal is exposed to sampling). By explicitly accounting for both components of detectability, we increased congruence between our statistical modeling and our ecological understanding of the system. 
We stress the importance of choosing survey locations and protocols that maximize species availability and conditional detection probability to increase population parameter estimate reliability. PMID:25775182
PCAN: Probabilistic Correlation Analysis of Two Non-normal Data Sets
Zoh, Roger S.; Mallick, Bani; Ivanov, Ivan; Baladandayuthapani, Veera; Manyam, Ganiraju; Chapkin, Robert S.; Lampe, Johanna W.; Carroll, Raymond J.
2016-01-01
Most cancer research now involves one or more assays profiling various biological molecules, e.g., messenger RNA and micro RNA, in samples collected on the same individuals. The main interest with these genomic data sets lies in the identification of a subset of features that are active in explaining the dependence between platforms. To quantify the strength of the dependency between two variables, correlation is often preferred. However, expression data obtained from next-generation sequencing platforms are integer with very low counts for some important features. In this case, the sample Pearson correlation is not a valid estimate of the true correlation matrix, because the sample correlation estimate between two features/variables with low counts will often be close to zero, even when the natural parameters of the Poisson distribution are, in actuality, highly correlated. We propose a model-based approach to correlation estimation between two non-normal data sets, via a method we call Probabilistic Correlations ANalysis, or PCAN. PCAN takes into consideration the distributional assumption about both data sets and suggests that correlations estimated at the model natural parameter level are more appropriate than correlations estimated directly on the observed data. We demonstrate through a simulation study that PCAN outperforms other standard approaches in estimating the true correlation between the natural parameters. We then apply PCAN to the joint analysis of a microRNA (miRNA) and a messenger RNA (mRNA) expression data set from a squamous cell lung cancer study, finding a large number of negative correlation pairs when compared to the standard approaches. PMID:27037601
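The attenuation the authors describe is easy to reproduce: counts drawn from Poisson distributions with perfectly correlated means show a much weaker sample Pearson correlation when the counts are low. A self-contained simulation, with all constants invented for illustration:

```python
import math, random

def corr(a, b):
    """Sample Pearson correlation."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(va * vb)

def pois(lam, rnd):
    """Knuth's Poisson sampler (adequate for small lambda)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rnd.random()
        if p <= limit:
            return k
        k += 1

rnd = random.Random(7)
theta = [rnd.gauss(0.0, 1.0) for _ in range(2000)]   # shared natural parameter
lam1 = [0.3 * math.exp(0.5 * t) for t in theta]      # low-count Poisson means
lam2 = [0.3 * math.exp(0.5 * t) for t in theta]
y1 = [pois(l, rnd) for l in lam1]
y2 = [pois(l, rnd) for l in lam2]
latent = corr(lam1, lam2)        # 1.0: the means are perfectly correlated
observed = corr(y1, y2)          # strongly attenuated by low counts
```

Estimating correlation at the natural-parameter level, as PCAN does, is aimed precisely at recovering the latent value rather than the attenuated observed one.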
Nonlinear mixed effects modeling of the diurnal blood pressure profile in a multiracial population.
van Rijn-Bikker, Petra C; Snelder, Nelleke; Ackaert, Oliver; van Hest, Reinier M; Ploeger, Bart A; van Montfrans, Gert A; Koopmans, Richard P; Mathôt, Ron A
2013-09-01
Cardiac and cerebrovascular events in hypertensive patients are related to specific features of the 24-hour diurnal blood pressure (BP) profile (i.e., daytime and nighttime BP, nocturnal dip (ND), and morning surge (MS)). This investigation aimed to characterize 24-hour diurnal systolic BP (SBP) with parameters that correlate directly with daytime and nighttime SBP, ND, and MS using nonlinear mixed effects modeling. Ambulatory 24-hour SBP measurements (ABPM) of 196 nontreated subjects from three ethnic groups were available. A population model was parameterized in NONMEM to estimate and evaluate the parameters baseline SBP (BSL), nadir (minimum SBP during the night), and change (SBP difference between day and night). Associations were tested between these parameters and patient-related factors to explain interindividual variability. The diurnal SBP profile was adequately described as the sum of 2 cosine functions. The following typical values (interindividual variability) were found: BSL = 139 mm Hg (11%); nadir = 122 mm Hg (14%); change = 25 mm Hg (52%), and residual error = 12 mm Hg. The model parameters correlate well with daytime and nighttime SBP, ND, and MS (R² = 0.50–0.92). During covariate analysis, ethnicity was found to be associated with change; change was 40% higher in white Dutch subjects and 26.8% higher in South Asians than in blacks. The developed population model allows simultaneous estimation of BSL, nadir, and change for all individuals in the investigated population, regardless of the individual number of SBP measurements. Ethnicity was associated with change. The model provides a tool to evaluate and optimize the sampling frequency for 24-hour ABPM.
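A two-cosine profile (24 h fundamental plus 12 h harmonic) is linear in its coefficients, so the structural model can be sketched with ordinary least squares; the paper's NONMEM parameterization (BSL, nadir, change) and the mixed-effects machinery are not reproduced here, and the synthetic coefficients are illustrative.

```python
import numpy as np

def fit_two_cosines(t_h, sbp):
    """Least-squares fit of mean + 24 h and 12 h harmonics to SBP samples."""
    w = 2.0 * np.pi / 24.0
    X = np.column_stack([np.ones_like(t_h),
                         np.cos(w * t_h), np.sin(w * t_h),
                         np.cos(2 * w * t_h), np.sin(2 * w * t_h)])
    coef, *_ = np.linalg.lstsq(X, sbp, rcond=None)
    return coef

# synthetic noise-free profile: mean 130 mmHg plus 24 h and 12 h components
t = np.arange(0.0, 24.0, 0.5)
w = 2.0 * np.pi / 24.0
sbp = (130.0 + 8.0 * np.cos(w * t) - 3.0 * np.sin(w * t)
       + 2.0 * np.cos(2 * w * t) + 1.0 * np.sin(2 * w * t))
coef = fit_two_cosines(t, sbp)
```

Because the model is linear in the coefficients, a noise-free profile is recovered exactly; individual-level random effects would sit on top of this structural fit.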
More data, less information? Potential for nonmonotonic information growth using GEE.
Shoben, Abigail B; Rudser, Kyle D; Emerson, Scott S
2017-01-01
Statistical intuition suggests that increasing the total number of observations available for analysis should increase the precision with which parameters can be estimated. Such monotonic growth of statistical information is of particular importance when data are analyzed sequentially, such as in confirmatory clinical trials. However, monotonic information growth is not always guaranteed, even when using a valid, but inefficient estimator. In this article, we demonstrate the theoretical possibility of nonmonotonic information growth when using generalized estimating equations (GEE) to estimate a slope and provide intuition for why this possibility exists. We use theoretical and simulation-based results to characterize situations that may result in nonmonotonic information growth. Nonmonotonic information growth is most likely to occur when (1) accrual is fast relative to follow-up on each individual, (2) correlation among measurements from the same individual is high, and (3) measurements are becoming more variable further from randomization. In situations that may lead to nonmonotonic information growth, study designers should plan interim analyses to avoid situations most likely to result in nonmonotonic information growth.
Queuing Time Prediction Using WiFi Positioning Data in an Indoor Scenario.
Shu, Hua; Song, Ci; Pei, Tao; Xu, Lianming; Ou, Yang; Zhang, Libin; Li, Tao
2016-11-22
Queuing is common in urban public places. Automatically monitoring and predicting queuing time can not only help individuals to reduce their wait time and alleviate anxiety but also help managers to allocate resources more efficiently and enhance their ability to address emergencies. This paper proposes a novel method to estimate and predict queuing time in indoor environments based on WiFi positioning data. First, we use a series of parameters to identify the trajectories that can be used as representatives of queuing time. Next, we divide the day into equal time slices and estimate individuals' average queuing time during specific time slices. Finally, we build a nonstandard autoregressive (NAR) model trained using the previous day's WiFi estimation results and actual queuing time to predict the queuing time in the upcoming time slice. A case study comparing two other time series analysis models shows that the NAR model has better precision. Random topological errors caused by the drift phenomenon of WiFi positioning technology (locations determined by a WiFi positioning system may drift accidently) and systematic topological errors caused by the positioning system are the main factors that affect the estimation precision. Therefore, we optimize the deployment strategy during the positioning system deployment phase and propose a drift ratio parameter pertaining to the trajectory screening phase to alleviate the impact of topological errors and improve estimates. The WiFi positioning data from an eight-day case study conducted at the T3-C entrance of Beijing Capital International Airport show that the mean absolute estimation error is 147 s, which is approximately 26.92% of the actual queuing time. For predictions using the NAR model, the proportion is approximately 27.49%. The theoretical predictions and the empirical case study indicate that the NAR model is an effective method to estimate and predict queuing time in indoor public areas.
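The abstract does not specify the NAR model's exact form; as a stand-in, a plain autoregressive predictor fit by least squares shows the general shape of such a one-step-ahead forecast of the next time slice. The function names and the AR order are illustrative.

```python
import numpy as np

def fit_ar(series, order=2):
    """Least-squares AR(order): y[t] ~ c + sum_k a_k * y[t-k]."""
    y = np.asarray(series, dtype=float)
    n = len(y)
    X = np.column_stack([np.ones(n - order)] +
                        [y[order - k:n - k] for k in range(1, order + 1)])
    coef, *_ = np.linalg.lstsq(X, y[order:], rcond=None)
    return coef

def predict_next(series, coef):
    """One-step-ahead forecast of the next time slice."""
    order = len(coef) - 1
    lags = [series[-k] for k in range(1, order + 1)]
    return coef[0] + sum(a * v for a, v in zip(coef[1:], lags))

# exact AR(1) queuing-time series (seconds): y[t] = 2 + 0.5 * y[t-1]
series = [10.0]
for _ in range(19):
    series.append(2.0 + 0.5 * series[-1])
coef = fit_ar(series, order=1)
```

On a noise-free AR(1) series the fit recovers the generating coefficients exactly, so `predict_next` reproduces the true next value.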
Tran, Phuong; Yoo, Hee-Doo; Ngo, Lien; Cho, Hea-Young; Lee, Yong-Bok
2017-12-01
The objective of this study was to perform population pharmacokinetic (PK) analysis of gabapentin in healthy Korean subjects and to investigate the possible effect of genetic polymorphisms (1236C > T, 2677G > T/A, and 3435C > T) of the ABCB1 gene on PK parameters of gabapentin. Data were collected from bioequivalence studies, in which 173 subjects orally received three different doses of gabapentin (300, 400, and 800 mg). Only data from the reference formulation were used. Population pharmacokinetics (PKs) of gabapentin was estimated using a nonlinear mixed-effects model (NONMEM). Gabapentin showed considerable inter-individual variability (from 5.2- to 8.7-fold) in PK parameters. Serum concentration of gabapentin was well fitted by a one-compartment model with first-order absorption and lag time. An inhibitory Emax model was applied to describe the effect of dose on bioavailability. The oral clearance was estimated to be 11.1 L/h. The volume of distribution was characterized as 81.0 L. The absorption rate constant was estimated at 0.860 h⁻¹, and the lag time was predicted at 0.311 h. Oral bioavailability was estimated to be 68.8% at a dose of 300 mg, 62.7% at a dose of 400 mg, and 47.1% at a dose of 800 mg. The creatinine clearance significantly influenced the oral clearance (P < 0.005) and ABCB1 2677G > T/A genotypes significantly influenced the absorption rate constant (P < 0.05) of gabapentin. However, ABCB1 1236C > T and 3435C > T genotypes showed no significant effect on gabapentin PK parameters. The results of the present study indicate that the oral bioavailability of gabapentin is decreased when its dosage is increased. In addition, ABCB1 2677G > T/A polymorphism can explain the substantial inter-individual variability in the absorption of gabapentin.
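The reported typical values plug directly into the standard one-compartment, first-order-absorption concentration equation with a lag time. This sketch uses the reported dose-specific bioavailability values rather than the fitted inhibitory Emax parameters, which the abstract does not give; it describes the typical individual only, without the random effects.

```python
import math

# reported typical values: clearance (L/h), volume (L), ka (1/h), lag time (h)
CL, V, KA, TLAG = 11.1, 81.0, 0.860, 0.311
# reported dose-dependent oral bioavailability (Emax parameters not reported)
F_BY_DOSE = {300: 0.688, 400: 0.627, 800: 0.471}

def conc_mg_per_L(dose_mg, t_h):
    """Standard one-compartment model with first-order absorption and lag time."""
    if t_h <= TLAG:
        return 0.0
    ke = CL / V                     # elimination rate constant
    tt = t_h - TLAG
    f = F_BY_DOSE[dose_mg]
    return (f * dose_mg * KA / (V * (KA - ke))
            * (math.exp(-ke * tt) - math.exp(-KA * tt)))
```

With these values the typical concentration peaks a few hours after dosing (tmax = TLAG + ln(ka/ke)/(ka − ke) ≈ 2.9 h) and then declines in the elimination phase.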
Roumet, M; Ostrowski, M-F; David, J; Tollon, C; Muller, M-H
2012-01-01
Cultivated plants have been molded by human-induced selection, including manipulations of the mating system in the twentieth century. How these manipulations have affected realized parameters of the mating system in freely evolving cultivated populations is of interest for optimizing the management of breeding populations, predicting the fate of escaped populations and providing material for experimental evolution studies. To produce modern varieties of sunflower (Helianthus annuus L.), self-incompatibility has been broken, recurrent generations of selfing have been performed and male sterility has been introduced. Populations deriving from hybrid-F1 varieties are gynodioecious because of the segregation of a nuclear restorer of male fertility. Using both phenotypic and genotypic data at 11 microsatellite loci, we analyzed the consanguinity status of plants of the first three generations of such a population and estimated parameters related to the mating system. We showed that the resource reallocation to seed in male-sterile individuals was not significant, that inbreeding depression on seed production averaged 15–20% and that cultivated sunflower had acquired a mixed-mating system, with ∼50% of selfing among the hermaphrodites. According to theoretical models, the female advantage and the inbreeding depression at the seed production stage were too low to allow the persistence of male sterility. We discuss our methods of parameter estimation and the potential of such study system in evolutionary biology. PMID:21915147
Tyne, Julian A.; Pollock, Kenneth H.; Johnston, David W.; Bejder, Lars
2014-01-01
Reliable population estimates are critical to implement effective management strategies. The Hawai’i Island spinner dolphin (Stenella longirostris) is a genetically distinct stock that displays a rigid daily behavioural pattern, foraging offshore at night and resting in sheltered bays during the day. Consequently, they are exposed to frequent human interactions and disturbance. We estimated population parameters of this spinner dolphin stock using a systematic sampling design and capture–recapture models. From September 2010 to August 2011, boat-based photo-identification surveys were undertaken monthly over 132 days (>1,150 hours of effort; >100,000 dorsal fin images) in the four main resting bays along the Kona Coast, Hawai’i Island. All images were graded according to photographic quality and distinctiveness. Over 32,000 images were included in the analyses, from which 607 distinctive individuals were catalogued and 214 were highly distinctive. Two independent estimates of the proportion of highly distinctive individuals in the population were not significantly different (p = 0.68). Individual heterogeneity and time variation in capture probabilities were strongly indicated for these data; therefore capture–recapture models allowing for these variations were used. The estimated annual apparent survival rate (product of true survival and permanent emigration) was 0.97 ± 0.05 SE. Open and closed capture–recapture models for the highly distinctive individuals photographed at least once each month produced similar abundance estimates. An estimate of 221 ± 4.3 SE highly distinctive spinner dolphins resulted in a total abundance of 631 ± 60.1 SE (95% CI 524–761) spinner dolphins in the Hawai’i Island stock, which is lower than previous estimates.
When this abundance estimate is considered alongside the rigid daily behavioural pattern, genetic distinctiveness, and the ease of human access to spinner dolphins in their preferred resting habitats, this Hawai’i Island stock is likely more vulnerable to negative impacts from human disturbance than previously believed. PMID:24465917
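The total-abundance step scales the mark-recapture estimate for highly distinctive animals by the estimated proportion of highly distinctive individuals. The sketch below uses a delta-method standard error as an illustration; the paper's exact variance calculation and its proportion estimate may differ slightly (using 214/607 here yields ~627 rather than the reported 631).

```python
import math

def total_abundance(n_hd, se_n_hd, theta, se_theta):
    """Scale the distinctive-animal estimate by the distinctiveness proportion.

    Delta-method standard error (illustrative, not necessarily the paper's method).
    """
    n_total = n_hd / theta
    var = (se_n_hd / theta) ** 2 + (n_hd * se_theta / theta ** 2) ** 2
    return n_total, math.sqrt(var)

# 221 highly distinctive dolphins; 214 of 607 catalogued were highly distinctive
n, se = total_abundance(221.0, 4.3, 214.0 / 607.0, 0.02)
```

Note that uncertainty in the proportion dominates the combined standard error, which is why the total estimate's SE (60.1 in the paper) is much larger than the distinctive-animal SE of 4.3.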
Goñi, Joaquín; Sporns, Olaf; Cheng, Hu; Aznárez-Sanado, Maite; Wang, Yang; Josa, Santiago; Arrondo, Gonzalo; Mathews, Vincent P; Hummer, Tom A; Kronenberger, William G; Avena-Koenigsberger, Andrea; Saykin, Andrew J.; Pastor, María A.
2013-01-01
High-resolution isotropic three-dimensional reconstructions of human brain gray and white matter structures can be characterized to quantify aspects of their shape, volume and topological complexity. In particular, methods based on fractal analysis have been applied in neuroimaging studies to quantify the structural complexity of the brain in both healthy and impaired conditions. The usefulness of such measures for characterizing individual differences in brain structure critically depends on their within-subject reproducibility in order to allow the robust detection of between-subject differences. This study analyzes key analytic parameters of three fractal-based methods that rely on the box-counting algorithm with the aim to maximize within-subject reproducibility of the fractal characterizations of different brain objects, including the pial surface, the cortical ribbon volume, the white matter volume and the grey matter/white matter boundary. Two separate datasets originating from different imaging centers were analyzed, comprising, 50 subjects with three and 24 subjects with four successive scanning sessions per subject, respectively. The reproducibility of fractal measures was statistically assessed by computing their intra-class correlations. Results reveal differences between different fractal estimators and allow the identification of several parameters that are critical for high reproducibility. Highest reproducibility with intra-class correlations in the range of 0.9–0.95 is achieved with the correlation dimension. Further analyses of the fractal dimensions of parcellated cortical and subcortical gray matter regions suggest robustly estimated and region-specific patterns of individual variability. 
These results are valuable for defining appropriate parameter configurations when studying changes in fractal descriptors of human brain structure, for instance in studies of neurological diseases that do not allow repeated measurements or for disease-course longitudinal studies. PMID:23831414
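The box-counting algorithm underlying these fractal estimators can be sketched as follows: cover the object with boxes of decreasing size and regress log box count on log inverse box size. On a filled square the estimated dimension recovers the topological value of 2; the image and box sizes here are illustrative.

```python
import numpy as np

def box_count(img, size):
    """Number of size-by-size boxes containing at least one foreground pixel."""
    n = img.shape[0] // size
    blocks = img[:n * size, :n * size].reshape(n, size, n, size)
    return int((blocks.sum(axis=(1, 3)) > 0).sum())

def box_counting_dimension(img, sizes=(2, 4, 8, 16, 32)):
    """Slope of log N(s) against log(1/s) across box sizes s."""
    counts = [box_count(img, s) for s in sizes]
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

img = np.ones((256, 256), dtype=bool)   # a filled square has dimension 2
fd = box_counting_dimension(img)
```

The reproducibility issues the study examines come from choices this sketch leaves fixed: the range of box sizes, grid placement, and how the brain object is binarized.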
Flight Test of Orthogonal Square Wave Inputs for Hybrid-Wing-Body Parameter Estimation
NASA Technical Reports Server (NTRS)
Taylor, Brian R.; Ratnayake, Nalin A.
2011-01-01
As part of an effort to improve emissions, noise, and performance of next generation aircraft, it is expected that future aircraft will use distributed, multi-objective control effectors in a closed-loop flight control system. Correlation challenges associated with parameter estimation will arise with this expected aircraft configuration. The research presented in this paper focuses on addressing the correlation problem with an appropriate input design technique in order to determine individual control surface effectiveness. This technique was validated through flight-testing an 8.5-percent-scale hybrid-wing-body aircraft demonstrator at the NASA Dryden Flight Research Center (Edwards, California). An input design technique that uses mutually orthogonal square wave inputs for de-correlation of control surfaces is proposed. Flight-test results are compared with prior flight-test results for a different maneuver style.
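Mutually orthogonal square waves can be generated from the rows of a Sylvester Hadamard matrix (Walsh functions), which is one standard way to build de-correlated multi-surface inputs; this is a sketch of the idea, not NASA's actual input-design code.

```python
import numpy as np

def hadamard(n):
    """Sylvester construction; n must be a power of two."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def orthogonal_square_waves(n_surfaces, n_samples):
    """One +/-1 square-wave input per control surface, mutually orthogonal."""
    size = 2
    while size < n_surfaces + 1:
        size *= 2
    rows = hadamard(size)[1:n_surfaces + 1]     # drop the constant row
    reps = n_samples // rows.shape[1]
    return np.repeat(rows, reps, axis=1)        # stretch to maneuver length

U = orthogonal_square_waves(3, 64)
gram = U @ U.T                                  # diagonal => de-correlated inputs
```

Because the Gram matrix of the inputs is diagonal, regressors for the individual surfaces are uncorrelated over the maneuver, which is what makes the per-surface effectiveness estimates separable.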
Ensemble-Based Parameter Estimation in a Coupled General Circulation Model
Liu, Y.; Liu, Z.; Zhang, S.; ...
2014-09-10
Parameter estimation provides a potentially powerful approach to reduce model bias for complex climate models. Here, in a twin experiment framework, the authors perform the first parameter estimation in a fully coupled ocean–atmosphere general circulation model using an ensemble coupled data assimilation system facilitated with parameter estimation. The authors first perform single-parameter estimation and then multiple-parameter estimation. In the case of the single-parameter estimation, the error of the parameter [solar penetration depth (SPD)] is reduced by over 90% after ~40 years of assimilation of the conventional observations of monthly sea surface temperature (SST) and salinity (SSS). The results of multiple-parameter estimation are less reliable than those of single-parameter estimation when only the monthly SST and SSS are assimilated. Assimilating additional observations of atmospheric data of temperature and wind improves the reliability of multiple-parameter estimation. The errors of the parameters are reduced by 90% in ~8 years of assimilation. Finally, the improved parameters also improve the model climatology. With the optimized parameters, the bias of the climatology of SST is reduced by ~90%. Altogether, this study suggests the feasibility of ensemble-based parameter estimation in a fully coupled general circulation model.
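A minimal single-parameter analogue of ensemble-based parameter estimation (a toy scalar model, not the coupled GCM setup; all values are illustrative): the parameter ensemble is updated with a Kalman gain built from the ensemble covariance between the parameter and the forecast observable, so repeated assimilation pulls the ensemble toward the true parameter.

```python
import numpy as np

rng = np.random.default_rng(0)
p_true, obs_sd = 2.5, 0.1
ens = rng.normal(1.0, 0.5, size=100)             # initial parameter ensemble

for step in range(40):
    forcing = 1.0 + 0.5 * np.sin(0.3 * step)     # known, time-varying forcing
    x_ens = ens * forcing                        # ensemble forecast of observable
    y = p_true * forcing + rng.normal(0.0, obs_sd)   # synthetic observation
    cov_px = np.cov(ens, x_ens)[0, 1]            # parameter-observable covariance
    gain = cov_px / (np.var(x_ens, ddof=1) + obs_sd ** 2)
    ens = ens + gain * (y - x_ens)               # ensemble Kalman update

p_hat = float(np.mean(ens))
```

In a real system covariance inflation and localization would be needed to prevent the spread collapse this toy exhibits, but the basic convergence of the parameter ensemble is the same mechanism.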
Classification of microscopy images of Langerhans islets
NASA Astrophysics Data System (ADS)
Švihlík, Jan; Kybic, Jan; Habart, David; Berková, Zuzana; Girman, Peter; Kříž, Jan; Zacharovová, Klára
2014-03-01
Evaluation of images of Langerhans islets is a crucial procedure for planning an islet transplantation, which is a promising diabetes treatment. This paper deals with segmentation of microscopy images of Langerhans islets and evaluation of islet parameters such as area, diameter, or volume (IE). For all the available images, the ground truth and the islet parameters were independently evaluated by four medical experts. We use a pixelwise linear classifier (perceptron algorithm) and SVM (support vector machine) for image segmentation. The volume is estimated based on circle or ellipse fitting to individual islets. The segmentations were compared with the corresponding ground truth. Quantitative islet parameters were also evaluated and compared with parameters given by medical experts. We can conclude that accuracy of the presented fully automatic algorithm is fully comparable with medical experts.
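The circle-fit version of the volume estimate can be sketched directly: take the equivalent diameter from the segmented area, compute the sphere volume, and normalize by the volume of a 150-µm sphere, the usual islet-equivalent (IE) convention. The exact fitting used by the authors (including the ellipse variant) may differ.

```python
import math

def islet_volume_um3(area_um2):
    """Sphere volume from the circle-equivalent diameter of a segmented islet."""
    d = 2.0 * math.sqrt(area_um2 / math.pi)
    return math.pi * d ** 3 / 6.0

def islet_equivalents(areas_um2):
    """Total volume expressed in IE (volume of a 150-um-diameter sphere)."""
    ie_unit = math.pi * 150.0 ** 3 / 6.0
    return sum(islet_volume_um3(a) for a in areas_um2) / ie_unit
```

An islet whose segmented area matches a 150-µm circle contributes exactly 1 IE, which is a convenient sanity check on the conversion.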
Gingras, Bruno; Asselin, Pierre-Yves; McAdams, Stephen
2013-01-01
Although a growing body of research has examined issues related to individuality in music performance, few studies have attempted to quantify markers of individuality that transcend pieces and musical styles. This study aims to identify such meta-markers by discriminating between influences linked to specific pieces or interpretive goals and performer-specific playing styles, using two complementary statistical approaches: linear mixed models (LMMs) to estimate fixed (piece and interpretation) and random (performer) effects, and similarity analyses to compare expressive profiles on a note-by-note basis across pieces and expressive parameters. Twelve professional harpsichordists recorded three pieces representative of the Baroque harpsichord repertoire, including three interpretations of one of these pieces, each emphasizing a different melodic line, on an instrument equipped with a MIDI console. Four expressive parameters were analyzed: articulation, note onset asynchrony, timing, and velocity. LMMs showed that piece-specific influences were much larger for articulation than for other parameters, for which performer-specific effects were predominant, and that piece-specific influences were generally larger than effects associated with interpretive goals. Some performers consistently deviated from the mean values for articulation and velocity across pieces and interpretations, suggesting that global measures of expressivity may in some cases constitute valid markers of artistic individuality. Similarity analyses detected significant associations among the magnitudes of the correlations between the expressive profiles of different performers. These associations were found both when comparing across parameters and within the same piece or interpretation, or on the same parameter and across pieces or interpretations. These findings suggest the existence of expressive meta-strategies that can manifest themselves across pieces, interpretive goals, or expressive devices. 
PMID:24348446
Boucherie, Alexandra; Castex, Dominique; Polet, Caroline; Kacki, Sacha
2017-01-01
Harris lines (HLs) are defined as transverse, mineralized lines associated with temporary growth arrest. In paleopathology, HLs are used to reconstruct health status of past populations. However, their etiology is still obscure. The aim of this article is to test the reliability of HLs as an arrested growth marker by investigating their incidence on human metrical parameters. The study was performed on 69 individuals (28 adults, 41 subadults) from the Dendermonde plague cemetery (Belgium, 16th century). HLs were rated on distal femora and both ends of tibiae. Overall prevalence and age-at-formation of each detected line were calculated. ANOVA analyses were conducted within subadult and adult samples to test if the presence of HLs did impact size and shape parameters of the individuals. At Dendermonde, 52% of the individuals had at least one HL. The age-at-formation was estimated between 5 and 9 years old for the subadults and between 10 and 14 years old for the adults. ANOVA analyses showed that the presence of HLs did not affect the size of the individuals. However, significant differences in shape parameters were highlighted by HL presence. Subadults with HLs displayed slighter shape parameters than the subadults without, whereas the adults with HLs had larger measurements than the adults without. The results suggest that HLs can have a certain impact on shape parameters. The underlying causes can be various, especially for the early formed HLs. However, HLs deposited around puberty are more likely to be physiological lines reflecting hormonal secretions. Am. J. Hum. Biol. 29:e22885, 2017. © 2016 Wiley Periodicals, Inc.
Estimation of the transmission dynamics of African swine fever virus within a swine house.
Nielsen, J P; Larsen, T S; Halasa, T; Christiansen, L E
2017-10-01
The spread of African swine fever virus (ASFV) threatens to reach further parts of Europe. In countries with a large swine production, an outbreak of ASF may result in devastating economic consequences for the swine industry. Simulation models can assist decision makers setting up contingency plans. This creates a need for estimation of parameters. This study presents a new analysis of a previously published study. A full likelihood framework is presented including the impact of model assumptions on the estimated transmission parameters. As animals were only tested every other day, an interpretation was introduced to cover the weighted infectiousness on unobserved days for the individual animals (WIU). Based on our model and the set of assumptions, the within- and between-pen transmission parameters were estimated to βw = 1.05 (95% CI 0.62–1.72) and βb = 0.46 (95% CI 0.17–1.00), respectively, and the WIU = 1.00 (95% CI 0–1). Furthermore, we simulated the spread of ASFV within a pig house using a modified SEIR-model to establish the time from infection of one animal until ASFV is detected in the herd. Based on a chosen detection limit of 2.55%, equivalent to 10 dead pigs out of 360, the disease would be detected 13–19 days after introduction.
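A deterministic two-pen SEIR sketch shows how the estimated within-pen (βw) and between-pen (βb) rates enter the force of infection. The latent and infectious durations, pen sizes, and time step below are assumptions for illustration; the paper's modified SEIR model and its stochastic details are not reproduced.

```python
import numpy as np

def simulate_pens(beta_w=1.05, beta_b=0.46, sigma=0.25, gamma=0.2,
                  n_pens=2, n_per_pen=180.0, days=40, dt=0.05):
    """Deterministic SEIR with within-pen (beta_w) and between-pen (beta_b) spread."""
    S = np.full(n_pens, n_per_pen)
    E = np.zeros(n_pens); I = np.zeros(n_pens); R = np.zeros(n_pens)
    E[0], S[0] = 1.0, n_per_pen - 1.0        # one exposed pig in the first pen
    for _ in range(int(days / dt)):
        I_other = I.sum() - I                # infectious pigs in the other pens
        foi = (beta_w * I / n_per_pen
               + beta_b * I_other / (n_per_pen * (n_pens - 1)))
        new_e, new_i, new_r = foi * S * dt, sigma * E * dt, gamma * I * dt
        S -= new_e
        E += new_e - new_i
        I += new_i - new_r
        R += new_r
    return S, E, I, R

S, E, I, R = simulate_pens()
```

Coupling a detection rule to the simulated removals (e.g., the 2.55% threshold) is then a matter of recording the first time the cumulative removed fraction crosses it.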
Women use voice parameters to assess men's characteristics
Bruckert, Laetitia; Liénard, Jean-Sylvain; Lacroix, André; Kreutzer, Michel; Leboucher, Gérard
2005-01-01
The purpose of this study was: (i) to provide additional evidence regarding the existence of human voice parameters, which could be reliable indicators of a speaker's physical characteristics and (ii) to examine the ability of listeners to judge voice pleasantness and a speaker's characteristics from speech samples. We recorded 26 men enunciating five vowels. Voices were played to 102 female judges who were asked to assess vocal attractiveness and speakers' age, height and weight. Statistical analyses were used to determine: (i) which physical component predicted which vocal component and (ii) which vocal component predicted which judgment. We found that men with low-frequency formants and small formant dispersion tended to be older, taller and tended to have a high level of testosterone. Female listeners were consistent in their pleasantness judgment and in their height, weight and age estimates. Pleasantness judgments were based mainly on intonation. Female listeners were able to correctly estimate age by using formant components. They were able to estimate weight but we could not explain which acoustic parameters they used. However, female listeners were not able to estimate height, possibly because they used intonation incorrectly. Our study confirms that in all mammal species examined thus far, including humans, formant components can provide a relatively accurate indication of a vocalizing individual's characteristics. Human listeners have the necessary information at their disposal; however, they do not necessarily use it. PMID:16519239
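Formant dispersion, one of the acoustic parameters in play here, is commonly defined as the mean spacing between consecutive formant frequencies; a minimal sketch (the exact measure used in the study may differ):

```python
def formant_dispersion(formants_hz):
    """Mean spacing (Hz) between consecutive formant frequencies."""
    f = sorted(formants_hz)
    return (f[-1] - f[0]) / (len(f) - 1)

# formants near 500/1500/2500/3500 Hz correspond to a roughly uniform,
# neutral vocal tract; smaller dispersion suggests a longer vocal tract
d = formant_dispersion([500.0, 1500.0, 2500.0, 3500.0])
```

Lower dispersion values are associated with longer vocal tracts, which is why the trait can carry information about a speaker's size and age.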
Friedrich, Reinhard E; Schmidt, Kirsten; Treszl, András; Kersten, Jan F
2016-01-01
Introduction: Surgical procedures require informed patient consent, which is mandatory prior to any procedure. These requirements apply in particular to elective surgical procedures. The communication with the patient about the procedure has to be comprehensive and based on mutual understanding. Furthermore, the informed consent has to take into account whether a patient is of legal age. As a result of large-scale migration, there are eventually patients planned for medical procedures, whose chronological age can't be assessed reliably by physical inspection alone. Age determination based on assessing wisdom tooth development stages can be used to help determining whether individuals involved in medical procedures are of legal age, i.e., responsible and accountable. At present, the assessment of wisdom tooth developmental stages barely allows a crude estimate of an individual's age. This study explores possibilities for more precise predictions of the age of individuals with emphasis on the legal age threshold of 18 years. Material and Methods: 1,900 dental orthopantomograms (female 938, male 962, age: 15-24 years), taken between the years 2000 and 2013 for diagnosis and treatment of diseases of the jaws, were evaluated. 1,895 orthopantomograms (female 935, male 960) of 1,804 patients (female 872, male 932) met the inclusion criteria. The archives of the Department of Diagnostic Radiology in Dentistry, University Medical Center Hamburg-Eppendorf, and of an oral and maxillofacial office in Rostock, Germany, were used to collect a sufficient number of radiographs. An effort was made to achieve almost equal distribution of age categories in this study group; 'age' was given on a particular day. The radiological criteria of lower third molar investigation were: presence and extension of periodontal space, alveolar bone loss, emergence of tooth, and stage of tooth mineralization (according to Demirjian). Univariate and multivariate general linear models were calculated. 
Using hierarchical multivariate analyses, a formula was derived quantifying the development of the four parameters of the wisdom tooth over time. This model took repeated measurements of the same persons into account and is only applicable when a person is assessed a second time. The second approach investigates a linear regression model in order to predict the age. In a third approach, a classification and regression tree (CART) was developed to derive cut-off values for the four parameters, resulting in a classification with estimates for sensitivity and specificity. Results: No statistically significant differences were found between parameters related to wisdom tooth localization (right or left side). In univariate analyses, being of legal age was associated with consecutive stages of wisdom tooth development, the obliteration of the periodontal space, and tooth emergence, as well as with alveolar bone loss; no association was found with tooth mineralization. Multivariate models without repeated measurements revealed imprecise estimates because of the unknown individual-related variability. The precision of these models is thus not very good, although it improves with advancing age. In the CART analysis, a receiver operating characteristic area under the curve of 78% was achieved; when maximizing both specificity and sensitivity, a Youden's index of 47% was achieved (with 73% specificity and 74% sensitivity). Discussion: This study provides a basis to help determine whether a person is 18 years or older in individuals who are assumed to be between 15 and 24 years old. From repeated measurements, we found a linear effect of age on the four parameters in the individuals. However, this information can't be used for prognosis, because of the large intra-individual variability.
Thus, although the development of the four parameters can be estimated over time, a direct conclusion with regard to age cannot be drawn from the parameters without previous biographic information about a person. While a single parameter is of limited value for assessing the target age of 18 years, combining several findings that can be determined on a standard radiograph may be a more reliable diagnostic tool for estimating the target age in both sexes. However, a high degree of precision cannot be achieved. The reason for the persistent uncertainty lies in the wide chronological range of wisdom tooth development, which stretches from well below to above the 18th year of life. The regression approach thus appears suboptimal. Although the sensitivity and specificity of the CART model are moderately high, this model is still not reliable as a diagnostic tool. Our findings could have an impact, e.g., on elective surgery for young individuals of unknown biography. However, these results cannot replace careful engagement with the patient, in particular a thorough physical examination and careful recording of the history. Further studies on the use of this calculation method in different ethnic groups would be desirable.
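The CART classification above is summarized by sensitivity, specificity, and Youden's index (J = sensitivity + specificity - 1). A minimal sketch of these metrics, using hypothetical confusion-matrix counts chosen only to reproduce the abstract's figures:

```python
def classification_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity, and Youden's J from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)          # true-positive rate
    specificity = tn / (tn + fp)          # true-negative rate
    youden_j = sensitivity + specificity - 1
    return sensitivity, specificity, youden_j

# Hypothetical counts yielding the reported 74% sensitivity, 73% specificity
sens, spec, j = classification_metrics(tp=74, fn=26, tn=73, fp=27)
```

With these counts, J comes out at roughly 0.47, matching the reported Youden's index of 47%.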
Liu, Dungang; Liu, Regina; Xie, Minge
2014-01-01
Meta-analysis has been widely used to synthesize evidence from multiple studies for common hypotheses or parameters of interest. However, it has not yet been fully developed for incorporating heterogeneous studies, which arise often in applications due to different study designs, populations or outcomes. For heterogeneous studies, the parameter of interest may not be estimable for certain studies, and in such cases these studies are typically excluded from conventional meta-analysis. The exclusion of part of the studies can lead to a non-negligible loss of information. This paper introduces a meta-analysis for heterogeneous studies by combining the confidence density functions derived from the summary statistics of individual studies, hence referred to as the CD approach. It includes all the studies in the analysis and makes use of all information, direct as well as indirect. Under a general likelihood inference framework, this new approach is shown to have several desirable properties, including: i) it is asymptotically as efficient as the maximum likelihood approach using individual participant data (IPD) from all studies; ii) unlike the IPD analysis, it suffices to use summary statistics to carry out the CD approach, and individual-level data are not required; and iii) it is robust against misspecification of the working covariance structure of the parameter estimates. Besides its own theoretical significance, the last property also substantially broadens the applicability of the CD approach. All the properties of the CD approach are further confirmed by data simulated from a randomized clinical trials setting as well as by real data on aircraft landing performance. Overall, one obtains a unifying approach for combining summary statistics, subsuming many of the existing meta-analysis methods as special cases. PMID:26190875
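As a point of reference only (this is the standard fixed-effect pooling that the CD approach subsumes, not the CD approach itself), combining summary statistics by inverse-variance weighting can be sketched with hypothetical study estimates:

```python
import math

def inverse_variance_pool(estimates, std_errors):
    """Fixed-effect meta-analysis: pool study estimates with weights 1/SE^2."""
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Three hypothetical studies reporting the same parameter
est, se = inverse_variance_pool([0.42, 0.55, 0.48], [0.10, 0.20, 0.15])
```

The pooled estimate lies between the study estimates and its standard error is smaller than any single study's; the CD approach extends this idea to studies in which the parameter is only indirectly estimable.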
Schädler, Marc René; Warzybok, Anna; Meyer, Bernd T.; Brand, Thomas
2016-01-01
To characterize the individual patient’s hearing impairment as obtained with the matrix sentence recognition test, a simulation Framework for Auditory Discrimination Experiments (FADE) is extended here using the Attenuation and Distortion (A+D) approach by Plomp as a blueprint for setting the individual processing parameters. FADE has been shown to predict the outcome of both speech recognition tests and psychoacoustic experiments based on simulations using an automatic speech recognition system requiring only a few assumptions. It builds on the closed-set matrix sentence recognition test, which is advantageous for testing individual speech recognition in a way comparable across languages. Individual predictions of speech recognition thresholds in stationary and in fluctuating noise were derived using the audiogram and an estimate of the internal level uncertainty for modeling the individual Plomp curves, fitted to the data with the Attenuation (A-) and Distortion (D-) parameters of the Plomp approach. Both the “typical” audiogram shapes from Bisgaard et al., with or without a “typical” level uncertainty, and the individual data were used for individual predictions. As a result, the individualization of the level uncertainty was found to be more important than the exact shape of the individual audiogram for accurately modeling the outcome of the German Matrix test in stationary or fluctuating noise for listeners with hearing impairment. The prediction accuracy of the individualized approach also outperforms the (modified) Speech Intelligibility Index approach, which is based on the individual threshold data only. PMID:27604782
Schick, Robert S; Kraus, Scott D; Rolland, Rosalind M; Knowlton, Amy R; Hamilton, Philip K; Pettis, Heather M; Kenney, Robert D; Clark, James S
2013-01-01
Body condition is an indicator of health, and it plays a key role in many vital processes for mammalian species. While evidence of individual body condition can be obtained, these observations provide just brief glimpses into the health state of the animal. An analytical framework is needed for understanding how health of animals changes over space and time. Through knowledge of individual health we can better understand the status of populations. This is particularly important in endangered species, where the consequences of disruption of critical biological functions can push groups of animals rapidly toward extinction. Here we built a state-space model that provides estimates of movement, health, and survival. We assimilated 30+ years of photographic evidence of body condition and three additional visual health parameters in individual North Atlantic right whales, together with survey data, to infer the true health status as it changes over space and time. We also included the effect of reproductive status and entanglement status on health. At the population level, we estimated differential movement patterns in males and females. At the individual level, we estimated the likely animal locations each month. We estimated the relationship between observed and latent health status. Observations of body condition, skin condition, cyamid infestation on the blowholes, and rake marks all provided measures of the true underlying health. The resulting time series of individual health highlight both normal variations in health status and how anthropogenic stressors can affect the health and, ultimately, the survival of individuals. This modeling approach provides information for monitoring of health in right whales, as well as a framework for integrating observational data at the level of individuals up through the health status of the population. 
This framework can be broadly applied to a variety of systems - terrestrial and marine - where sporadic observations of individuals exist.
Fabian C.C. Uzoh; Martin W. Ritchie
1996-01-01
The equations presented predict crown area for 13 species of trees and shrubs that may be found growing in competition with commercial conifers during early stages of stand development. The equations express crown area as a function of basal area and height. Parameters were estimated for each species individually using weighted nonlinear least squares regression.
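The paper's exact functional form and weighting scheme are not given in this abstract. As an illustration only, a sketch that fits a hypothetical power-form crown-area model CA = b0·(BA·H)^b1 by unweighted least squares in log space:

```python
import math
import random

def fit_power_model(basal_area, height, crown_area):
    """Fit CA = b0 * (BA * H)^b1 by ordinary least squares in log space.
    (A simplification: the paper used weighted *nonlinear* least squares.)"""
    xs = [math.log(ba * h) for ba, h in zip(basal_area, height)]
    ys = [math.log(ca) for ca in crown_area]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b1 = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
          / sum((x - mx) ** 2 for x in xs))
    b0 = math.exp(my - b1 * mx)
    return b0, b1

# Synthetic data from known parameters b0 = 0.5, b1 = 0.8, with log-scale noise
random.seed(0)
ba = [random.uniform(0.01, 0.2) for _ in range(50)]   # basal area, m^2
ht = [random.uniform(2.0, 15.0) for _ in range(50)]   # height, m
ca = [0.5 * (b * h) ** 0.8 * math.exp(random.gauss(0, 0.05))
      for b, h in zip(ba, ht)]
b0_hat, b1_hat = fit_power_model(ba, ht, ca)
```

On this synthetic data the fit recovers the generating parameters closely; a weighted nonlinear fit would instead work on the original scale with observation weights.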
ERIC Educational Resources Information Center
Deboeck, Pascal R.; Boker, Steven M.; Bergeman, C. S.
2008-01-01
Among the many methods available for modeling intraindividual time series, differential equation modeling has several advantages that make it promising for applications to psychological data. One interesting differential equation model is that of the damped linear oscillator (DLO), which can be used to model variables that have a tendency to…
ERIC Educational Resources Information Center
Kalender, Ilker
2012-01-01
catcher is a software program designed to compute the [omega] index, a common statistical index for the identification of collusions (cheating) among examinees taking an educational or psychological test. It requires (a) responses and (b) ability estimations of individuals, and (c) item parameters to make computations and outputs the results of…
Hydrologic Process-oriented Optimization of Electrical Resistivity Tomography
NASA Astrophysics Data System (ADS)
Hinnell, A.; Bechtold, M.; Ferre, T. A.; van der Kruk, J.
2010-12-01
Electrical resistivity tomography (ERT) is commonly used in hydrologic investigations. Advances in joint and coupled hydrogeophysical inversion have enhanced the quantitative use of ERT to construct and condition hydrologic models (i.e., identify hydrologic structure and estimate hydrologic parameters). However, the selection of which electrical resistivity data to collect and use is often determined by a combination of data requirements for geophysical analysis, intuition on the part of the hydrogeophysicist, and logistical constraints of the laboratory or field site. One of the advantages of coupled hydrogeophysical inversion is the direct link between the hydrologic model and the individual geophysical data used to condition the model. That is, there is no requirement to collect geophysical data suitable for independent geophysical inversion. The geophysical measurements collected can be optimized for estimation of hydrologic model parameters rather than to develop a geophysical model. Using a synthetic model of drip irrigation, we evaluate the value of individual resistivity measurements to describe the soil hydraulic properties and then use this information to build a data set optimized for characterizing hydrologic processes. We then compare the information content in the optimized data set with the information content in a data set optimized using a Jacobian sensitivity analysis.
Pattern formation in individual-based systems with time-varying parameters
NASA Astrophysics Data System (ADS)
Ashcroft, Peter; Galla, Tobias
2013-12-01
We study the patterns generated in finite-time sweeps across symmetry-breaking bifurcations in individual-based models. Similar to the well-known Kibble-Zurek scenario of defect formation, large-scale patterns are generated when model parameters are varied slowly, whereas fast sweeps produce a large number of small domains. The symmetry breaking is triggered by intrinsic noise, originating from the discrete dynamics at the microlevel. Based on a linear-noise approximation, we calculate the characteristic length scale of these patterns. We demonstrate the applicability of this approach in a simple model of opinion dynamics, a model in evolutionary game theory with a time-dependent fitness structure, and a model of cell differentiation. Our theoretical estimates are confirmed in simulations. In further numerical work, we observe a similar phenomenon when the symmetry-breaking bifurcation is triggered by population growth.
REVERBERATION AND PHOTOIONIZATION ESTIMATES OF THE BROAD-LINE REGION RADIUS IN LOW-z QUASARS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Negrete, C. Alenka; Dultzin, Deborah; Marziani, Paola
2013-07-01
Black hole mass estimation in quasars, especially at high redshift, involves the use of single-epoch spectra with signal-to-noise ratio and resolution that permit accurate measurement of the width of a broad line assumed to be a reliable virial estimator. Coupled with an estimate of the radius of the broad-line region (BLR), this yields the black hole mass M_BH. The radius of the BLR may be inferred from an extrapolation of the correlation between source luminosity and reverberation-derived r_BLR measures (the so-called Kaspi relation, involving about 60 low-z sources). We are exploring a different method for estimating r_BLR directly from inferred physical conditions in the BLR of each source. We report here on a comparison of r_BLR estimates that come from our method and from reverberation mapping. Our 'photoionization' method employs diagnostic line intensity ratios in the rest-frame range 1400-2000 Å (Al III λ1860/Si III] λ1892, C IV λ1549/Al III λ1860) that enable derivation of the product of density and ionization parameter, with the BLR distance derived from the definition of the ionization parameter. We find good agreement between our estimates of the density, ionization parameter, and r_BLR and those from reverberation mapping. We suggest empirical corrections to improve the agreement between individual photoionization-derived r_BLR values and those obtained from reverberation mapping. The results in this paper can be exploited to estimate M_BH for large samples of high-z quasars using an appropriate virial broadening estimator. We show that the widths of the UV intermediate emission lines are consistent with the width of Hβ, thereby providing a reliable virial broadening estimator that can be measured in large samples of high-z quasars.
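The step from the line-ratio diagnostics to r_BLR uses the standard definition of the ionization parameter, U = Q(H)/(4π r² n_H c), solved for r given the product U·n_H. A sketch with illustrative (hypothetical) input values, not the paper's measurements:

```python
import math

C = 2.998e10             # speed of light, cm/s
LIGHT_DAY = C * 86400.0  # one light-day, cm

def r_blr_cm(q_ion, u_times_nh):
    """BLR radius from the ionization-parameter definition
    U = Q(H) / (4*pi*r^2*n_H*c), solved for r.
    q_ion: ionizing photon rate [1/s]; u_times_nh: product U*n_H [cm^-3]."""
    return math.sqrt(q_ion / (4.0 * math.pi * u_times_nh * C))

# Illustrative (hypothetical) values: Q(H) = 1e54 s^-1, U*n_H = 10^9.6 cm^-3
r_cm = r_blr_cm(1e54, 10 ** 9.6)
r_days = r_cm / LIGHT_DAY   # on the order of ten light-days at these inputs
```

The attraction of the method is visible here: once the line ratios fix U·n_H, a single luminosity-derived Q(H) yields r_BLR per source, without extrapolating the Kaspi relation.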
Unsupervised individual tree crown detection in high-resolution satellite imagery
Skurikhin, Alexei N.; McDowell, Nate G.; Middleton, Richard S.
2016-01-26
Rapidly and accurately detecting individual tree crowns in satellite imagery is a critical need for monitoring and characterizing forest resources. We present a two-stage semiautomated approach for detecting individual tree crowns using high spatial resolution (0.6 m) satellite imagery. First, active contours are used to recognize tree canopy areas in a normalized difference vegetation index image. Given the image areas corresponding to tree canopies, we then identify individual tree crowns as local extrema points in the Laplacian of Gaussian scale-space pyramid. The approach simultaneously detects tree crown centers and estimates tree crown sizes, parameters critical to multiple ecosystem models. As a demonstration, we used a ground validated, 0.6 m resolution QuickBird image of a sparse forest site. The two-stage approach produced a tree count estimate with an accuracy of 78% for a naturally regenerating forest with irregularly spaced trees, a success rate equivalent to or better than existing approaches. In addition, our approach detects tree canopy areas and individual tree crowns in an unsupervised manner and helps identify overlapping crowns. Furthermore, the method also demonstrates significant potential for further improvement.
Big city, small world: density, contact rates, and transmission of dengue across Pakistan.
Kraemer, M U G; Perkins, T A; Cummings, D A T; Zakar, R; Hay, S I; Smith, D L; Reiner, R C
2015-10-06
Macroscopic descriptions of populations commonly assume that encounters between individuals are well mixed; i.e. each individual has an equal chance of coming into contact with any other individual. Relaxing this assumption can be challenging though, due to the difficulty of acquiring detailed knowledge about the non-random nature of encounters. Here, we fitted a mathematical model of dengue virus transmission to spatial time-series data from Pakistan and compared maximum-likelihood estimates of 'mixing parameters' when disaggregating data across an urban-rural gradient. We show that dynamics across this gradient are subject not only to differing transmission intensities but also to differing strengths of nonlinearity due to differences in mixing. Accounting for differences in mobility by incorporating two fine-scale, density-dependent covariate layers eliminates differences in mixing but results in a doubling of the estimated transmission potential of the large urban district of Lahore. We furthermore show that neglecting spatial variation in mixing can lead to substantial underestimates of the level of effort needed to control a pathogen with vaccines or other interventions. We complement this analysis with estimates of the relationships between dengue transmission intensity and other putative environmental drivers thereof. © 2015 The Authors.
Li, Shi; Mukherjee, Bhramar; Taylor, Jeremy M G; Rice, Kenneth M; Wen, Xiaoquan; Rice, John D; Stringham, Heather M; Boehnke, Michael
2014-07-01
With challenges in data harmonization and environmental heterogeneity across various data sources, meta-analysis of gene-environment interaction studies can often involve subtle statistical issues. In this paper, we study the effect of environmental covariate heterogeneity (within and between cohorts) on two approaches for fixed-effect meta-analysis: the standard inverse-variance weighted meta-analysis and a meta-regression approach. Akin to the results in Simmonds and Higgins (), we obtain analytic efficiency results for both methods under certain assumptions. The relative efficiency of the two methods depends on the ratio of within versus between cohort variability of the environmental covariate. We propose to use an adaptively weighted estimator (AWE), between meta-analysis and meta-regression, for the interaction parameter. The AWE retains full efficiency of the joint analysis using individual level data under certain natural assumptions. Lin and Zeng (2010a, b) showed that a multivariate inverse-variance weighted estimator retains full efficiency as joint analysis using individual level data, if the estimates with full covariance matrices for all the common parameters are pooled across all studies. We show consistency of our work with Lin and Zeng (2010a, b). Without sacrificing much efficiency, the AWE uses only univariate summary statistics from each study, and bypasses issues with sharing individual level data or full covariance matrices across studies. We compare the performance of the methods both analytically and numerically. The methods are illustrated through meta-analysis of interaction between Single Nucleotide Polymorphisms in FTO gene and body mass index on high-density lipoprotein cholesterol data from a set of eight studies of type 2 diabetes. © 2014 WILEY PERIODICALS, INC.
Event-scale power law recession analysis: quantifying methodological uncertainty
NASA Astrophysics Data System (ADS)
Dralle, David N.; Karst, Nathaniel J.; Charalampous, Kyriakos; Veenstra, Andrew; Thompson, Sally E.
2017-01-01
The study of single streamflow recession events is receiving increasing attention following the presentation of novel theoretical explanations for the emergence of power law forms of the recession relationship, and drivers of its variability. Individually characterizing streamflow recessions often involves describing the similarities and differences between model parameters fitted to each recession time series. Significant methodological sensitivity has been identified in the fitting and parameterization of models that describe populations of many recessions, but the dependence of estimated model parameters on methodological choices has not been evaluated for event-by-event forms of analysis. Here, we use daily streamflow data from 16 catchments in northern California and southern Oregon to investigate how combinations of commonly used streamflow recession definitions and fitting techniques impact parameter estimates of a widely used power law recession model. Results are relevant to watersheds that are relatively steep, forested, and rain-dominated. The highly seasonal mediterranean climate of northern California and southern Oregon ensures study catchments explore a wide range of recession behaviors and wetness states, ideal for a sensitivity analysis. 
In such catchments, we show the following: (i) methodological decisions, including ones that have received little attention in the literature, can impact parameter value estimates and model goodness of fit; (ii) the central tendencies of event-scale recession parameter probability distributions are largely robust to methodological choices, in the sense that differing methods rank catchments similarly according to the medians of these distributions; (iii) recession parameter distributions are method-dependent, but roughly catchment-independent, such that changing the choices made about a particular method affects a given parameter in similar ways across most catchments; and (iv) the observed correlative relationship between the power-law recession scale parameter and catchment antecedent wetness varies depending on recession definition and fitting choices. Considering study results, we recommend a combination of four key methodological decisions to maximize the quality of fitted recession curves, and to minimize bias in the related populations of fitted recession parameters.
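The power-law recession model referred to above is -dQ/dt = aQ^b. One common combination of methodological choices (daily differencing with the interval midpoint for Q, then log-log least squares; only one of the options whose sensitivity the paper examines) can be sketched on a synthetic recession:

```python
import math

def fit_recession(q):
    """Fit -dQ/dt = a * Q^b to a daily streamflow recession by log-log
    least squares, approximating -dQ/dt by Q[t] - Q[t+1] and evaluating
    Q at the interval midpoint."""
    xs, ys = [], []
    for t in range(len(q) - 1):
        dq = q[t] - q[t + 1]
        if dq > 0:  # keep strictly receding steps only
            xs.append(math.log(0.5 * (q[t] + q[t + 1])))
            ys.append(math.log(dq))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = math.exp(my - b * mx)
    return a, b

# Synthetic recession from the closed-form solution with a = 0.05, b = 1.5
a_true, b_true, q0 = 0.05, 1.5, 10.0
q = [(q0 ** (1 - b_true) + a_true * (b_true - 1) * t) ** (1 / (1 - b_true))
     for t in range(21)]
a_hat, b_hat = fit_recession(q)
```

On noise-free synthetic data the fit recovers a and b closely; the paper's point is that on real events the estimates shift with the differencing, event-definition, and fitting choices.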
Mathematical modeling of diphtheria transmission in Thailand.
Sornbundit, Kan; Triampo, Wannapong; Modchang, Charin
2017-08-01
In this work, a mathematical model for describing diphtheria transmission in Thailand is proposed. Based on the course of diphtheria infection, the population is divided into 8 epidemiological classes, namely, susceptible, symptomatic infectious, asymptomatic infectious, carrier with full natural-acquired immunity, carrier with partial natural-acquired immunity, individual with full vaccine-induced immunity, and individual with partial vaccine-induced immunity. Parameter values in the model were either directly obtained from the literature, estimated from available data, or estimated by means of sensitivity analysis. Numerical solutions show that our model can correctly describe the decreasing trend of diphtheria cases in Thailand during the years 1977-2014. Furthermore, despite Thailand having high DTP vaccine coverage, our model predicts that there will be diphtheria outbreaks after the year 2014 due to waning immunity. Our model also suggests that providing booster doses to some susceptible individuals and those with partial immunity every 10 years is a potential way to inhibit future diphtheria outbreaks. Copyright © 2017 Elsevier Ltd. All rights reserved.
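As an illustration of the compartmental approach (a deliberately reduced S-I-R model with waning immunity, not the authors' 8-class diphtheria model; all rates are hypothetical), forward-Euler integration can be sketched as:

```python
def simulate_sir_waning(beta=0.4, gamma=0.2, omega=0.001, days=3650, dt=0.1):
    """Forward-Euler integration of a minimal S-I-R model with waning
    immunity (R -> S at rate omega). State variables are population
    fractions; rates are per day and purely illustrative."""
    s, i, r = 0.99, 0.01, 0.0
    for _ in range(int(days / dt)):
        new_inf = beta * s * i          # mass-action incidence
        ds = -new_inf + omega * r       # susceptibles replenished by waning
        di = new_inf - gamma * i
        dr = gamma * i - omega * r
        s, i, r = s + ds * dt, i + di * dt, r + dr * dt
    return s, i, r

s, i, r = simulate_sir_waning()
```

The waning term omega is the qualitative mechanism behind the predicted post-2014 resurgence: with omega > 0 the susceptible pool slowly refills, so infection can persist and rebound rather than simply die out after high vaccine coverage.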
Excel-Based Tool for Pharmacokinetically Guided Dose Adjustment of Paclitaxel.
Kraff, Stefanie; Lindauer, Andreas; Joerger, Markus; Salamone, Salvatore J; Jaehde, Ulrich
2015-12-01
Neutropenia is a frequent and severe adverse event in patients receiving paclitaxel chemotherapy. The time above a paclitaxel threshold concentration of 0.05 μmol/L (Tc > 0.05 μmol/L) is a strong predictor for paclitaxel-associated neutropenia and has been proposed as a target pharmacokinetic (PK) parameter for paclitaxel therapeutic drug monitoring and dose adaptation. Up to now, individual Tc > 0.05 μmol/L values have been estimated based on a published PK model of paclitaxel using the software NONMEM. Because many clinicians are not familiar with the use of NONMEM, an Excel-based dosing tool was developed to allow calculation of paclitaxel Tc > 0.05 μmol/L and give clinicians an easy-to-use tool. Population PK parameters of paclitaxel were taken from a published PK model. An Alglib VBA code was implemented in Excel 2007 to compute differential equations for the paclitaxel PK model. Maximum a posteriori Bayesian estimates of the PK parameters were determined with the Excel Solver using individual drug concentrations. Concentrations were simulated for 250 patients receiving 1 cycle of paclitaxel chemotherapy. Predictions of paclitaxel Tc > 0.05 μmol/L as calculated by the Excel tool were compared with NONMEM, whereby maximum a posteriori Bayesian estimates were obtained using the POSTHOC function. There was good concordance and comparable predictive performance between Excel and NONMEM regarding predicted paclitaxel plasma concentrations and Tc > 0.05 μmol/L values. Tc > 0.05 μmol/L had a maximum bias of 3% and a precision error of <12%. The median relative deviation of the estimated Tc > 0.05 μmol/L values between both programs was 1%. The Excel-based tool can estimate the time above a paclitaxel threshold concentration of 0.05 μmol/L with acceptable accuracy and precision. The presented Excel tool allows reliable calculation of paclitaxel Tc > 0.05 μmol/L and thus allows target concentration intervention to improve the benefit-risk ratio of the drug. 
Its ease of use facilitates therapeutic drug monitoring in routine clinical practice.
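The maximum a posteriori (MAP) Bayesian step can be illustrated with a toy one-compartment IV-bolus model, estimating a single clearance parameter by grid search under a lognormal prior. All parameter values here are hypothetical; this is not the published paclitaxel model, which has more compartments and parameters:

```python
import math

def map_clearance(times, concs, dose=100.0, v=50.0,
                  cl_pop=10.0, omega=0.3, sigma=0.1):
    """MAP estimate of clearance CL for a one-compartment IV-bolus model
    C(t) = (dose/v) * exp(-(CL/v) * t), combining a lognormal prior
    CL ~ LN(ln cl_pop, omega^2) with lognormal residual error (sd sigma)
    via a 1-D grid search for the posterior mode."""
    best_cl, best_obj = cl_pop, float("inf")
    for k in range(-300, 301):
        cl = cl_pop * math.exp(k / 100.0 * omega)          # grid in prior-SD units
        obj = (math.log(cl / cl_pop)) ** 2 / omega ** 2    # prior penalty
        for t, c in zip(times, concs):
            pred = (dose / v) * math.exp(-(cl / v) * t)
            obj += (math.log(c / pred)) ** 2 / sigma ** 2  # data misfit
        if obj < best_obj:
            best_cl, best_obj = cl, obj
    return best_cl

# Noise-free sparse samples simulated from a "patient" with true CL = 12 L/h
true_cl = 12.0
times = [1.0, 4.0, 8.0]
concs = [(100.0 / 50.0) * math.exp(-(true_cl / 50.0) * t) for t in times]
cl_hat = map_clearance(times, concs)
```

The estimate sits close to the true value but is pulled slightly toward the population prior; this shrinkage is the essence of the POSTHOC-style empirical Bayes step that both NONMEM and the Excel Solver implementation perform.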
Davidson, Ross S; McKendrick, Iain J; Wood, Joanna C; Marion, Glenn; Greig, Alistair; Stevenson, Karen; Sharp, Michael; Hutchings, Michael R
2012-09-10
A common approach to the application of epidemiological models is to determine a single (point estimate) parameterisation using the information available in the literature. However, in many cases there is considerable uncertainty about parameter values, reflecting both the incomplete nature of current knowledge and natural variation, for example between farms. Furthermore model outcomes may be highly sensitive to different parameter values. Paratuberculosis is an infection for which many of the key parameter values are poorly understood and highly variable, and for such infections there is a need to develop and apply statistical techniques which make maximal use of available data. A technique based on Latin hypercube sampling combined with a novel reweighting method was developed which enables parameter uncertainty and variability to be incorporated into a model-based framework for estimation of prevalence. The method was evaluated by applying it to a simulation of paratuberculosis in dairy herds which combines a continuous time stochastic algorithm with model features such as within herd variability in disease development and shedding, which have not been previously explored in paratuberculosis models. Generated sample parameter combinations were assigned a weight, determined by quantifying the model's resultant ability to reproduce prevalence data. Once these weights are generated the model can be used to evaluate other scenarios such as control options. To illustrate the utility of this approach these reweighted model outputs were used to compare standard test and cull control strategies both individually and in combination with simple husbandry practices that aim to reduce infection rates. The technique developed has been shown to be applicable to a complex model incorporating realistic control options. 
For models where parameters are not well known or subject to significant variability, the reweighting scheme allowed estimated distributions of parameter values to be combined with additional sources of information, such as that available from prevalence distributions, resulting in outputs which implicitly handle variation and uncertainty. This methodology allows for more robust predictions from modelling approaches by allowing for parameter uncertainty and combining different sources of information, and is thus expected to be useful in application to a large number of disease systems.
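The Latin hypercube sampling underlying the reweighting scheme can be sketched as follows (generic stratified sampling; the two parameter names and ranges are hypothetical):

```python
import random

def latin_hypercube(n_samples, bounds, seed=1):
    """Latin hypercube sample: each parameter's range is split into
    n_samples equal strata, with exactly one draw per stratum, and the
    strata are shuffled independently for each parameter."""
    rng = random.Random(seed)
    samples = [[0.0] * len(bounds) for _ in range(n_samples)]
    for j, (lo, hi) in enumerate(bounds):
        strata = list(range(n_samples))
        rng.shuffle(strata)
        for i, s in enumerate(strata):
            u = (s + rng.random()) / n_samples   # uniform point in stratum s
            samples[i][j] = lo + u * (hi - lo)
    return samples

# Two hypothetical paratuberculosis parameters: transmission rate, shedding rate
pts = latin_hypercube(10, [(0.0, 1.0), (0.1, 5.0)])
```

Each of the 10 samples occupies a distinct stratum in every dimension, so even small samples cover the full parameter ranges; the paper then weights each sampled combination by how well the model reproduces observed prevalence.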
Dental Age Estimation Helps Create a New Identity.
De Angelis, Danilo; Gibelli, Daniele; Fabbri, Paolo; Cattaneo, Cristina
2015-09-01
Age estimation involves the reconstruction of age by biological parameters such as skeletal and dental development in minors, or reduction of pulp chamber in adults, to gain indications concerning the chronological age of the person. In most cases, it is needed in forensic scenarios to verify if the supposed age of an individual is correct; in exceptional cases, age estimation is instead required by judicial authorities to create a new identity, usually in persons who do not remember who they are. This article aims at reporting the case of J., who was found in 2005 with signs of amnesia because he did not remember his name and age. After several unsuccessful attempts at identifying him, the judicial authority decided to assign a new identity, which was to be constructed according to the real biological data of the individual. The help of a forensic pathologist and a forensic odontologist was then requested, and age estimation was reached by applying methods for adults based on the physiological reduction of pulp chamber. Dental age estimation yielded a final result of approximately 31 years, which was the new age assigned to the person. This article shows a peculiar application of dental age estimation, which can be used not only to ascertain or deny supposed age, but is sometimes needed to create a new identity.
Aiassa, E; Higgins, J P T; Frampton, G K; Greiner, M; Afonso, A; Amzal, B; Deeks, J; Dorne, J-L; Glanville, J; Lövei, G L; Nienstedt, K; O'connor, A M; Pullin, A S; Rajić, A; Verloo, D
2015-01-01
Food and feed safety risk assessment uses multi-parameter models to evaluate the likelihood of adverse events associated with exposure to hazards in human health, plant health, animal health, animal welfare, and the environment. Systematic review and meta-analysis are established methods for answering questions in health care, and can be implemented to minimize biases in food and feed safety risk assessment. However, no methodological frameworks exist for refining risk assessment multi-parameter models into questions suitable for systematic review, and use of meta-analysis to estimate all parameters required by a risk model may not be always feasible. This paper describes novel approaches for determining question suitability and for prioritizing questions for systematic review in this area. Risk assessment questions that aim to estimate a parameter are likely to be suitable for systematic review. Such questions can be structured by their "key elements" [e.g., for intervention questions, the population(s), intervention(s), comparator(s), and outcome(s)]. Prioritization of questions to be addressed by systematic review relies on the likely impact and related uncertainty of individual parameters in the risk model. This approach to planning and prioritizing systematic review seems to have useful implications for producing evidence-based food and feed safety risk assessment.
Di Lellis, Maddalena A; Seifan, Merav; Troschinski, Sandra; Mazzia, Christophe; Capowiez, Yvan; Triebskorn, Rita; Köhler, Heinz-R
2012-11-01
Ectotherms from sunny and hot environments need to cope with solar radiation. Mediterranean land snails of the superfamily Helicoidea feature a behavioural strategy to escape from solar radiation-induced excessive soil heating by climbing up vertical objects. The height of climbing, as well as other parameters such as shell colouration pattern, shell orientation, shell size, body mass, actual internal and shell surface temperature, and the interactions between those factors, may be expected to modulate proteotoxic effects in snails exposed to solar radiation and, thus, their stress response. Focussing on natural populations of Xeropicta derbentina, we conducted a 'snapshot' field study using the individual Hsp70 level as a proxy for proteotoxic stress. In addition to correlation analyses, an information-theoretic (IT) model selection approach based on Akaike's Information Criterion was applied to evaluate a set of models with respect to their explanatory power and to assess the relevance of each of the above-mentioned parameters for individual stress, by model averaging and parameter estimation. The analysis revealed particular importance of the individuals' shell size, height above ground, shell colouration pattern, and the interaction height × orientation. Our study showed that a distinct set of behavioural traits and intrinsic characters define the Hsp70 level and that environmental factors and individual features strongly interact.
Contact networks and the study of contagion.
Hartigan, P M
1980-09-01
The contact network among individuals in a patient group and in a control group is examined. The probability of knowing another person is modelled with parameters assigned to various factors, such as age, sex or disease, which may influence this probability. Standard likelihood techniques are used to estimate the parameters and to test the significance of the hypotheses, in particular the hypothesis of contagion, generated in the modelling process. The method is illustrated in a study of the Yale student body, in which infectious mononucleosis patients of the opposite sex are shown to know each other significantly more frequently than expected.
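The likelihood machinery described above can be illustrated with a stripped-down sketch (Python; invented pair counts, and a single group factor in place of the full age/sex/disease model): the contagion hypothesis is assessed by a likelihood-ratio test comparing group-specific acquaintance probabilities against a common one.

```python
import math

# Stripped-down version of the likelihood approach: one acquaintance
# probability per group, and a likelihood-ratio test of the contagion
# hypothesis. The pair counts (pairs examined, pairs acquainted) are
# invented for illustration.
pairs = {"patients": (400, 52), "controls": (400, 28)}

def loglik(k, n, p):
    # Binomial log-likelihood, dropping the constant binomial coefficient
    return k * math.log(p) + (n - k) * math.log(1.0 - p)

# Null model: one common acquaintance probability for both groups
n_all = sum(n for n, k in pairs.values())
k_all = sum(k for n, k in pairs.values())
p0 = k_all / n_all
ll0 = sum(loglik(k, n, p0) for n, k in pairs.values())

# Alternative: group-specific probabilities (their MLEs are k/n)
ll1 = sum(loglik(k, n, k / n) for n, k in pairs.values())

# Likelihood-ratio statistic; compare to chi-square with 1 df (3.84 at 0.05)
lr = 2.0 * (ll1 - ll0)
```

With these invented counts the statistic exceeds the 5% critical value, i.e. the group-specific model is preferred.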
Estimating frame bulk and shear moduli of two double porosity layers by ultrasound transmission.
Bai, Ruonan; Tinel, Alain; Alem, Abdellah; Franklin, Hervé; Wang, Huaqing
2016-08-01
The acoustic plane wave transmission by water saturated double porosity media is investigated. Two samples of double porosity media assumed to obey the Berryman and Wang (BW) extension (Berryman and Wang, 1995, 2000) of Biot's theory in the low frequency regime are under consideration: ROBU® (pure binder-free borosilicate glass 3.3 manufactured to form the individual grains) and Tobermorite 11Å (the individual porous cement grains show irregular shapes). The gap between theoretical and experimental data can be minimized by adequately modifying two of the parameters estimated from triaxial tests: the frame bulk and shear moduli. The frequency-dependent imaginary parts that follow necessarily from the minimization are related to the energy losses due to contact relaxation and friction between grains.
NASA Astrophysics Data System (ADS)
Buntoung, Sumaman; Janjai, Serm; Nunez, Manuel; Choosri, Pranomkorn; Pratummasoot, Noppamas; Chiwpreecha, Kulanist
2014-11-01
Factors affecting the ratio of erythemal UV (UVER) to broadband (G) irradiance were investigated in this study. Data from four solar monitoring sites in Thailand, namely Chiang Mai, Ubon Ratchathani, Nakhon Pathom and Songkhla were used to investigate the UVER/G ratio in response to geometric and atmospheric parameters. These comprised solar zenith angle, aerosol load, total ozone column, precipitable water and clearness index. A modeling scheme was developed to isolate and examine the effect of each individual environmental parameter on the ratio. Results showed that all parameters with the exception of solar zenith angle and clearness index influenced the ratios in a linear manner. These results were also used to develop a semi-empirical model for estimating hourly erythemal UV irradiance. Data from 2009 to 2010 were used to construct the ratio model while validation was performed using erythemal UV irradiance at the above four sites in 2011. The validation results showed reasonable agreement with a root mean square difference of 13.5% and mean bias difference of -0.5%, under all sky conditions and 10.9% and -0.3%, respectively, under cloudless conditions.
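The two validation statistics quoted above can be computed as follows; a minimal Python sketch on made-up modelled and measured values (numbers invented, units arbitrary), expressing both as percentages of the mean measured value, as is common in radiation modelling:

```python
import math

# Root mean square difference (RMSD) and mean bias difference (MBD) on
# invented modelled vs measured hourly erythemal UV values, expressed as
# percentages of the mean measured value.
measured = [0.10, 0.15, 0.22, 0.18, 0.12]
modelled = [0.11, 0.14, 0.20, 0.19, 0.12]
n = len(measured)
mean_meas = sum(measured) / n

rmsd = (math.sqrt(sum((m - o) ** 2 for m, o in zip(modelled, measured)) / n)
        / mean_meas * 100.0)
mbd = (sum(m - o for m, o in zip(modelled, measured)) / n
       / mean_meas * 100.0)
```

A near-zero MBD with a larger RMSD, as in the abstract, indicates scatter around the measurements without systematic over- or under-prediction.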
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Y.; Liu, Z.; Zhang, S.
Parameter estimation provides a potentially powerful approach to reduce model bias for complex climate models. Here, in a twin experiment framework, the authors perform the first parameter estimation in a fully coupled ocean–atmosphere general circulation model using an ensemble coupled data assimilation system facilitated with parameter estimation. The authors first perform single-parameter estimation and then multiple-parameter estimation. In the case of the single-parameter estimation, the error of the parameter [solar penetration depth (SPD)] is reduced by over 90% after ~40 years of assimilation of the conventional observations of monthly sea surface temperature (SST) and salinity (SSS). The results of multiple-parameter estimation are less reliable than those of single-parameter estimation when only the monthly SST and SSS are assimilated. Assimilating additional atmospheric observations of temperature and wind improves the reliability of multiple-parameter estimation. The errors of the parameters are reduced by 90% in ~8 years of assimilation. Finally, the improved parameters also improve the model climatology. With the optimized parameters, the bias of the climatology of SST is reduced by ~90%. Altogether, this study suggests the feasibility of ensemble-based parameter estimation in a fully coupled general circulation model.
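A toy version of ensemble-based parameter estimation can be sketched as follows (Python; a scalar forced AR(1) model stands in for the GCM, and every value, including the parameter playing the role of the SPD, is invented): an ensemble carries both state and parameter, and the parameter is updated through its sample covariance with the observed state.

```python
import random

# Toy stand-in for ensemble-based parameter estimation in a coupled model:
# a scalar model x <- a*x + 0.5 replaces the GCM, and a_true plays the
# role of the unknown parameter (e.g. the SPD). All numbers are invented.
random.seed(1)
a_true = 0.8
x_truth = 1.0
n_ens = 50
obs_err = 0.05

ens_a = [random.gauss(0.5, 0.2) for _ in range(n_ens)]   # parameter ensemble
ens_x = [x_truth] * n_ens                                # state ensemble

for t in range(200):
    # Forecast step: truth and every ensemble member integrate one step
    x_truth = a_true * x_truth + 0.5
    ens_x = [a * x + 0.5 for a, x in zip(ens_a, ens_x)]
    y = x_truth + random.gauss(0.0, obs_err)             # noisy observation

    # Analysis step: update the parameter via its covariance with the state
    mx = sum(ens_x) / n_ens
    ma = sum(ens_a) / n_ens
    cov_ax = sum((a - ma) * (x - mx)
                 for a, x in zip(ens_a, ens_x)) / (n_ens - 1)
    var_x = sum((x - mx) ** 2 for x in ens_x) / (n_ens - 1)
    gain_a = cov_ax / (var_x + obs_err ** 2)
    gain_x = var_x / (var_x + obs_err ** 2)
    ens_a = [a + gain_a * (y - x) for a, x in zip(ens_a, ens_x)]
    ens_x = [x + gain_x * (y - x) for x in ens_x]

a_est = sum(ens_a) / n_ens
```

The ensemble mean of the parameter converges toward the true value as observations accumulate, which is the mechanism behind the error reductions reported above.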
NASA Technical Reports Server (NTRS)
Wilson, Edward (Inventor)
2006-01-01
The present invention is a method for identifying unknown parameters in a system having a set of governing equations describing its behavior that cannot be put into regression form with the unknown parameters linearly represented. In this method, the vector of unknown parameters is segmented into a plurality of groups where each individual group of unknown parameters may be isolated linearly by manipulation of said equations. Multiple concurrent and independent recursive least squares identifications, one for each group, are then run, treating the other unknown parameters appearing in each group's regression equation as if they were known perfectly, with their values provided by the recursive least squares estimates from the other groups, thereby enabling the use of fast, compact, efficient linear algorithms to solve problems that would otherwise require nonlinear solution approaches. This invention is presented with application to identification of mass and thruster properties for a thruster-controlled spacecraft.
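The group-wise decomposition can be illustrated with a linear toy problem (Python sketch, not the patented spacecraft application; all values invented): two parameter "groups", here single scalars a and b, are each tracked by an independent recursive least squares filter, and each filter treats the other group's current estimate as if it were known.

```python
import random

# Linear toy illustrating the group-wise decomposition: y = a*u1 + b*u2,
# with a and b placed in separate "groups". Each group runs its own scalar
# recursive least squares (RLS), treating the other group's current
# estimate as known. All values are invented.
random.seed(0)
a_true, b_true = 2.0, -1.0

a_hat, Pa = 0.0, 100.0   # RLS state for group 1: estimate and covariance
b_hat, Pb = 0.0, 100.0   # RLS state for group 2

for _ in range(500):
    u1 = random.uniform(-1.0, 1.0)
    u2 = random.uniform(-1.0, 1.0)
    y = a_true * u1 + b_true * u2 + random.gauss(0.0, 0.01)

    # Group 1 update: regress (y - b_hat*u2) on u1
    k = Pa * u1 / (1.0 + Pa * u1 * u1)
    a_hat += k * ((y - b_hat * u2) - a_hat * u1)
    Pa -= k * u1 * Pa

    # Group 2 update: regress (y - a_hat*u1) on u2
    k = Pb * u2 / (1.0 + Pb * u2 * u2)
    b_hat += k * ((y - a_hat * u1) - b_hat * u2)
    Pb -= k * u2 * Pb
```

Early cross-contamination between the groups decays as both filters converge, so only linear updates are ever required.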
A Model Based Approach to Sample Size Estimation in Recent Onset Type 1 Diabetes
Bundy, Brian; Krischer, Jeffrey P.
2016-01-01
The area under the curve (AUC) of C-peptide following a 2-hour mixed meal tolerance test, from 481 individuals enrolled in 5 prior TrialNet studies of recent onset type 1 diabetes, was modelled from baseline to 12 months after enrollment to produce estimates of its rate of loss and variance. Age at diagnosis and baseline C-peptide were found to be significant predictors, and adjusting for these in an ANCOVA resulted in estimates with lower variance. Using these results as planning parameters for new studies results in a nearly 50% reduction in the target sample size. The modelling also produces an expected C-peptide that can be used in Observed vs. Expected calculations to estimate the presumption of benefit in ongoing trials. PMID:26991448
Popovic-Maneski, Lana; Aleksic, Antonina; Metani, Amine; Bergeron, Vance; Cobeljic, Radoje; Popovic, Dejan B
2018-01-01
Increased muscle tone and exaggerated tendon reflexes characterize most individuals after a spinal cord injury (SCI). We estimated seven parameters from the pendulum test and used them to compare with the modified Ashworth scale of spasticity grades in three populations (retrospective study) to assess their spasticity. Three ASIA B SCI patients who exercised on a stationary FES bicycle formed group F, six ASIA B SCI patients who received only conventional therapy were in group C, and six healthy individuals constituted group H. The parameters from the pendulum test were used to form a single measure, termed the PT score, for each subject. The pendulum test parameters show differences between the F and C groups, but not between the F and H groups; however, statistical significance was limited by the small study size. Results show a small deviation from the mean for all parameters in the F group and substantial deviations from the mean for the parameters in the C group. PT scores show significant differences between the F and C groups and the C and H groups, and no differences between the F and H groups. The correlation between the PT score and Ashworth score was 0.88.
Chen, Qian; Shou, Weiling; Wu, Wei; Guo, Ye; Zhang, Yujuan; Huang, Chunmei; Cui, Wei
2015-04-01
To accurately estimate longitudinal changes in individuals, it is important to take into consideration the biological variability of the measurement. The few studies available on the biological variations of coagulation parameters are mostly outdated. We confirmed the published results using modern, fully automated methods. Furthermore, we added data for additional coagulation parameters. At 8:00 am, 12:00 pm, and 4:00 pm on days 1, 3, and 5, venous blood was collected from 31 healthy volunteers. A total of 16 parameters related to coagulation screening tests as well as the activity of coagulation factors were analyzed; these included prothrombin time, fibrinogen (Fbg), activated partial thromboplastin time, thrombin time, international normalized ratio, prothrombin time activity, activated partial thromboplastin time ratio, fibrin(-ogen) degradation products, as well as the activity of factor II, factor V, factor VII, factor VIII, factor IX, and factor X. All intraindividual coefficients of variation (CVI) values for the parameters of the screening tests (except Fbg) were less than 5%. Conversely, the CVI values for the activity of coagulation factors were all greater than 5%. In addition, we calculated the reference change value to determine whether a significant difference exists between two test results from the same individual.
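The reference change value mentioned above is conventionally computed from the analytical (CVa) and intraindividual (CVi) coefficients of variation; a short Python sketch, with the CV values assumed for illustration rather than taken from the study:

```python
import math

# Reference change value (RCV): the minimum difference between two serial
# results from the same individual that is significant at p < 0.05
# (two-sided): RCV = sqrt(2) * 1.96 * sqrt(CVa^2 + CVi^2). The CV values
# below are assumed for illustration, not taken from the study.
def rcv(cv_analytical, cv_intraindividual, z=1.96):
    return math.sqrt(2.0) * z * math.sqrt(cv_analytical ** 2
                                          + cv_intraindividual ** 2)

# e.g. a factor-activity assay with CVa = 3% and CVi = 8%
change_threshold = rcv(3.0, 8.0)   # in percent
```

With these assumed CVs, two serial results must differ by roughly 24% before the change exceeds combined analytical and biological variation.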
Kendall, W.L.; Nichols, J.D.; Hines, J.E.
1997-01-01
Statistical inference for capture-recapture studies of open animal populations typically relies on the assumption that all emigration from the studied population is permanent. However, there are many instances in which this assumption is unlikely to be met. We define two general models for the process of temporary emigration, completely random and Markovian. We then consider effects of these two types of temporary emigration on Jolly-Seber (Seber 1982) estimators and on estimators arising from the full-likelihood approach of Kendall et al. (1995) to robust design data. Capture-recapture data arising from Pollock's (1982) robust design provide the basis for obtaining unbiased estimates of demographic parameters in the presence of temporary emigration and for estimating the probability of temporary emigration. We present a likelihood-based approach to dealing with temporary emigration that permits estimation under different models of temporary emigration and yields tests for completely random and Markovian emigration. In addition, we use the relationship between capture probability estimates based on closed and open models under completely random temporary emigration to derive three ad hoc estimators for the probability of temporary emigration, two of which should be especially useful in situations where capture probabilities are heterogeneous among individual animals. Ad hoc and full-likelihood estimators are illustrated for small mammal capture-recapture data sets. We believe that these models and estimators will be useful for testing hypotheses about the process of temporary emigration, for estimating demographic parameters in the presence of temporary emigration, and for estimating probabilities of temporary emigration. These latter estimates are frequently of ecological interest as indicators of animal movement and, in some sampling situations, as direct estimates of breeding probabilities and proportions.
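The relationship behind the ad hoc estimators can be written down directly; a one-line sketch (Python, with invented capture-probability estimates, following the standard form of this estimator rather than the paper's exact derivation):

```python
# Ad hoc estimator under completely random temporary emigration: within a
# primary period the closed-model estimate recovers the capture probability
# p itself, while the open-model estimate recovers p*(1 - gamma), where
# gamma is the temporary-emigration probability. Hence
# gamma_hat = 1 - p_open / p_closed. The two estimates below are invented.
p_open = 0.32     # from the open (Jolly-Seber type) model
p_closed = 0.40   # from the closed model within a primary period

gamma_hat = 1.0 - p_open / p_closed
```

Because the ratio cancels the shared capture probability, heterogeneity in p among animals affects both estimates similarly, which is why two of the ad hoc estimators remain useful in that case.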
Forecasting peaks of seasonal influenza epidemics.
Nsoesie, Elaine; Mararthe, Madhav; Brownstein, John
2013-06-21
We present a framework for near real-time forecast of influenza epidemics using a simulation optimization approach. The method combines an individual-based model and a simple root finding optimization method for parameter estimation and forecasting. In this study, retrospective forecasts were generated for seasonal influenza epidemics using web-based estimates of influenza activity from Google Flu Trends for 2004-2005, 2007-2008 and 2012-2013 flu seasons. In some cases, the peak could be forecasted 5-6 weeks ahead. This study adds to existing resources for influenza forecasting and the proposed method can be used in conjunction with other approaches in an ensemble framework.
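The simulation-optimization idea, pairing a mechanistic model with a simple root-finding search over a transmission parameter, can be sketched as follows (Python; a minimal deterministic SIR stands in for the individual-based model, and the target peak week is invented):

```python
# Simulation-optimization in miniature: a deterministic discrete-time SIR
# stands in for the individual-based model, and bisection on the
# transmission parameter beta matches the simulated peak week to an
# observed (here invented) target. All values are assumed.
def peak_week(beta, gamma=0.5, n=100000, i0=10, weeks=52):
    s, i = n - i0, float(i0)
    incidence = []
    for _ in range(weeks):
        new = min(beta * s * i / n, s)   # new infections this week
        s -= new
        i += new - gamma * i
        incidence.append(new)
    return incidence.index(max(incidence))

target = 20          # observed peak week (invented)
lo, hi = 0.6, 2.0    # bracket: slow epidemic vs fast epidemic
for _ in range(40):
    mid = 0.5 * (lo + hi)
    if peak_week(mid) > target:   # peak too late -> raise transmission
        lo = mid
    else:
        hi = mid
beta_hat = 0.5 * (lo + hi)
```

In the paper's setting the "observed" quantity comes from surveillance estimates such as Google Flu Trends, and the simulator is stochastic and individual-based, but the fitting loop has the same shape.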
Nelson, Chase W; Moncla, Louise H; Hughes, Austin L
2015-11-15
New applications of next-generation sequencing technologies use pools of DNA from multiple individuals to estimate population genetic parameters. However, no publicly available tools exist to analyse single-nucleotide polymorphism (SNP) calling results directly for evolutionary parameters important in detecting natural selection, including nucleotide diversity and gene diversity. We have developed SNPGenie to fill this gap. The user submits a FASTA reference sequence(s), a Gene Transfer Format (.GTF) file with CDS information and a SNP report(s) in an increasing selection of formats. The program estimates nucleotide diversity, distance from the reference and gene diversity. Sites are flagged for multiple overlapping reading frames, and are categorized by polymorphism type: nonsynonymous, synonymous, or ambiguous. The results allow single nucleotide, single codon, sliding window, whole gene and whole genome/population analyses that aid in the detection of positive and purifying natural selection in the source population. SNPGenie version 1.2 is a Perl program with no additional dependencies. It is free, open-source, and available for download at https://github.com/hugheslab/snpgenie. Contact: nelsoncw@email.sc.edu or austin@biol.sc.edu. Supplementary data are available at Bioinformatics online.
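The central quantity here, nucleotide diversity from pooled allele counts, can be sketched in a few lines (Python; the counts are invented, and the standard unbiased per-site estimator is used, not necessarily SNPGenie's exact implementation):

```python
# Per-site nucleotide diversity from pooled allele counts, averaged over
# sites: pi_site = n/(n-1) * (1 - sum_i p_i^2). This is the standard
# unbiased estimator, sketched here on invented counts (not necessarily
# SNPGenie's exact implementation).
sites = [
    {"A": 18, "G": 2},    # a low-frequency SNP
    {"C": 10, "T": 10},   # a balanced SNP
    {"A": 20},            # an invariant site
]

def site_diversity(counts):
    n = sum(counts.values())
    het = 1.0 - sum((c / n) ** 2 for c in counts.values())
    return n / (n - 1) * het

pi = sum(site_diversity(c) for c in sites) / len(sites)
```

Invariant sites contribute zero, balanced SNPs contribute the maximum for two alleles, and the average over all analysed sites gives the per-region diversity.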
Results From F-18B Stability and Control Parameter Estimation Flight Tests at High Dynamic Pressures
NASA Technical Reports Server (NTRS)
Moes, Timothy R.; Noffz, Gregory K.; Iliff, Kenneth W.
2000-01-01
A maximum-likelihood output-error parameter estimation technique has been used to obtain stability and control derivatives for the NASA F-18B Systems Research Aircraft. This work has been performed to support flight testing of the active aeroelastic wing (AAW) F-18A project. The goal of this research is to obtain baseline F-18 stability and control derivatives that will form the foundation of the aerodynamic model for the AAW aircraft configuration. Flight data have been obtained at Mach numbers between 0.85 and 1.30 and at dynamic pressures ranging between 600 and 1500 lbf/sq ft. At each test condition, longitudinal and lateral-directional doublets have been performed using an automated onboard excitation system. The doublet maneuver consists of a series of single-surface inputs so that individual control-surface motions cannot be correlated with other control-surface motions. Flight test results have shown that several stability and control derivatives are significantly different than prescribed by the F-18B aerodynamic model. This report defines the parameter estimation technique used, presents stability and control derivative results, compares the results with predictions based on the current F-18B aerodynamic model, and shows improvements to the nonlinear simulation using updated derivatives from this research.
Capturing Context-Related Change in Emotional Dynamics via Fixed Moderated Time Series Analysis.
Adolf, Janne K; Voelkle, Manuel C; Brose, Annette; Schmiedek, Florian
2017-01-01
Much of recent affect research relies on intensive longitudinal studies to assess daily emotional experiences. The resulting data are analyzed with dynamic models to capture regulatory processes involved in emotional functioning. Daily contexts, however, are commonly ignored. This may not only result in biased parameter estimates and wrong conclusions, but also ignores the opportunity to investigate contextual effects on emotional dynamics. With fixed moderated time series analysis, we present an approach that resolves this problem by estimating context-dependent change in dynamic parameters in single-subject time series models. The approach examines parameter changes of known shape and thus addresses the problem of observed intra-individual heterogeneity (e.g., changes in emotional dynamics due to observed changes in daily stress). In comparison to existing approaches to unobserved heterogeneity, model estimation is facilitated and different forms of change can readily be accommodated. We demonstrate the approach's viability given relatively short time series by means of a simulation study. In addition, we present an empirical application, targeting the joint dynamics of affect and stress and how these co-vary with daily events. We discuss potentials and limitations of the approach and close with an outlook on the broader implications for understanding emotional adaption and development.
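The core of the approach, dynamic parameters that change with an observed context variable, can be illustrated with a simulated single-subject series (Python; all values invented): the AR(1) parameter shifts on high-stress days, and both the baseline parameter and its context-dependent change are recovered by least squares with an interaction regressor.

```python
import random

# Single-subject series whose AR(1) parameter changes with an observed
# daily context (high-stress day m_t = 1): phi_t = a0 + a1*m_t. Both the
# baseline a0 and the context effect a1 are recovered by least squares
# with an interaction regressor. All values are invented.
random.seed(3)
a0, a1, T = 0.3, 0.4, 300
m = [1 if random.random() < 0.5 else 0 for _ in range(T)]
y = [0.0]
for t in range(1, T):
    phi = a0 + a1 * m[t]
    y.append(phi * y[t - 1] + random.gauss(0.0, 1.0))

# Regress y_t on y_{t-1} and m_t*y_{t-1} (2x2 normal equations)
x1 = [y[t - 1] for t in range(1, T)]
x2 = [m[t] * y[t - 1] for t in range(1, T)]
yy = y[1:]
s11 = sum(a * a for a in x1)
s12 = sum(a * b for a, b in zip(x1, x2))
s22 = sum(b * b for b in x2)
b1 = sum(a * c for a, c in zip(x1, yy))
b2 = sum(b * c for b, c in zip(x2, yy))
det = s11 * s22 - s12 * s12
a0_hat = (s22 * b1 - s12 * b2) / det
a1_hat = (s11 * b2 - s12 * b1) / det
```

The moderation term is recovered because the context variable is observed, which is exactly the "known shape of change" assumption the approach exploits.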
Stoeger, Angela S.; Zeppelzauer, Matthias; Baotic, Anton
2015-01-01
Animal vocal signals are increasingly used to monitor wildlife populations and to obtain estimates of species occurrence and abundance. In the future, acoustic monitoring should function not only to detect animals, but also to extract detailed information about populations by discriminating sexes, age groups, social or kin groups, and potentially individuals. Here we show that it is possible to estimate age groups of African elephants (Loxodonta africana) based on acoustic parameters extracted from rumbles recorded under field conditions in a National Park in South Africa. Statistical models reached up to 70% correct classification to four age groups (infants, calves, juveniles, adults) and 95% correct classification when categorising into two groups (infants/calves lumped into one group versus adults). The models revealed that parameters representing absolute frequency values have the most discriminative power. Comparable classification results were obtained by fully automated classification of rumbles by high-dimensional features that represent the entire spectral envelope, such as MFCC (75% correct classification) and GFCC (74% correct classification). The reported results and methods provide the scientific foundation for a future system that could potentially automatically estimate the demography of an acoustically monitored elephant group or population. PMID:25821348
Giese, Sven H; Zickmann, Franziska; Renard, Bernhard Y
2014-01-01
Accurate estimation, comparison and evaluation of read mapping error rates is a crucial step in the processing of next-generation sequencing data, as further analysis steps and interpretation assume the correctness of the mapping results. Current approaches are either focused on sensitivity estimation and thereby disregard specificity or are based on read simulations. Although continuously improving, read simulations are still prone to introduce a bias into the mapping error quantitation and cannot capture all characteristics of an individual dataset. We introduce ARDEN (artificial reference driven estimation of false positives in next-generation sequencing data), a novel benchmark method that estimates error rates of read mappers based on real experimental reads, using an additionally generated artificial reference genome. It allows a dataset-specific computation of error rates and the construction of a receiver operating characteristic curve. Thereby, it can be used for optimization of parameters for read mappers, selection of read mappers for a specific problem or for filtering alignments based on quality estimation. The use of ARDEN is demonstrated in a general read mapper comparison, a parameter optimization for one read mapper and an application example in single-nucleotide polymorphism discovery with a significant reduction in the number of false positive identifications. The ARDEN source code is freely available at http://sourceforge.net/projects/arden/.
Harmsen, Bart J; Foster, Rebecca J; Sanchez, Emma; Gutierrez-González, Carmina E; Silver, Scott C; Ostro, Linde E T; Kelly, Marcella J; Kay, Elma; Quigley, Howard
2017-01-01
In this study, we estimate life history parameters and abundance for a protected jaguar population using camera-trap data from a 14-year monitoring program (2002-2015) in Belize, Central America. We investigated the dynamics of this jaguar population using 3,075 detection events of 105 individual adult jaguars. Using robust design open population models, we estimated apparent survival and temporary emigration and investigated individual heterogeneity in detection rates across years. Survival probability was high and constant among the years for both sexes (φ = 0.78), and the maximum (conservative) age recorded was 14 years. Temporary emigration rate for the population was random, but constant through time at 0.20 per year. Detection probability varied between sexes, and among years and individuals. Heterogeneity in detection took the form of a dichotomy for males: those with consistently high detection rates, and those with low, sporadic detection rates, suggesting a relatively stable population of 'residents' consistently present and a fluctuating layer of 'transients'. Female detection was always low and sporadic. On average, twice as many males than females were detected per survey, and individual detection rates were significantly higher for males. We attribute sex-based differences in detection to biases resulting from social variation in trail-walking behaviour. The number of individual females detected increased when the survey period was extended from 3 months to a full year. Due to the low detection rates of females and the variable 'transient' male subpopulation, annual abundance estimates based on 3-month surveys had low precision. To estimate survival and monitor population changes in elusive, wide-ranging, low-density species, we recommend repeated surveys over multiple years; and suggest that continuous monitoring over multiple years yields even further insight into population dynamics of elusive predator populations.
A Bayesian state-space formulation of dynamic occupancy models
Royle, J. Andrew; Kery, M.
2007-01-01
Species occurrence and its dynamic components, extinction and colonization probabilities, are focal quantities in biogeography and metapopulation biology, and for species conservation assessments. It has been increasingly appreciated that these parameters must be estimated separately from detection probability to avoid the biases induced by nondetection error. Hence, there is now considerable theoretical and practical interest in dynamic occupancy models that contain explicit representations of metapopulation dynamics such as extinction, colonization, and turnover as well as growth rates. We describe a hierarchical parameterization of these models that is analogous to the state-space formulation of models in time series, where the model is represented by two components, one for the partially observable occupancy process and another for the observations conditional on that process. This parameterization naturally allows estimation of all parameters of the conventional approach to occupancy models, but in addition, yields great flexibility and extensibility, e.g., to modeling heterogeneity or latent structure in model parameters. We also highlight the important distinction between population and finite sample inference; the latter yields much more precise estimates for the particular sample at hand. Finite sample estimates can easily be obtained using the state-space representation of the model but are difficult to obtain under the conventional approach of likelihood-based estimation. We use R and WinBUGS to apply the model to two examples. In a standard analysis for the European Crossbill in a large Swiss monitoring program, we fit a model with year-specific parameters. Estimates of the dynamic parameters varied greatly among years, highlighting the irruptive population dynamics of that species.
In the second example, we analyze route occupancy of Cerulean Warblers in the North American Breeding Bird Survey (BBS) using a model allowing for site-specific heterogeneity in model parameters. The results indicate relatively low turnover and a stable distribution of Cerulean Warblers which is in contrast to analyses of counts of individuals from the same survey that indicate important declines. This discrepancy illustrates the inertia in occupancy relative to actual abundance. Furthermore, the model reveals a declining patch survival probability, and increasing turnover, toward the edge of the range of the species, which is consistent with metapopulation perspectives on the genesis of range edges. Given detection/non-detection data, dynamic occupancy models as described here have considerable potential for the study of distributions and range dynamics.
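The two-layer state-space structure can be made concrete with a small simulation (Python; all parameter values invented): the latent occupancy process evolves with persistence φ and colonization γ, and detections are generated conditional on it, which is exactly the separation the hierarchical parameterization exploits.

```python
import random

# Minimal simulation of the two-level state-space structure (all parameter
# values invented): a latent occupancy process with persistence phi and
# colonization gamma, and a detection layer conditional on it.
random.seed(7)
phi, gamma, p = 0.8, 0.3, 0.6
n_sites, n_years, n_visits = 200, 10, 3

z = [[0] * n_years for _ in range(n_sites)]            # latent occupancy
detections = [[0] * n_years for _ in range(n_sites)]   # observed counts
for i in range(n_sites):
    z[i][0] = 1 if random.random() < 0.5 else 0
    for t in range(1, n_years):
        pr = phi if z[i][t - 1] else gamma             # persist or colonize
        z[i][t] = 1 if random.random() < pr else 0
    for t in range(n_years):
        if z[i][t]:                                    # detect only if occupied
            detections[i][t] = sum(random.random() < p
                                   for _ in range(n_visits))

# Finite-sample occupancy: realized fraction of occupied sites per year
psi_realized = [sum(z[i][t] for i in range(n_sites)) / n_sites
                for t in range(n_years)]
```

The realized fractions are the finite-sample quantities discussed above; they are directly available from the simulated (or, in a Bayesian fit, posterior-sampled) latent states.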
J.C.G. Goelz; Thomas E. Burk; Shepard M. Zedaker
1999-01-01
Cross-sectional area growth and height growth of Fraser fir and red spruce trees growing in Virginia and North Carolina were analyzed to identify possible long-term growth trends. Cross-sectional area growth provided no evidence of growth decline. The individual discs were classified according to parameter estimates of the growth trend equation. The predominant pattern...
Individual tree basal-area growth parameter estimates for four models
J.J. Colbert; Michael Schuckers; Desta Fekedulegn; James Rentch; Mairtin MacSiurtain; Kurt Gottschalk
2004-01-01
Four sigmoid growth models are fit to basal-area data derived from increment cores and disks taken at breast height from oak trees. Models are rated on their ability to fit growth data from five datasets that are obtained from 10 locations along a longitudinal gradient across the states of Delaware, Pennsylvania, West Virginia, and Ohio in the USA. We examine and...
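One of the usual sigmoid candidates, the logistic, can be fit to a basal-area series with even a crude search; a Python sketch on synthetic noise-free data (all numbers invented, and only the rate parameter searched, with the asymptote and inflection age held fixed for brevity):

```python
import math

# Logistic basal-area growth B(t) = K / (1 + exp(-r*(t - t0))). Synthetic,
# noise-free "increment core" data are generated from known values and the
# rate r recovered by a grid search; K and t0 are held fixed for brevity
# (a real fit would estimate all three). All numbers are invented.
K, r_true, t0 = 800.0, 0.08, 60.0
ages = list(range(10, 150, 10))
data = [K / (1.0 + math.exp(-r_true * (t - t0))) for t in ages]

def sse(r):
    # Sum of squared errors of the logistic curve with rate r
    return sum((K / (1.0 + math.exp(-r * (t - t0))) - b) ** 2
               for t, b in zip(ages, data))

r_hat = min((0.01 * i for i in range(1, 30)), key=sse)
```

Rating competing sigmoid forms then amounts to comparing such fit criteria (or information criteria) across models and datasets.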
Voxel-Based 3-D Tree Modeling from Lidar Images for Extracting Tree Structural Information
NASA Astrophysics Data System (ADS)
Hosoi, F.
2014-12-01
Recently, lidar (light detection and ranging) has been used to extract tree structural information. Portable scanning lidar systems can capture the complex shape of individual trees as a 3-D point-cloud image. 3-D tree models reproduced from the lidar-derived 3-D image can be used to estimate tree structural parameters. We have proposed voxel-based 3-D modeling for extracting tree structural parameters. One of the tree parameters derived from the voxel modeling is leaf area density (LAD). We refer to the method as the voxel-based canopy profiling (VCP) method. In this method, several measurement points surrounding the canopy and optimally inclined laser beams are adopted for full laser-beam illumination of the whole canopy, including its interior. From the obtained lidar image, the 3-D information is reproduced as voxel attributes in a 3-D voxel array. Based on the voxel attributes, the contact frequency of laser beams on leaves is computed and the LAD in each horizontal layer is obtained. This method offered accurate LAD estimation for individual trees and woody canopy trees. For more accurate LAD estimation, the voxel model was constructed by combining airborne and portable ground-based lidar data. The profiles obtained by the two types of lidar complemented each other, eliminating blind regions and yielding more accurate LAD profiles than could be obtained by using either type of lidar alone. Based on the estimation results, we proposed an index named the laser beam coverage index, Ω, which relates to the lidar's laser-beam settings and a laser-beam attenuation factor. It was shown that this index can be used for adjusting the measurement set-up of lidar systems and for explaining the LAD estimation error obtained with different types of lidar systems. Moreover, we proposed a method to estimate woody material volume as another application of the voxel tree modeling.
In this method, a voxel solid model of the target tree is produced from the lidar image, composed of consecutive voxels that fill the outer surface and the interior of the stem and large branches. From the model, the woody material volume of any part of the target tree can be calculated directly by counting the number of corresponding voxels and multiplying the result by the per-voxel volume.
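The volume computation itself reduces to voxel counting; a minimal Python sketch, with a tiny artificial boolean grid standing in for the lidar-derived solid model (all sizes invented):

```python
# Woody-volume estimation from a voxel solid model: count filled voxels
# and multiply by the per-voxel volume. An artificial 3-D boolean grid
# stands in for the lidar-derived model; all sizes are invented.
voxel_size = 0.05   # voxel edge length (m)
nz, nx, ny = 40, 20, 20
grid = [[[False] * ny for _ in range(nx)] for _ in range(nz)]

# Fill a 2 x 2 voxel column through all 40 layers as a stand-in "stem"
for iz in range(nz):
    for ix in (9, 10):
        for iy in (9, 10):
            grid[iz][ix][iy] = True

filled = sum(v for layer in grid for row in layer for v in row)
volume = filled * voxel_size ** 3   # 160 voxels x 1.25e-4 m^3 each
```

Restricting the count to a subset of layers gives the volume of any part of the tree, exactly as described above.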
Ecological risk assessment of aerial insectivores of the Clinch River/Poplar Creek system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baron, L.A.; Sample, B.E.
Risks to aerial insectivores (species that consume flying insects; rough-winged swallows, little brown bats, and endangered gray bats) were assessed for the CERCLA remedial investigation of the Clinch River/Poplar Creek system. Adult mayflies and sediment were collected from four locations and analyzed for contaminants. Sediment-to-mayfly contaminant transfer factors were generated from these data and used to estimate contaminant concentrations in mayflies from thirteen additional locations. Contaminants of potential concern (COPCs) were identified by comparing exposure estimates, generated using point estimates of parameter values, to NOAELs. COPCs included mercury, arsenic, and PCBs. Exposure to COPCs was re-estimated using Monte Carlo simulations. Adverse population effects were assumed likely if > 20% of the estimated exposure distribution was greater than the LOAEL. Exposure of swallows to mercury was a significant risk at two locations. Exposure of bats to mercury was a significant risk at only one location. While consideration of movement and foraging territory did not reduce estimated risks to swallows, when exposures for gray and little brown bats were re-estimated, population-level risks from mercury were no longer considered likely. As an endangered species, however, protection is extended to individual gray bats. While less than 20% of the mercury exposure distribution for gray bats was > LOAEL, > 99% of the distribution was > NOAEL. Therefore, adverse effects may occur among maximally exposed individual gray bats. Available data indicate that contaminants in Poplar Creek are likely to present a risk to the swallow population, do not present a risk to the little brown bat population, and may present a risk to individual gray bats.
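The Monte Carlo exceedance screen can be sketched as follows (Python; the dose equation is reduced to two uncertain factors, and every distribution, value, and the LOAEL below are invented, not taken from the assessment):

```python
import random

# Monte Carlo re-estimation of exposure, reduced to two uncertain factors:
# sample a dose distribution and report the fraction exceeding the LOAEL,
# with the 20% criterion used as the population-level flag. Every
# distribution and value here is invented.
random.seed(11)
loael = 1.0          # mg/kg-day (assumed)
n = 10000
exceed = 0
for _ in range(n):
    conc = random.lognormvariate(-0.5, 0.6)   # contaminant conc. in mayflies
    intake = random.uniform(0.2, 0.6)         # food intake per unit body weight
    dose = conc * intake
    exceed += dose > loael

frac_over = exceed / n
population_risk = frac_over > 0.20
```

For an endangered species the same distribution would additionally be compared against the NOAEL, since effects on maximally exposed individuals, not just the population, matter.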
Source positions from VLBI combined solution
NASA Astrophysics Data System (ADS)
Bachmann, S.; Thaller, D.; Engelhardt, G.
2014-12-01
The IVS Combination Center at BKG is primarily responsible for combined Earth Orientation Parameter (EOP) products and the generation of a terrestrial reference frame based on VLBI observations (VTRF). The procedure is based on the combination of normal equations provided by six IVS Analysis Centers (ACs). Since more and more ACs also provide source positions in the normal equations, besides EOPs and station coordinates, an estimation of these parameters is possible and should be investigated. In the past, the International Celestial Reference Frame (ICRF) was not generated as a combined solution from several individual solutions, but was based on a single solution provided by one AC. The presentation will give an overview of the combination strategy and the possibilities for combined source position determination. This includes comparisons with existing catalogs, quality estimation, and possibilities for a rigorous combination of EOP, TRF and CRF in one combination process.
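Combination at the normal-equation level can be shown with a two-parameter toy (Python; the matrices are invented and deliberately consistent with the same solution, which the weighted stack then recovers exactly):

```python
# Combination at the normal-equation level: each analysis centre
# contributes (N_i, b_i) from N_i x = b_i, and the combined solution
# solves (sum w_i N_i) x = (sum w_i b_i). A 2-parameter toy with two
# invented "centres", both consistent with x = [1, 2].
def solve2(n, b):
    det = n[0][0] * n[1][1] - n[0][1] * n[1][0]
    return [(n[1][1] * b[0] - n[0][1] * b[1]) / det,
            (n[0][0] * b[1] - n[1][0] * b[0]) / det]

N1, b1 = [[4.0, 1.0], [1.0, 3.0]], [6.0, 7.0]
N2, b2 = [[2.0, 0.5], [0.5, 5.0]], [3.0, 10.5]
w1, w2 = 1.0, 1.0

Nc = [[w1 * N1[i][j] + w2 * N2[i][j] for j in range(2)] for i in range(2)]
bc = [w1 * b1[i] + w2 * b2[i] for i in range(2)]
x_combined = solve2(Nc, bc)
```

Stacking normal equations rather than averaging solved estimates preserves the full covariance information from each contribution, which is why EOP, TRF and CRF parameters can be combined rigorously in one step.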
Laitner, John; Silverman, Dan
2012-01-01
This paper proposes and analyzes a Social Security reform in which individuals no longer face the OASI payroll tax after, say, age 54 or a career of 34 years, and their subsequent earnings have no bearing on their benefits. We first estimate parameters of a life-cycle model. Our specification includes non-separable preferences and possible disability. It predicts a consumption-expenditure change at retirement. We use the magnitude of the expenditure change, together with households' retirement-age decisions, to identify key structural parameters. The estimated magnitude of the change in consumption-expenditure depends importantly on the treatment of consumption by adult children of the household. Simulations indicate that the reform could increase retirement ages one year or more, equivalent variations could average more than $4,000 per household, and income tax revenues per household could increase by more than $14,000. PMID:23729902
NASA Astrophysics Data System (ADS)
Nora, R.; Field, J. E.; Peterson, J. Luc; Spears, B.; Kruse, M.; Humbird, K.; Gaffney, J.; Springer, P. T.; Brandon, S.; Langer, S.
2017-10-01
We present an experimentally corroborated hydrodynamic extrapolation of several recent BigFoot implosions on the National Ignition Facility. An estimate of the value and error of the hydrodynamic scale necessary for ignition (for each individual BigFoot implosion) is found by hydrodynamically scaling a distribution of multi-dimensional HYDRA simulations whose outputs correspond to their experimental observables. The 11-parameter database of simulations, which includes arbitrary drive asymmetries, dopant fractions, hydrodynamic scaling parameters, and surface perturbations due to surrogate tent and fill-tube engineering features, was computed on the TRINITY supercomputer at Los Alamos National Laboratory. This simple extrapolation is the first step in providing a rigorous calibration of our workflow to provide an accurate estimate of the efficacy of achieving ignition on the National Ignition Facility. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
Ma, Jianzhong; Amos, Christopher I; Warwick Daw, E
2007-09-01
Although extended pedigrees are often sampled through probands with extreme levels of a quantitative trait, Markov chain Monte Carlo (MCMC) methods for segregation and linkage analysis have not been able to perform ascertainment corrections. Further, the extent to which ascertainment of pedigrees leads to biases in the estimation of segregation and linkage parameters has not been previously studied for MCMC procedures. In this paper, we studied these issues with a Bayesian MCMC approach for joint segregation and linkage analysis, as implemented in the package Loki. We first simulated pedigrees ascertained through individuals with extreme values of a quantitative trait in the spirit of the sequential sampling theory of Cannings and Thompson [Cannings and Thompson [1977] Clin. Genet. 12:208-212]. Using our simulated data, we detected no bias in estimates of the trait locus location. However, when the ascertainment threshold was higher than or close to the true value of the highest genotypic mean, bias was found in the estimation of this parameter in addition to the allele frequencies. When there were multiple trait loci, this bias destroyed the additivity of the effects of the trait loci and caused biases in the estimation of all genotypic means when a purely additive model was used for analyzing the data. To account for pedigree ascertainment with sequential sampling, we developed a Bayesian ascertainment approach and implemented Metropolis-Hastings updates in the MCMC samplers used in Loki. Ascertainment correction greatly reduced biases in parameter estimates. Our method is designed for multiple, but a fixed number of, trait loci. Copyright (c) 2007 Wiley-Liss, Inc.
Lobach, Iryna; Mallick, Bani; Carroll, Raymond J
2011-01-01
Case-control studies are widely used to detect gene-environment interactions in the etiology of complex diseases. Many variables that are of interest to biomedical researchers are difficult to measure on an individual level, e.g. nutrient intake, cigarette smoking exposure, long-term toxic exposure. Measurement error causes bias in parameter estimates, thus masking key features of the data and leading to loss of power and spurious/masked associations. We develop a Bayesian methodology for the analysis of case-control studies for the case when measurement error is present in an environmental covariate and the genetic variable has missing data. This approach offers several advantages. It allows prior information to enter the model to make estimation and inference more precise. The environmental covariates measured exactly are modeled completely nonparametrically. Further, information about the probability of disease can be incorporated in the estimation procedure to improve the quality of parameter estimates, which cannot be done in conventional case-control studies. A unique feature of the procedure under investigation is that the analysis is based on a pseudo-likelihood function; therefore, conventional Bayesian techniques may not be technically correct. We propose an approach using Markov chain Monte Carlo sampling as well as a computationally simple method based on an asymptotic posterior distribution. Simulation experiments demonstrated that our method produces parameter estimates that are nearly unbiased even for small sample sizes. An application of our method is illustrated using a population-based case-control study of the association between calcium intake and the risk of colorectal adenoma development.
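A random-walk Metropolis-Hastings update of the kind such samplers rely on can be sketched generically; the standard-normal target below is a hypothetical stand-in for a log pseudo-posterior, not the authors' model:

```python
import math
import random

def metropolis_hastings(log_post, theta0, n_steps=5000, step=0.5, seed=1):
    """Random-walk Metropolis-Hastings sampler for a scalar parameter.
    log_post is the log of the (pseudo-)posterior, up to a constant."""
    rng = random.Random(seed)
    theta, lp = theta0, log_post(theta0)
    samples = []
    for _ in range(n_steps):
        proposal = theta + rng.gauss(0.0, step)
        lp_prop = log_post(proposal)
        # Accept with probability min(1, exp(lp_prop - lp))
        if rng.random() < math.exp(min(0.0, lp_prop - lp)):
            theta, lp = proposal, lp_prop
        samples.append(theta)
    return samples

# Toy target: a standard-normal log-density standing in for the
# pseudo-posterior of a single parameter (purely illustrative).
samples = metropolis_hastings(lambda t: -0.5 * t * t, theta0=3.0)
posterior_mean = sum(samples[1000:]) / len(samples[1000:])
```

The acceptance ratio only ever uses the target up to a constant, which is what makes the scheme usable with a pseudo-likelihood whose normalizer is unknown.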
Modenese, Luca; Montefiori, Erica; Wang, Anqi; Wesarg, Stefan; Viceconti, Marco; Mazzà, Claudia
2018-05-17
The generation of subject-specific musculoskeletal models of the lower limb has become a feasible task thanks to improvements in medical imaging technology and musculoskeletal modelling software. Nevertheless, clinical use of these models in paediatric applications is still limited with regard to the estimation of muscle and joint contact forces. Aiming to improve the current state of the art, a methodology to generate highly personalized subject-specific musculoskeletal models of the lower limb based on magnetic resonance imaging (MRI) scans was codified as a step-by-step procedure and applied to data from eight juvenile individuals. The generated musculoskeletal models were used to simulate 107 gait trials using stereophotogrammetric and force platform data as input. To ensure completeness of the modelling procedure, the muscles' architecture needs to be estimated. Four methods to estimate muscles' maximum isometric force and two methods to estimate musculotendon parameters (optimal fiber length and tendon slack length) were assessed and compared, in order to quantify their influence on the models' output. Reported results represent the first comprehensive subject-specific model-based characterization of juvenile gait biomechanics, including profiles of joint kinematics and kinetics, muscle forces and joint contact forces. Our findings suggest that, when musculotendon parameters were linearly scaled from a reference model and the muscle force-length-velocity relationship was accounted for in the simulations, realistic knee contact forces could be estimated, and these forces were not sensitive to the method used to compute muscle maximum isometric force. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
Fedotov, V D; Maslov, A G; Lobkaeva, E P; Krylov, V N; Obukhova, E O
2012-01-01
A new approach is proposed for the choice of low-frequency magnetic therapy on an individual basis using the results of analysis of heart rhythm variability. The clinical efficiency of low-frequency magnetic therapy incorporated in the combined treatment of 65 patients aged between 25 and 45 years with essential arterial hypertension was estimated. The statistically significant positive effects of the treatment included normalization of blood pressure and characteristics of heart rhythm variability as well as resolution of clinical symptoms of vegetative dysregulation.
NASA Astrophysics Data System (ADS)
Kinoshita, Shunichi; Eder, Wolfgang; Wöger, Julia; Hohenegger, Johann; Briguglio, Antonino
2017-04-01
Investigations of Palaeonummulites venosus using the natural laboratory approach for determining chamber building rate, test diameter increase rate, reproduction time and longevity are based on the decomposition of monthly obtained frequency distributions of chamber number and test diameter into normally distributed components. The shift of the component parameters 'mean' and 'standard deviation' during the investigation period of 15 months was used to calculate Michaelis-Menten functions applied to estimate the averaged chamber building rate and diameter increase rate under natural conditions. The individual dates of birth were estimated using the inverse averaged chamber building rate and the inverse diameter increase rate fitted by the individual chamber number or the individual test diameter at the sampling date. Distributions of frequencies and densities (i.e. frequency divided by sediment weight) based on chamber building rate and diameter increase rate both resulted in continuous reproduction through the year with two peaks, the stronger in May/June determined as the beginning of the summer generation (generation 1) and the weaker in November determined as the beginning of the winter generation (generation 2). This reproduction scheme explains the existence of small and large specimens in the same sample. Longevity, calculated as the maximum difference in days between the individual's birth date and the sampling date, seems to be roughly one year, as obtained by both estimations based on the chamber building rate and the diameter increase rate.
NASA Astrophysics Data System (ADS)
Kinoshita, Shunichi; Eder, Wolfgang; Wöger, Julia; Hohenegger, Johann; Briguglio, Antonino
2017-12-01
We investigated the symbiont-bearing benthic foraminifer Palaeonummulites venosus to determine the chamber building rate (CBR), test diameter increase rate (DIR), reproduction time and longevity using the `natural laboratory' approach. This is based on the decomposition of monthly obtained frequency distributions of chamber number and test diameter into normally distributed components. Test measurements were taken using MicroCT. The shift of the mean and standard deviation of component parameters during the 15-month investigation period was used to calculate Michaelis-Menten functions applied to estimate the averaged CBR and DIR under natural conditions. The individual dates of birth were estimated using the inverse averaged CBR and the inverse DIR fitted by the individual chamber number or the individual test diameter at the sampling date. Distributions of frequencies and densities (i.e., frequency divided by sediment weight) based on both CBR and DIR revealed continuous reproduction throughout the year with two peaks, a stronger one in June determined as the onset of the summer generation (generation 1) and a weaker one in November determined as the onset of the winter generation (generation 2). This reproduction scheme explains the presence of small and large specimens in the same sample. Longevity, calculated as the maximum difference in days between the individual's birth date and the sampling date, is approximately 1.5 yr, an estimation obtained by using both CBR and DIR.
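The rate estimation and birth-date inversion can be illustrated with a Michaelis-Menten fit; the monthly chamber counts below are invented for illustration, and the linearized least-squares fit is a simplification of the authors' decomposition procedure:

```python
import numpy as np

# Hypothetical monthly means of chamber number at mean ages in days
# (values invented for illustration, not the paper's data)
t_days = np.array([30.0, 60.0, 90.0, 150.0, 210.0, 300.0, 450.0])
chambers = np.array([12.0, 20.0, 26.0, 33.0, 38.0, 43.0, 48.0])

# Michaelis-Menten growth y = v_max * t / (k + t), fitted through the
# Lineweaver-Burk linearization 1/y = (k/v_max) * (1/t) + 1/v_max
slope, intercept = np.polyfit(1.0 / t_days, 1.0 / chambers, 1)
v_max = 1.0 / intercept
k = slope * v_max

def age_from_chambers(n):
    """Invert the fitted curve: estimated age in days at chamber count n."""
    return k * n / (v_max - n)
```

Inverting the fitted curve at an individual's observed chamber number, then subtracting that age from the sampling date, is the step that yields the individual birth-date estimates described above.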
Petersen, J.H.; DeAngelis, D.L.; Paukert, C.P.
2008-01-01
Many fish species are at risk to some degree, and conservation efforts are planned or underway to preserve sensitive populations. For many imperiled species, models could serve as useful tools for researchers and managers as they seek to understand individual growth, quantify predator-prey dynamics, and identify critical sources of mortality. Development and application of models for rare species, however, have been constrained by small population sizes, difficulty in obtaining sampling permits, limited opportunities for funding, and regulations on how endangered species can be used in laboratory studies. Bioenergetic and life history models should help with endangered-species recovery planning, since these types of models have been used successfully in the last 25 years to address management problems for many commercially and recreationally important fish species. In this paper we discuss five approaches to developing models and parameters for rare species. Borrowing model functions and parameters from related species is simple, but uncorroborated results can be misleading. Directly estimating parameters with laboratory studies may be possible for rare species that have locally abundant populations. Monte Carlo filtering can be used to estimate several parameters by performing simple laboratory growth experiments to first determine test criteria. Pattern-oriented modeling (POM) is a new and developing field of research that uses field-observed patterns to build, test, and parameterize models. Models developed using the POM approach are closely linked to field data, produce testable hypotheses, and require a close working relationship between modelers and empiricists. Artificial evolution in individual-based models can be used to gain insight into adaptive behaviors for poorly understood species and thus can fill in knowledge gaps. © Copyright by the American Fisheries Society 2008.
Measuring the quality of life in hypertension according to Item Response Theory
Borges, José Wicto Pereira; Moreira, Thereza Maria Magalhães; Schmitt, Jeovani; de Andrade, Dalton Francisco; Barbetta, Pedro Alberto; de Souza, Ana Célia Caetano; Lima, Daniele Braz da Silva; Carvalho, Irialda Saboia
2017-01-01
ABSTRACT OBJECTIVE To analyze the Miniquestionário de Qualidade de Vida em Hipertensão Arterial (MINICHAL – Mini-questionnaire of Quality of Life in Hypertension) using Item Response Theory. METHODS This is an analytical study conducted with 712 persons with hypertension treated in thirteen primary health care units of Fortaleza, State of Ceará, Brazil, in 2015. The steps of the analysis by Item Response Theory were: evaluation of dimensionality, estimation of the parameters of items, and construction of the scale. The study of dimensionality was carried out on the polychoric correlation matrix and by confirmatory factor analysis. To estimate the item parameters, we used the Graded Response Model of Samejima. The analyses were conducted using the free software R with the aid of the psych and mirt packages. RESULTS The analysis allowed the visualization of item parameters and their individual contributions to the measurement of the latent trait, generating more information and allowing the construction of a scale with an interpretative model that demonstrates the evolution of the worsening of quality of life in five levels. Regarding the item parameters, the items related to the somatic state performed well, as they showed better power to discriminate individuals with worse quality of life. The items related to the mental state contributed the least psychometric information to the MINICHAL. CONCLUSIONS We conclude that the instrument is suitable for identifying the worsening of quality of life in hypertension. The analysis of the MINICHAL using Item Response Theory allowed us to identify new sides of this instrument that had not yet been addressed in previous studies. PMID:28492764
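Samejima's Graded Response Model, which underlies the item-parameter estimation, defines category probabilities as differences of adjacent cumulative logistic curves. A minimal sketch with hypothetical item parameters (in practice these are estimated, e.g. by the mirt package):

```python
import math

def grm_category_probs(theta, a, b):
    """Samejima's graded response model: probabilities of each ordered
    response category, given latent trait theta, discrimination a, and
    ordered thresholds b (yielding len(b) + 1 categories)."""
    def p_star(bk):
        # Probability of responding in category k or higher
        return 1.0 / (1.0 + math.exp(-a * (theta - bk)))
    cum = [1.0] + [p_star(bk) for bk in b] + [0.0]
    return [cum[i] - cum[i + 1] for i in range(len(cum) - 1)]

# Hypothetical item: discrimination 1.5, thresholds -1, 0, 1 (4 categories)
probs = grm_category_probs(theta=0.5, a=1.5, b=[-1.0, 0.0, 1.0])
```

The discrimination parameter a is what the abstract refers to when it says the somatic-state items had "better power to discriminate": a steeper curve separates adjacent trait levels more sharply.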
Valsecchi, M G; Silvestri, D; Sasieni, P
1996-12-30
We consider methodological problems in evaluating long-term survival in clinical trials. In particular, we examine the use of several methods that extend the basic Cox regression analysis. In the presence of long-term observation, the proportional hazards (PH) assumption may easily be violated, and a few long-term survivors may have a large effect on parameter estimates. We consider both model selection and robust estimation in a data set of 474 ovarian cancer patients enrolled in a clinical trial and followed for between 7 and 12 years after randomization. Two diagnostic plots for assessing goodness-of-fit are introduced. One shows the variation in time of parameter estimates and is an alternative to PH checking based on time-dependent covariates. The other takes advantage of the martingale residual process in time to represent the lack of fit with a metric of the type 'observed minus expected' number of events. Robust estimation is carried out by maximizing a weighted partial likelihood which downweights the contribution of influential observations to estimation. This type of complementary analysis of the long-term results of clinical studies is useful in assessing the soundness of conclusions on treatment effect. In the example analysed here, the difference in survival between treatments was mostly confined to those individuals who survived at least two years beyond randomization.
State, Parameter, and Unknown Input Estimation Problems in Active Automotive Safety Applications
NASA Astrophysics Data System (ADS)
Phanomchoeng, Gridsada
A variety of driver assistance systems such as traction control, electronic stability control (ESC), rollover prevention and lane departure avoidance systems are being developed by automotive manufacturers to reduce driver burden, partially automate normal driving operations, and reduce accidents. The effectiveness of these driver assistance systems can be significantly enhanced if the real-time values of several vehicle parameters and state variables, namely tire-road friction coefficient, slip angle, roll angle, and rollover index, can be known. Since there are no inexpensive sensors available to measure these variables, it is necessary to estimate them. However, due to the significant nonlinear dynamics in a vehicle, unknown and changing plant parameters, and the presence of unknown input disturbances, the design of estimation algorithms for this application is challenging. This dissertation develops a new approach to observer design for nonlinear systems in which the nonlinearity has a globally (or locally) bounded Jacobian. The developed approach utilizes a modified version of the mean value theorem to express the nonlinearity in the estimation error dynamics as a convex combination of known matrices with time-varying coefficients. The observer gains are then obtained by solving linear matrix inequalities (LMIs). A number of illustrative examples are presented to show that the developed approach is less conservative and more useful than the standard Lipschitz-assumption-based nonlinear observer. The developed nonlinear observer is utilized for estimation of slip angle, longitudinal vehicle velocity, and vehicle roll angle. In order to predict and prevent vehicle rollovers in tripped situations, it is necessary to estimate the vertical tire forces in the presence of unknown road disturbance inputs.
An approach to estimate unknown disturbance inputs in nonlinear systems using dynamic model inversion and a modified version of the mean value theorem is presented. The developed theory is used to estimate vertical tire forces and predict tripped rollovers in situations involving road bumps, potholes, and lateral unknown force inputs. To estimate the tire-road friction coefficient at each individual tire of the vehicle, algorithms to estimate longitudinal forces and slip ratios at each tire are proposed. Subsequently, tire-road friction coefficients are obtained using recursive least squares parameter estimators that exploit the relationship between longitudinal force and slip ratio at each tire. The developed approaches are evaluated through simulations with the industry-standard software CarSim, with experimental tests on a Volvo XC90 sport utility vehicle, and with experimental tests on a 1/8th-scale vehicle. The simulation and experimental results show that the developed approaches can reliably estimate the vehicle parameters and state variables needed for effective ESC and rollover prevention applications.
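A scalar recursive least squares estimator of the force-slip slope can be sketched as follows; the linear force-slip relation, the assumed slope value, and the tuning constants are illustrative assumptions, since the dissertation's full estimator and tire model are not reproduced here:

```python
class RecursiveLeastSquares:
    """Scalar recursive least squares with forgetting factor.

    Estimates the slope of the normalized longitudinal force vs. slip
    ratio relationship; in the low-slip regime this slope is related to
    the available tire-road friction (a strong simplification of real
    tire models)."""

    def __init__(self, theta0=0.0, p0=1e6, forgetting=0.98):
        self.theta = theta0   # current slope estimate
        self.p = p0           # estimate covariance
        self.lam = forgetting

    def update(self, slip_ratio, normalized_force):
        phi = slip_ratio
        gain = self.p * phi / (self.lam + phi * self.p * phi)
        self.theta += gain * (normalized_force - phi * self.theta)
        self.p = (self.p - gain * phi * self.p) / self.lam
        return self.theta

# Noise-free toy data generated from an assumed slope of 10:
# normalized force Fx/Fz = 10 * slip ratio (illustrative only).
rls = RecursiveLeastSquares()
for s in (0.01, 0.02, 0.03, 0.04, 0.05):
    estimate = rls.update(s, 10.0 * s)
```

The forgetting factor below 1 lets the estimate track a friction level that changes as the road surface changes, at the cost of higher variance in steady conditions.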
Maas, Anne H; Rozendaal, Yvonne J W; van Pul, Carola; Hilbers, Peter A J; Cottaar, Ward J; Haak, Harm R; van Riel, Natal A W
2015-03-01
Current diabetes education methods are costly, time-consuming, and do not actively engage the patient. Here, we describe the development and verification of the physiological model for healthy subjects that forms the basis of the Eindhoven Diabetes Education Simulator (E-DES). E-DES shall provide diabetes patients with an individualized virtual practice environment incorporating the main factors that influence glycemic control: food, exercise, and medication. The physiological model consists of 4 compartments for which the inflow and outflow of glucose and insulin are calculated using 6 nonlinear coupled differential equations and 14 parameters. These parameters are estimated on 12 sets of oral glucose tolerance test (OGTT) data (226 healthy subjects) obtained from the literature. The resulting parameter set is verified on 8 separate literature OGTT data sets (229 subjects). The model is considered verified if 95% of the glucose data points lie within an acceptance range of ±20% of the corresponding model value. All glucose data points of the verification data sets lie within the predefined acceptance range. Physiological processes represented in the model include insulin resistance and β-cell function. Adjusting the corresponding parameters makes it possible to describe heterogeneity in the data and shows the capability of this model for individualization. We have verified the physiological model of the E-DES for healthy subjects. Heterogeneity of the data has successfully been modeled by adjusting the 4 parameters describing insulin resistance and β-cell function. Our model will form the basis of a simulator providing individualized education on glucose control. © 2014 Diabetes Technology Society.
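The verification criterion described above (at least 95% of glucose points within ±20% of the corresponding model value) reduces to a simple check; the OGTT-like numbers below are invented for illustration:

```python
def verification_passed(measured, modeled, tolerance=0.20, required=0.95):
    """Acceptance-range check as described for E-DES: pass if at least
    `required` of the measured points lie within +/- `tolerance`
    (relative) of the corresponding model values."""
    within = sum(
        1 for y, yhat in zip(measured, modeled)
        if abs(y - yhat) <= tolerance * abs(yhat)
    )
    return within / len(measured) >= required

# Invented OGTT-like glucose series (mmol/L), purely illustrative
modeled = [5.0, 7.8, 8.9, 7.4, 6.1, 5.3]
measured = [5.2, 7.5, 9.6, 7.0, 6.4, 5.1]
passed = verification_passed(measured, modeled)
```

Note the tolerance band is relative to the model value, so the acceptance range widens at the glucose peak and narrows near baseline.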
Waples, R S
2016-10-01
The relationship between life-history traits and the key eco-evolutionary parameters effective population size (Ne) and Ne/N is revisited for iteroparous species with overlapping generations, with a focus on the annual rate of adult mortality (d). Analytical methods based on populations with arbitrarily long adult lifespans are used to evaluate the influence of d on Ne, Ne/N and the factors that determine these parameters: adult abundance (N), generation length (T), age at maturity (α), the ratio of variance to mean reproductive success in one season by individuals of the same age (φ) and lifetime variance in reproductive success of individuals in a cohort (Vk•). Although the resulting estimators of N, T and Vk• are upwardly biased for species with short adult lifespans, the estimate of Ne/N is largely unbiased because biases in T are compensated for by biases in Vk• and N. For the first time, the contrasting effects of T and Vk• on Ne and Ne/N are jointly considered with respect to d and φ. A simple function of d and α based on the assumption of constant vital rates is shown to be a robust predictor (R² = 0.78) of Ne/N in an empirical data set of life tables for 63 animal and plant species with diverse life histories. Results presented here should provide important context for interpreting the surge of genetically based estimates of Ne that has been fueled by the genomics revolution.
Bastin, C; Soyeurt, H; Gengler, N
2013-04-01
The objective of this study was to estimate genetic parameters of milk, fat, and protein yields, fat and protein contents, somatic cell count, and 17 groups and individual milk fatty acid (FA) contents predicted by mid-infrared spectrometry for first-, second- and third-parity Holstein cows. Edited data included records collected in the Walloon region of Belgium from 37,768 cows in parity 1, 22,566 cows in parity 2, and 8,221 cows in parity 3. A total of 69 (23 traits for three parities) single-trait random regression animal test-day models were run. Approximate genetic correlations among traits were inferred from pairwise regressions among estimated breeding values of cows having observations. Heritability and genetic correlation estimates from this study reflected the origins of FA: de novo synthesized or originating from the diet and body fat mobilization. Averaged daily heritabilities of FA contents in milk ranged between 0.18 and 0.47. Average daily genetic correlations (averaged across days in milk and parities) among groups and individual FA contents in milk ranged between 0.31 and 0.99. The genetic variability of FAs in combination with the moderate to high heritabilities indicated that FA contents in milk could be changed by genetic selection; however, the desirable direction of change in these traits remains unclear and should be defined with respect to all issues of importance related to milk FA. © 2012 Blackwell Verlag GmbH.
Friedrich, Reinhard E.; Schmidt, Kirsten; Treszl, András; Kersten, Jan F.
2016-01-01
Introduction: Surgical procedures require informed patient consent, which is mandatory prior to any procedure. These requirements apply in particular to elective surgical procedures. The communication with the patient about the procedure has to be comprehensive and based on mutual understanding. Furthermore, the informed consent has to take into account whether a patient is of legal age. As a result of large-scale migration, patients may present for medical procedures whose chronological age cannot be assessed reliably by physical inspection alone. Age determination based on assessing wisdom tooth development stages can be used to help determine whether individuals involved in medical procedures are of legal age, i.e., responsible and accountable. At present, the assessment of wisdom tooth developmental stages barely allows a crude estimate of an individual's age. This study explores possibilities for more precise predictions of the age of individuals, with emphasis on the legal age threshold of 18 years. Material and Methods: 1,900 dental orthopantomograms (female 938, male 962, age: 15–24 years), taken between the years 2000 and 2013 for diagnosis and treatment of diseases of the jaws, were evaluated. 1,895 orthopantomograms (female 935, male 960) of 1,804 patients (female 872, male 932) met the inclusion criteria. The archives of the Department of Diagnostic Radiology in Dentistry, University Medical Center Hamburg-Eppendorf, and of an oral and maxillofacial office in Rostock, Germany, were used to collect a sufficient number of radiographs. An effort was made to achieve an almost equal distribution of age categories in this study group; age was specified to the exact day. The radiological criteria of lower third molar investigation were: presence and extension of the periodontal space, alveolar bone loss, emergence of the tooth, and stage of tooth mineralization (according to Demirjian). Univariate and multivariate general linear models were calculated.
Using hierarchical multivariate analyses, a formula was derived quantifying the development of the four wisdom tooth parameters over time. This model took repeated measurements of the same persons into account and is only applicable when a person is assessed a second time. The second approach investigated a linear regression model in order to predict age. In a third approach, a classification and regression tree (CART) was developed to derive cut-off values for the four parameters, resulting in a classification with estimates of sensitivity and specificity. Results: No statistically significant differences were found between parameters related to wisdom tooth localization (right or left side). In univariate analyses, being of legal age was associated with consecutive stages of wisdom tooth development, obliteration of the periodontal space, and tooth emergence, as well as with alveolar bone loss; no association was found with tooth mineralization. Multivariate models without repeated measurements yielded imprecise estimates because of the unknown individual-related variability. The precision of these models is thus not very good, although it improves with advancing age. In the CART analysis, a receiver operating characteristic area under the curve of 78% was achieved; when maximizing both specificity and sensitivity, a Youden's index of 47% was achieved (with 73% specificity and 74% sensitivity). Discussion: This study provides a basis to help determine whether a person is 18 years or older among individuals who are assumed to be between 15 and 24 years old. From repeated measurements, we found a linear effect of age on the four parameters in the individuals. However, this information cannot be used for prognosis, because of the large intra-individual variability.
Thus, although the development of the four parameters can be estimated over time, a direct conclusion with regard to age cannot be drawn from the parameters without previous biographic information about a person. While a single parameter is of limited value for determining whether the target age of 18 years has been reached, combining several findings that can be determined on a standard radiograph may potentially be a more reliable diagnostic tool for estimating the target age in both sexes. However, a high degree of precision cannot be achieved. The reason for the persistent uncertainty lies in the wide chronological range of wisdom tooth development, which stretches from well below to above the 18th year of life. The regression approach thus seems not optimal. Although the sensitivity and specificity of the CART model are moderately high, this model is still not reliable as a diagnostic tool. Our findings could have an impact on, e.g., elective surgeries for young individuals with unknown biographies. However, these results cannot replace social engagement, in particular thorough physical examination of patients and careful registration of their histories. Further studies on the use of this calculation method in different ethnic groups would be desirable. PMID:27975042
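The reported cut-off statistics combine via Youden's J statistic, J = sensitivity + specificity - 1, which reproduces the abstract's figures; the confusion-table counts below are hypothetical, as the study's raw table is not given:

```python
def youden_index(sensitivity, specificity):
    # Youden's J statistic for a binary cut-off
    return sensitivity + specificity - 1.0

def rates_from_confusion(tp, fn, tn, fp):
    """Sensitivity and specificity from a 2x2 confusion table
    (counts here are hypothetical, for illustration only)."""
    return tp / (tp + fn), tn / (tn + fp)

# Values reported in the abstract: 74% sensitivity, 73% specificity
j = youden_index(0.74, 0.73)
```

J ranges from 0 (no better than chance) to 1 (perfect separation), so the reported 47% quantifies how far this CART cut-off falls short of a reliable diagnostic rule.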
Antunes, Danielle M F; Kalmbach, Keri H; Wang, Fang; Dracxler, Roberta C; Seth-Smith, Michelle L; Kramer, Yael; Buldo-Licciardi, Julia; Kohlrausch, Fabiana B; Keefe, David L
2015-11-01
The effect of age on telomere length heterogeneity in men has not been studied previously. Our aims were to determine the relationship between variation in sperm telomere length (STL), men's age, and semen parameters in spermatozoa from men undergoing in vitro fertilization (IVF) treatment. To perform this prospective cross-sectional pilot study, telomere length was estimated in 200 individual spermatozoa from men undergoing IVF treatment at the NYU Fertility Center. A novel single-cell telomere content assay (SCT-pqPCR) measured telomere length in individual spermatozoa. Telomere length among individual spermatozoa within an ejaculate varies markedly and increases with age. Older men not only have longer STL but also have more variable STL compared to younger men. STL from samples with normal semen parameters was significantly longer than that from samples with abnormal parameters, but STL did not differ between spermatozoa with normal versus abnormal morphology. The marked increase in STL heterogeneity as men age is consistent with a role for alternative lengthening of telomeres (ALT) during spermatogenesis. No data on the effect of age on STL heterogeneity have previously been reported. Based on these results, future studies should expand this modest sample size to search for molecular evidence of ALT in human testes during spermatogenesis.
The Fusion of Membranes and Vesicles: Pathway and Energy Barriers from Dissipative Particle Dynamics
Grafmüller, Andrea; Shillcock, Julian; Lipowsky, Reinhard
2009-01-01
The fusion of lipid bilayers is studied with dissipative particle dynamics simulations. First, to achieve control over membrane properties, the effects of individual simulation parameters are studied and optimized. Then, a large number of fusion events for a vesicle and a planar bilayer are simulated using the optimized parameter set. In the observed fusion pathway, configurations of individual lipids play an important role. Fusion starts with individual lipids assuming a splayed tail configuration with one tail inserted in each membrane. To determine the corresponding energy barrier, we measure the average work for interbilayer flips of a lipid tail, i.e., the average work to displace one lipid tail from one bilayer to the other. This energy barrier is found to depend strongly on a certain dissipative particle dynamics parameter, and, thus, can be adjusted in the simulations. Overall, three subprocesses have been identified in the fusion pathway. Their energy barriers are estimated to lie in the range 8–15 kBT. The fusion probability is found to possess a maximum at intermediate tension values. As one decreases the tension, the fusion probability seems to vanish before the tensionless membrane state is attained. This would imply that the tension has to exceed a certain threshold value to induce fusion. PMID:19348749
Kollmeier, Birger; Schädler, Marc René; Warzybok, Anna; Meyer, Bernd T; Brand, Thomas
2016-09-07
To characterize the individual patient's hearing impairment as obtained with the matrix sentence recognition test, a simulation Framework for Auditory Discrimination Experiments (FADE) is extended here using the Attenuation and Distortion (A+D) approach by Plomp as a blueprint for setting the individual processing parameters. FADE has been shown to predict the outcome of both speech recognition tests and psychoacoustic experiments based on simulations using an automatic speech recognition system, requiring only a few assumptions. It builds on the closed-set matrix sentence recognition test, which is advantageous for testing individual speech recognition in a way comparable across languages. Individual predictions of speech recognition thresholds in stationary and in fluctuating noise were derived using the audiogram and an estimate of the internal level uncertainty, which together model the individual Plomp curves fitted to the data with the Attenuation (A-) and Distortion (D-) parameters of the Plomp approach. The "typical" audiogram shapes from Bisgaard et al., with or without a "typical" level uncertainty, and the individual data were used for individual predictions. As a result, the individualization of the level uncertainty was found to be more important than the exact shape of the individual audiogram for accurately modeling the outcome of the German Matrix test in stationary or fluctuating noise for listeners with hearing impairment. The prediction accuracy of the individualized approach also outperforms the (modified) Speech Intelligibility Index approach, which is based on the individual threshold data only. © The Author(s) 2016.
Mountain, James E.; Santer, Peter; O’Neill, David P.; Smith, Nicholas M. J.; Ciaffoni, Luca; Couper, John H.; Ritchie, Grant A. D.; Hancock, Gus; Whiteley, Jonathan P.
2018-01-01
Inhomogeneity in the lung impairs gas exchange and can be an early marker of lung disease. We hypothesized that highly precise measurements of gas exchange contain sufficient information to quantify many aspects of the inhomogeneity noninvasively. Our aim was to explore whether one parameterization of lung inhomogeneity could both fit such data and provide reliable parameter estimates. A mathematical model of gas exchange in an inhomogeneous lung was developed, containing inhomogeneity parameters for compliance, vascular conductance, and dead space, all relative to lung volume. Inputs were respiratory flow, cardiac output, and the inspiratory and pulmonary arterial gas compositions. Outputs were expiratory and pulmonary venous gas compositions. All values were specified every 10 ms. Some parameters were set to physiologically plausible values. To estimate the remaining unknown parameters and inputs, the model was embedded within a nonlinear estimation routine to minimize the deviations between model and data for CO2, O2, and N2 flows during expiration. Three groups, each of six individuals, were studied: young (20–30 yr); old (70–80 yr); and patients with mild to moderate chronic obstructive pulmonary disease (COPD). Each participant undertook a 15-min measurement protocol six times. For all parameters reflecting inhomogeneity, highly significant differences were found between the three participant groups (P < 0.001, ANOVA). Intraclass correlation coefficients were 0.96, 0.99, and 0.94 for the parameters reflecting inhomogeneity in deadspace, compliance, and vascular conductance, respectively. We conclude that, for the particular participants selected, highly repeatable estimates for parameters reflecting inhomogeneity could be obtained from noninvasive measurements of respiratory gas exchange. NEW & NOTEWORTHY This study describes a new method, based on highly precise measures of gas exchange, that quantifies three distributions that are intrinsic to the lung. 
These distributions represent three fundamentally different types of inhomogeneity that together give rise to ventilation-perfusion mismatch and result in impaired gas exchange. The measurement technique has potentially broad clinical applicability because it is simple for both patient and operator, it does not involve ionizing radiation, and it is completely noninvasive. PMID:29074714
NASA Astrophysics Data System (ADS)
Bernard, C.; Keesee, B.; Philippoff, C.; Curran, S.; Lotz, J.; Powell, E.
2016-02-01
Investigators, including three REU interns, conducted an experiment to quantify parameters for an epidemiological model designed to estimate disease transmission in marine invertebrates. White spot syndrome virus (WSSV) is a highly pathogenic disease affecting commercially important penaeid shrimp fisheries worldwide. The virus devastates penaeid shrimp, but other varieties of decapods may serve as reservoirs for disease by being less susceptible to WSSV or refractory to disease. Non-penaeid crustaceans are less susceptible to WSSV, and different species have variable resistance to the disease, giving them different potential to serve as reservoirs for transmission of the disease to coastal penaeid fisheries. This study investigates the virulence and transmission rates of WSSV in two palaemonid shrimp species that are keystone members of coastal food webs, and estimates the effects of species interactions on WSSV transmission rates in a laboratory setting as a proxy for natural habitats. Two species of grass shrimp were exposed to a Chinese strain of WSSV by feeding the test individuals previously prepared, inoculated penaeid shrimp. Replicated tanks containing 30 animals were exposed to the virus in arenas containing one or both species for 24 hours; the animals were then isolated in 1-liter tanks and monitored. During the isolation period, moribund individuals were preserved for later analysis. After 7 days, all test individuals were analyzed using qPCR to determine WSSV presence and viral load. From these data, transmission rates, mortality, and viral concentration were quantified and used as parameters in a simple epidemiological model.
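The transmission-model structure described in this abstract can be illustrated with a minimal discrete-time susceptible-infected (SI) simulation for a closed tank population. All parameter values below (transmission coefficient, disease mortality) are hypothetical placeholders, not the study's estimates.

```python
# Minimal discrete-time SI sketch for disease transmission in a closed
# tank population; beta and mu are hypothetical illustration values.

def si_epidemic(n_total=30, i0=1, beta=0.3, mu=0.1, days=7):
    """Track (susceptible, infected, dead) counts over the trial period."""
    s, i, dead = n_total - i0, i0, 0.0
    history = [(s, i, dead)]
    for _ in range(days):
        new_inf = min(s, beta * s * i / n_total)  # frequency-dependent transmission
        new_dead = mu * i                         # disease-induced mortality
        s -= new_inf
        i += new_inf - new_dead
        dead += new_dead
        history.append((s, i, dead))
    return history

hist = si_epidemic()
```

Fitting `beta` and `mu` to the observed qPCR prevalence data would close the loop between the laboratory counts and the epidemiological model.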
Priol, Pauline; Mazerolle, Marc J; Imbeau, Louis; Drapeau, Pierre; Trudeau, Caroline; Ramière, Jessica
2014-06-01
Dynamic N-mixture models have been recently developed to estimate demographic parameters of unmarked individuals while accounting for imperfect detection. We propose an application of the Dail and Madsen (2011: Biometrics, 67, 577-587) dynamic N-mixture model in a manipulative experiment using a before-after control-impact design (BACI). Specifically, we tested the hypothesis of cavity limitation of a cavity specialist species, the northern flying squirrel, using nest box supplementation on half of 56 trapping sites. Our main purpose was to evaluate the impact of an increase in cavity availability on flying squirrel population dynamics in deciduous stands in northwestern Québec with the dynamic N-mixture model. We compared abundance estimates from this recent approach with those from classic capture-mark-recapture models and generalized linear models. We compared apparent survival estimates with those from Cormack-Jolly-Seber (CJS) models. Average recruitment rate was 6 individuals per site after 4 years. Nevertheless, we found no effect of cavity supplementation on apparent survival and recruitment rates of flying squirrels. Contrary to our expectations, initial abundance was not affected by conifer basal area (food availability) and was negatively affected by snag basal area (cavity availability). Northern flying squirrel population dynamics are not influenced by cavity availability at our deciduous sites. Consequently, we suggest that this species should not be considered an indicator of old forest attributes in our study area, especially in view of apparent wide population fluctuations across years. Abundance estimates from N-mixture models were similar to those from capture-mark-recapture models, although the latter had greater precision. Generalized linear mixed models produced lower abundance estimates, but revealed the same relationship between abundance and snag basal area. 
Apparent survival estimates from N-mixture models were higher and less precise than those from CJS models. However, N-mixture models can be particularly useful to evaluate management effects on animal populations, especially for species that are difficult to detect in situations where individuals cannot be uniquely identified. They also allow investigating the effects of covariates at the site level, when low recapture rates would require restricting classic CMR analyses to a subset of sites with the most captures.
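The core of the binomial N-mixture likelihood referenced above can be sketched by marginalizing the latent abundance out of repeated counts at one site (the static model; the dynamic Dail-Madsen model adds survival and recruitment terms). Counts and parameter values here are illustrative.

```python
import math

# Static binomial N-mixture marginal likelihood for one site:
# counts y_t ~ Binomial(N, p) with latent abundance N ~ Poisson(lam).

def pois_pmf(n, lam):
    return math.exp(-lam) * lam ** n / math.factorial(n)

def binom_pmf(y, n, p):
    if y > n:
        return 0.0
    return math.comb(n, y) * p ** y * (1 - p) ** (n - y)

def nmixture_likelihood(counts, lam, p, K=100):
    """Sum the latent abundance N out of the repeated counts (truncated at K)."""
    total = 0.0
    for N in range(max(counts), K + 1):
        total += pois_pmf(N, lam) * math.prod(binom_pmf(y, N, p) for y in counts)
    return total

counts = [3, 2, 4]  # repeated counts at one illustrative site
L_true = nmixture_likelihood(counts, lam=6.0, p=0.5)
```

In practice the likelihood is maximized (or sampled) jointly over sites, with covariates entering `lam` and `p` through link functions.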
Drummond, Alexei J; Nicholls, Geoff K; Rodrigo, Allen G; Solomon, Wiremu
2002-01-01
Molecular sequences obtained at different sampling times from populations of rapidly evolving pathogens and from ancient subfossil and fossil sources are increasingly available with modern sequencing technology. Here, we present a Bayesian statistical inference approach to the joint estimation of mutation rate and population size that incorporates the uncertainty in the genealogy of such temporally spaced sequences by using Markov chain Monte Carlo (MCMC) integration. The Kingman coalescent model is used to describe the time structure of the ancestral tree. We recover information about the unknown true ancestral coalescent tree, population size, and the overall mutation rate from temporally spaced data, that is, from nucleotide sequences gathered at different times, from different individuals, in an evolving haploid population. We briefly discuss the methodological implications and show what can be inferred, in various practically relevant states of prior knowledge. We develop extensions for exponentially growing population size and joint estimation of substitution model parameters. We illustrate some of the important features of this approach on a genealogy of HIV-1 envelope (env) partial sequences. PMID:12136032
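The Kingman coalescent prior used above has a simple generative form: while k lineages remain, the waiting time to the next coalescence is exponential with rate k(k-1)/2 in units of N_e generations. A minimal simulation sketch, with an arbitrary sample size:

```python
import random

# Simulate Kingman coalescent waiting times for a haploid sample;
# time is in units of N_e generations, sample size is arbitrary.

def coalescent_tmrca(n, rng):
    """Sum exponential waiting times as k lineages coalesce down to 1."""
    t = 0.0
    for k in range(n, 1, -1):
        rate = k * (k - 1) / 2.0  # pairwise coalescence rate with k lineages
        t += rng.expovariate(rate)
    return t

rng = random.Random(42)
samples = [coalescent_tmrca(10, rng) for _ in range(20000)]
mean_tmrca = sum(samples) / len(samples)
# theory: E[TMRCA] = 2 * (1 - 1/n), i.e. 1.8 for n = 10
```

In the MCMC inference described above, trees drawn from this prior are reweighted by the sequence likelihood, with serial sampling times constraining the tip ages.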
Modeling Test and Treatment Strategies for Presymptomatic Alzheimer Disease
Burke, James F.; Langa, Kenneth M.; Hayward, Rodney A.; Albin, Roger L.
2014-01-01
Objectives In this study, we developed a model of presymptomatic treatment of Alzheimer disease (AD) after a screening diagnostic evaluation and explored the circumstances required for an AD prevention treatment to produce aggregate net population benefit. Methods Monte Carlo simulation methods were used to estimate outcomes in a simulated population derived from data on AD incidence and mortality. A wide variety of treatment parameters were explored. Net population benefit was estimated in aggregate QALYs. Sensitivity analyses were performed by individually varying the primary parameters. Findings In the base-case scenario, treatment effects were uniformly positive, and net benefits increased with increasing age at screening. A highly efficacious treatment (i.e., relative risk 0.6) modeled in the base-case is estimated to save 20 QALYs per 1000 patients screened and 221 QALYs per 1000 patients treated. Conclusions Highly efficacious presymptomatic screen-and-treat strategies for AD are likely to produce substantial aggregate population benefits, likely greater than the benefits of aspirin in primary prevention of moderate-risk cardiovascular disease (28 QALYs per 1000 patients treated), even in the context of an imperfect treatment delivery environment. PMID:25474698
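The Monte Carlo structure described in the Methods can be sketched as follows. Only the relative risk of 0.6 is taken from the base-case above; the annual incidence, time horizon, and QALY decrement are hypothetical placeholders, not the paper's inputs.

```python
import random

# Toy Monte Carlo sketch of a presymptomatic screen-and-treat strategy.
# Structure only: annual AD risk, horizon, and QALY loss are assumptions.

def simulate_cohort(n, rr, annual_risk=0.02, years=20, qaly_loss=3.0, seed=1):
    """QALYs saved in a treated cohort of n screened patients."""
    rng = random.Random(seed)
    qalys_saved = 0.0
    for _ in range(n):
        for _year in range(years):
            if rng.random() < annual_risk:   # would develop AD if untreated
                if rng.random() < 1 - rr:    # incident case averted by treatment
                    qalys_saved += qaly_loss
                break                        # at most one incident case per person
    return qalys_saved

saved = simulate_cohort(n=1000, rr=0.6)
```

The paper's sensitivity analyses correspond to re-running such a simulation while varying one input (risk, efficacy, adverse effects, adherence) at a time.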
Hansen, Clint; Venture, Gentiane; Rezzoug, Nasser; Gorce, Philippe; Isableu, Brice
2014-05-07
Over the last decades a variety of research has been conducted with the goal of improving Body Segment Inertial Parameter (BSIP) estimates, but to our knowledge a real validation has never been completely successful, because no ground truth is available. The aim of this paper is to propose a validation method for a BSIP identification method (IM) by comparing contact forces recalculated through inverse dynamics with those measured by a force plate. Furthermore, the results are compared with the estimation method recently proposed by Dumas et al. (2007). Additionally, the results are cross-validated with a high-velocity overarm throwing movement. Across all conditions, the proposed BSIP identification method (IM) yields higher correlations and smaller error metrics (RMSE), demonstrating its advantage over recently proposed methods such as that of Dumas et al. (2007). The purpose of the paper is to validate an already proposed method and to show that it can be of significant advantage compared to conventional methods. Copyright © 2014 Elsevier Ltd. All rights reserved.
Individual-based modelling of population growth and diffusion in discrete time.
Tkachenko, Natalie; Weissmann, John D; Petersen, Wesley P; Lake, George; Zollikofer, Christoph P E; Callegari, Simone
2017-01-01
Individual-based models (IBMs) of human populations capture spatio-temporal dynamics using rules that govern the birth, behavior, and death of individuals. We explore a stochastic IBM of logistic growth-diffusion with constant time steps and independent, simultaneous actions of birth, death, and movement that approaches the Fisher-Kolmogorov model in the continuum limit. This model is well-suited to parallelization on high-performance computers. We explore its emergent properties with analytical approximations and numerical simulations in parameter ranges relevant to human population dynamics and ecology, and reproduce continuous-time results in the limit of small transition probabilities. Our model predicts that the population density and dispersal speed are affected by fluctuations in the number of individuals. The discrete-time model displays novel properties owing to the binomial character of the fluctuations: in certain regimes of the growth model, a decrease in time step size drives the system away from the continuum limit. These effects are especially important at local population sizes of <50 individuals, which largely correspond to group sizes of hunter-gatherers. As an application scenario, we model the late Pleistocene dispersal of Homo sapiens into the Americas, and discuss how well model-based estimates of first-arrival dates agree with archaeological dates as a function of IBM parameter settings.
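A one-cell sketch of the discrete-time stochastic logistic model (omitting the diffusion component) illustrates the binomial birth-death fluctuations discussed above; parameter values are illustrative.

```python
import random

# One-cell stochastic logistic model: each individual independently gives
# birth or dies per time step, with density-dependent mortality. The
# deterministic limit is logistic growth toward carrying capacity K.

def step(n, rng, r=0.1, K=50, dt=1.0):
    """Binomial birth/death update for a local population of size n."""
    p_birth = min(1.0, r * dt)
    p_death = min(1.0, r * dt * n / K)  # density-dependent mortality
    births = sum(rng.random() < p_birth for _ in range(n))
    deaths = sum(rng.random() < p_death for _ in range(n))
    return max(0, n + births - deaths)

rng = random.Random(0)
n = 5
trajectory = [n]
for _ in range(400):
    n = step(n, rng)
    trajectory.append(n)
```

At the small population sizes highlighted in the abstract (<50 individuals), the binomial noise visible in such a trajectory materially shifts density and dispersal speed away from the continuum prediction.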
Estimating Model Probabilities using Thermodynamic Markov Chain Monte Carlo Methods
NASA Astrophysics Data System (ADS)
Ye, M.; Liu, P.; Beerli, P.; Lu, D.; Hill, M. C.
2014-12-01
Markov chain Monte Carlo (MCMC) methods are widely used to evaluate model probability for quantifying model uncertainty. In the general procedure, MCMC simulations are first conducted for each individual model, and the MCMC parameter samples are then used to approximate the marginal likelihood of the model by calculating the geometric mean of the joint likelihood of the model and its parameters. It has been found that this geometric-mean method suffers from a numerically low convergence rate. A simple test case shows that even millions of MCMC samples are insufficient to yield an accurate estimate of the marginal likelihood. To resolve this problem, a thermodynamic method is used that conducts multiple MCMC runs with different values of a heating coefficient between zero and one. When the heating coefficient is zero, the MCMC run is equivalent to a random walk MC in the prior parameter space; when the heating coefficient is one, the MCMC run is the conventional one. For a simple case with an analytical form of the marginal likelihood, the thermodynamic method yields a more accurate estimate than the geometric-mean method. This is also demonstrated for a case of groundwater modeling with four alternative models postulated based on different conceptualizations of a confining layer. This groundwater example shows that model probabilities estimated using the thermodynamic method are more reasonable than those obtained using the geometric method. The thermodynamic method is general and can be used for a wide range of environmental problems for model uncertainty quantification.
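The thermodynamic ("power posterior") approach can be sketched on a toy conjugate model with a known marginal likelihood: run a Metropolis sampler targeting p(θ)·L(θ)^β at several heating coefficients β, then integrate the mean log-likelihood over β. The model and sampler settings below are illustrative.

```python
import math
import random

# Thermodynamic integration for a toy model with an analytic answer:
# prior theta ~ N(0,1), one datum y = 1 with y|theta ~ N(theta,1),
# so the marginal likelihood is N(y; 0, 2).

Y = 1.0

def log_lik(theta):
    return -0.5 * math.log(2 * math.pi) - 0.5 * (Y - theta) ** 2

def log_prior(theta):
    return -0.5 * math.log(2 * math.pi) - 0.5 * theta ** 2

def mean_loglik_at(beta, rng, n_iter=20000):
    """Average log-likelihood under the power posterior p(theta)*L(theta)^beta."""
    theta, total = 0.0, 0.0
    for _ in range(n_iter):
        prop = theta + rng.gauss(0.0, 1.0)
        log_ratio = (log_prior(prop) + beta * log_lik(prop)
                     - log_prior(theta) - beta * log_lik(theta))
        if log_ratio >= 0 or rng.random() < math.exp(log_ratio):
            theta = prop
        total += log_lik(theta)
    return total / n_iter

rng = random.Random(7)
betas = [k / 10 for k in range(11)]  # heating coefficients from 0 to 1
means = [mean_loglik_at(b, rng) for b in betas]
# trapezoid rule: log Z = integral over beta of E_beta[log L]
log_Z = sum((means[i] + means[i + 1]) / 2 * (betas[i + 1] - betas[i])
            for i in range(len(betas) - 1))
analytic = -0.5 * math.log(2 * math.pi * 2.0) - Y ** 2 / 4.0
```

At β = 0 the sampler walks the prior and at β = 1 it is the conventional posterior run, exactly as described in the abstract; the averaged log-likelihoods along the ladder connect the two.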
Escos, J.; Alados, C.L.; Emlen, John M.
1994-01-01
A stage-class population model with density-feedback term included was used to identify the most critical parameters determining the population dynamics of female Spanish ibex (Capra pyrenaica) in southern Spain. A population in the Cazorla and Segura mountains is rapidly declining, but the eastern Sierra Nevada population is growing. The stable population density obtained using estimated values of kid and adult survival (0.49 and 0.87, respectively) and with fecundity equal to 0.367 in the absence of density feedback is 12.7 or 16.82 individuals/km2, based on a non-time-lagged and a time-lagged model, respectively. Given the maximum estimate of fecundity and an adult survival rate of 0.87, a kid survival rate of at least 0.41 is required to avoid extinction. At the minimum fecundity estimate, kid survival would have to exceed 0.52. Elasticities were used to estimate the influence of variation in life-cycle parameters on the intrinsic rate of increase. Adult survival is the most critical parameter, while fecundity and juvenile survival are less important. An increase in adult survival from 0.87 to 0.91 in the Cazorla and Segura mountains population would almost stabilize the population in the absence of stochastic variation, while the same increase in the Sierra Nevada population would yield population growth of 4–5% per annum. A reduction in adult survival to 0.83 results in population decline in both cases.
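A two-stage (kid/adult) projection-matrix sketch using the survival and fecundity values quoted above shows how elasticities are obtained from the left and right eigenvectors. The published model's full stage structure and density-feedback term are omitted, so this is a simplified illustration, not a reproduction of the study's model.

```python
# Two-stage projection matrix with the quoted vital rates:
# kid survival 0.49, adult survival 0.87, fecundity 0.367.

A = [[0.0, 0.367],   # fecundity enters the kid row
     [0.49, 0.87]]   # kids mature into adults; adults survive

def power_iteration(M, iters=200):
    """Dominant eigenvalue and eigenvector of a positive 2x2 matrix."""
    w, lam = [1.0, 1.0], 1.0
    for _ in range(iters):
        w2 = [M[0][0] * w[0] + M[0][1] * w[1],
              M[1][0] * w[0] + M[1][1] * w[1]]
        lam = max(abs(x) for x in w2)
        w = [x / lam for x in w2]
    return lam, w

lam, w = power_iteration(A)                       # lambda, right eigenvector
AT = [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]
_, v = power_iteration(AT)                        # left eigenvector
dot = v[0] * w[0] + v[1] * w[1]
# elasticity e_ij = a_ij * v_i * w_j / (lambda * v.w); elasticities sum to 1
elasticity = [[A[i][j] * v[i] * w[j] / (lam * dot) for j in range(2)]
              for i in range(2)]
```

Consistent with the abstract's conclusion, the adult-survival entry dominates the elasticity matrix in this simplified two-stage version.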
NASA Astrophysics Data System (ADS)
Aioanei, Daniel; Samorì, Bruno; Brucale, Marco
2009-12-01
Single molecule force spectroscopy (SMFS) is extensively used to characterize the mechanical unfolding behavior of individual protein domains under applied force by pulling chimeric polyproteins consisting of identical tandem repeats. Constant velocity unfolding SMFS data can be employed to reconstruct the protein unfolding energy landscape and kinetics. The methods applied so far require the specification of a single stretching force increase function, either theoretically derived or experimentally inferred, which must then be assumed to accurately describe the entirety of the experimental data. The very existence of a suitable optimal force model, even in the context of a single experimental data set, is still questioned. Herein, we propose a maximum likelihood (ML) framework for the estimation of protein kinetic parameters which can accommodate all the established theoretical force increase models. Our framework does not presuppose the existence of a single force characteristic function. Rather, it can be used with a heterogeneous set of functions, each describing the protein behavior in the stretching time range leading to one rupture event. We propose a simple way of constructing such a set of functions via piecewise linear approximation of the SMFS force vs time data and we prove the suitability of the approach both with synthetic data and experimentally. Additionally, when the spontaneous unfolding rate is the only unknown parameter, we find a correction factor that eliminates the bias of the ML estimator while also reducing its variance. Finally, we investigate which of several time-constrained experiment designs leads to better estimators.
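The piecewise treatment of the force history can be sketched under the Bell model, where the unfolding rate is k(F) = k0·exp(F·x/kT): the cumulative hazard is integrated numerically over the force-time curve (here a linear ramp, the simplest piecewise-linear case) and k0 is estimated by maximum likelihood on synthetic rupture times. All numeric values are illustrative assumptions, not parameters from the study.

```python
import math
import random

# ML sketch for the spontaneous unfolding rate k0 under the Bell model
# with a linear loading ramp F = R*t. Values are illustrative assumptions.

KT = 4.1        # thermal energy at room temperature, pN*nm
X = 0.4         # distance to the transition state, nm (assumed)
R = 100.0       # loading rate, pN/s (assumed)
K0_TRUE = 0.05  # spontaneous unfolding rate, 1/s (assumed)

def sample_rupture_time(rng):
    """Invert the cumulative hazard of the Bell model under F = R*t."""
    H = -math.log(1.0 - rng.random())  # Exp(1) cumulative hazard
    a = R * X / KT
    return math.log(1.0 + a * H / K0_TRUE) / a

def neg_log_lik(k0, times, dt=2e-3):
    """-log L: log-hazard at rupture minus hazard integrated (midpoint rule)."""
    nll, a = 0.0, R * X / KT
    for t in times:
        steps = max(1, int(t / dt))
        h = t / steps
        cum_hazard = sum(k0 * math.exp(a * (i + 0.5) * h) * h
                         for i in range(steps))
        nll -= math.log(k0) + a * t - cum_hazard
    return nll

rng = random.Random(3)
times = [sample_rupture_time(rng) for _ in range(200)]
grid = [0.01 * 1.3 ** i for i in range(20)]  # log-spaced candidate rates
k0_hat = min(grid, key=lambda k: neg_log_lik(k, times))
```

With a genuinely piecewise-linear force record, the same numerical hazard integral simply sums over the linear segments of each trace, which is the construction the abstract proposes.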
Whittington, Jesse; Sawaya, Michael A
2015-01-01
Capture-recapture studies are frequently used to monitor the status and trends of wildlife populations. Detection histories from individual animals are used to estimate probability of detection and abundance or density. The accuracy of abundance and density estimates depends on the ability to model factors affecting detection probability. Non-spatial capture-recapture models have recently evolved into spatial capture-recapture models that directly include the effect of distances between an animal's home range centre and trap locations on detection probability. Most studies comparing non-spatial and spatial capture-recapture biases focussed on single year models and no studies have compared the accuracy of demographic parameter estimates from open population models. We applied open population non-spatial and spatial capture-recapture models to three years of grizzly bear DNA-based data from Banff National Park and simulated data sets. The two models produced similar estimates of grizzly bear apparent survival, per capita recruitment, and population growth rates but the spatial capture-recapture models had better fit. Simulations showed that spatial capture-recapture models produced more accurate parameter estimates with better credible interval coverage than non-spatial capture-recapture models. Non-spatial capture-recapture models produced negatively biased estimates of apparent survival and positively biased estimates of per capita recruitment. The spatial capture-recapture grizzly bear population growth rates and 95% highest posterior density averaged across the three years were 0.925 (0.786-1.071) for females, 0.844 (0.703-0.975) for males, and 0.882 (0.779-0.981) for females and males combined. The non-spatial capture-recapture population growth rates were 0.894 (0.758-1.024) for females, 0.825 (0.700-0.948) for males, and 0.863 (0.771-0.957) for both sexes. 
The combination of low densities, low reproductive rates, and predominantly negative population growth rates suggest that Banff National Park's population of grizzly bears requires continued conservation-oriented management actions.
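The distance-dependent detection probability that distinguishes spatial from non-spatial capture-recapture models is commonly a half-normal function of the distance between a home-range centre and a trap. A sketch with illustrative values (not estimates from the grizzly bear study):

```python
import math

# Half-normal detection function used in spatial capture-recapture:
# baseline detection g0 at the home-range centre, spatial scale sigma.

def detection_prob(d, g0=0.2, sigma=2.0):
    """Per-occasion detection probability at distance d (km) from a trap."""
    return g0 * math.exp(-d * d / (2.0 * sigma * sigma))

def p_detected_at_least_once(centre, traps, nights=5, g0=0.2, sigma=2.0):
    """Probability an individual is caught at least once across traps and nights."""
    p_miss = 1.0
    for (tx, ty) in traps:
        d = math.hypot(centre[0] - tx, centre[1] - ty)
        p_miss *= (1.0 - detection_prob(d, g0, sigma)) ** nights
    return 1.0 - p_miss

traps = [(0.0, 0.0), (3.0, 0.0), (0.0, 3.0)]
p_near = p_detected_at_least_once((0.5, 0.5), traps)
p_far = p_detected_at_least_once((8.0, 8.0), traps)
```

Because animals far from the trap array are detected less often, ignoring this structure (as non-spatial models do) is one source of the biased survival and recruitment estimates reported above.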
Logistic regression of family data from retrospective study designs.
Whittemore, Alice S; Halpern, Jerry
2003-11-01
We wish to study the effects of genetic and environmental factors on disease risk, using data from families ascertained because they contain multiple cases of the disease. To do so, we must account for the way participants were ascertained, and for within-family correlations in both disease occurrences and covariates. We model the joint probability distribution of the covariates of ascertained family members, given family disease occurrence and pedigree structure. We describe two such covariate models: the random effects model and the marginal model. Both models assume a logistic form for the distribution of one person's covariates that involves a vector beta of regression parameters. The components of beta in the two models have different interpretations, and they differ in magnitude when the covariates are correlated within families. We describe ascertainment assumptions needed to consistently estimate the parameters beta(RE) in the random effects model and the parameters beta(M) in the marginal model. Under the ascertainment assumptions for the random effects model, we show that conditional logistic regression (CLR) of matched family data gives a consistent estimate of beta(RE) and a consistent estimate of its covariance matrix. Under the ascertainment assumptions for the marginal model, we show that unconditional logistic regression (ULR) gives a consistent estimate of beta(M), and we give a consistent estimator for its covariance matrix. The random effects/CLR approach is simple to use and to interpret, but it can use data only from families containing both affected and unaffected members. The marginal/ULR approach uses data from all individuals, but its variance estimates require special computations. A C program to compute these variance estimates is available at http://www.stanford.edu/dept/HRP/epidemiology.
We illustrate these pros and cons by application to data on the effects of parity on ovarian cancer risk in mother/daughter pairs, and use simulations to study the performance of the estimates. Copyright 2003 Wiley-Liss, Inc.
NASA Technical Reports Server (NTRS)
Cummins, Phil R.; Wahr, John M.
1993-01-01
In this study we consider the influence of the earth's free core nutation (FCN) on diurnal tidal admittance estimates for 11 stations of the globally distributed International Deployment of Accelerometers network. The FCN causes a resonant enhancement of the diurnal admittances which can be used to estimate some properties of the FCN. Estimates of the parameters describing the FCN (period, Q, and resonance strength) are made using data from individual stations and from many stations simultaneously. These yield a period of 423-452 sidereal days, which is shorter than theory predicts but is in agreement with many previous studies and suggests that the dynamical ellipticity of the core may be greater than its hydrostatic value.
Dresch, Jacqueline M; Liu, Xiaozhou; Arnosti, David N; Ay, Ahmet
2010-10-24
Quantitative models of gene expression generate parameter values that can shed light on biological features such as transcription factor activity, cooperativity, and local effects of repressors. An important element in such investigations is sensitivity analysis, which determines how strongly a model's output reacts to variations in parameter values. Parameters of low sensitivity may not be accurately estimated, leading to unwarranted conclusions. Low sensitivity may reflect the nature of the biological data, or it may be a result of the model structure. Here, we focus on the analysis of thermodynamic models, which have been used extensively to analyze gene transcription. Extracted parameter values have been interpreted biologically, but until now little attention has been given to parameter sensitivity in this context. We apply local and global sensitivity analyses to two recent transcriptional models to determine the sensitivity of individual parameters. We show that in one case, values for repressor efficiencies are very sensitive, while values for protein cooperativities are not, and provide insights into why these differential sensitivities stem from both biological effects and the structure of the applied models. In a second case, we demonstrate that parameters that were thought to prove the system's dependence on activator-activator cooperativity are relatively insensitive. We show that there are numerous parameter sets that do not satisfy the relationships proffered as the optimal solutions, indicating that structural differences between the two types of transcriptional enhancers analyzed may not be as simple as altered activator cooperativity. Our results emphasize the need for sensitivity analysis to examine model construction and forms of biological data used for modeling transcriptional processes, in order to determine the significance of estimated parameter values for thermodynamic models.
Knowledge of parameter sensitivities can provide the necessary context to determine how modeling results should be interpreted in biological systems.
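Local sensitivity analysis of a thermodynamic occupancy model can be sketched with normalized sensitivity coefficients, (p/f)·∂f/∂p, computed by finite differences. The toy one-activator, one-repressor model and the parameter values below are illustrative, not the models analyzed in the study.

```python
# Normalized local sensitivities for a toy thermodynamic model of
# transcription: fractional occupancy by an activator, quenched by a
# repressor. Model form and values are illustrative assumptions.

def expression(K_act, K_rep, act=1.0, rep=1.0):
    """Fractional occupancy with one activator and one quenching repressor."""
    bound = K_act * act / (1.0 + K_act * act)
    quench = 1.0 / (1.0 + K_rep * rep)
    return bound * quench

def normalized_sensitivity(f, params, name, eps=1e-6):
    """(p/f) * df/dp by central finite differences on parameter `name`."""
    base = f(**params)
    p_hi = dict(params); p_hi[name] *= (1 + eps)
    p_lo = dict(params); p_lo[name] *= (1 - eps)
    deriv = (f(**p_hi) - f(**p_lo)) / (2 * eps * params[name])
    return params[name] * deriv / base

params = {"K_act": 5.0, "K_rep": 2.0}
s_act = normalized_sensitivity(expression, params, "K_act")
s_rep = normalized_sensitivity(expression, params, "K_rep")
```

In this toy case the analytic values are s_act = 1/(1+K_act) and s_rep = -K_rep/(1+K_rep): near saturation the output barely constrains the activator affinity while remaining sensitive to the repressor, echoing the kind of differential sensitivity the study reports.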
Curtin, A.J.; Zug, G.R.; Medica, P.A.; Spotila, J.R.
2008-01-01
Eight desert tortoises Gopherus agassizii from a long-term mark-recapture study in the Mojave Desert, Nevada, USA, afforded an opportunity to examine the accuracy of skeletochronological age estimation on tortoises from a seasonal, yet environmentally erratic environment. These 8 tortoises were marked as hatchlings or within the first 2 yr of life, and their carcasses were salvaged from predator kills. Using a blind protocol, 2 skeletochronological protocols (correction-factor and ranking) provided age estimates for a set of 4 bony elements (humerus, scapula, femur, ilium) from these tortoises of known age. The age at death of the tortoises ranged from 15 to 50 yr. The most accurate protocol - ranking using the growth layers within each of the 4 elements - provided estimates from 21 to 47 yr, with the highest accuracy from the ilia. The results indicate that skeletochronological age estimation provides a reasonably accurate method for assessing the age at death of desert tortoises and, if used with a large sample of individuals, will provide a valuable tool for examining age-related mortality parameters in desert tortoises and likely in other gopher tortoises (Gopherus). © Inter-Research 2008.
Schmidt, Sven; Schramm, Danilo; Ribbecke, Sebastian; Schulz, Ronald; Wittschieber, Daniel; Olze, Andreas; Vieth, Volker; Ramsthaler, H Frank; Püschel, Klaus; Pfeiffer, Heidi; Geserick, Gunther; Schmeling, Andreas
2016-01-01
The dramatic rise in the number of refugees entering Germany means that age estimation for juveniles and young adults whose age is unclear but relevant to legal and official procedures has become more important than ever. Until now, whether and to what extent the combination of methods recommended by the Study Group on Forensic Age Diagnostics has resulted in a reduction of the range of scatter of the summarized age diagnosis has been unclear. Hand skeletal age, third molar mineralization stage and ossification stage of the medial clavicular epiphyses were determined for 307 individuals aged between 10 and 29 at time of death on whom autopsies were performed at the Institutes of Legal Medicine in Berlin, Frankfurt am Main and Hamburg between 2001 and 2011. To measure the range of scatter, linear regression analysis was used to calculate the standard error of estimate for each of the above methods individually and in combination. It was found that combining the above methods led to a reduction in the range of scatter. Due to various limitations of the study, the statistical parameters determined cannot, however, be used for age estimation practice.
Ho, Tiffany C; Zhang, Shunan; Sacchet, Matthew D; Weng, Helen; Connolly, Colm G; Henje Blom, Eva; Han, Laura K M; Mobayed, Nisreen O; Yang, Tony T
2016-01-01
While the extant literature has focused on major depressive disorder (MDD) as being characterized by abnormalities in processing affective stimuli (e.g., facial expressions), little is known regarding which specific aspects of cognition influence the evaluation of affective stimuli, and what are the underlying neural correlates. To investigate these issues, we assessed 26 adolescents diagnosed with MDD and 37 well-matched healthy controls (HCL) who completed an emotion identification task of dynamically morphing faces during functional magnetic resonance imaging (fMRI). We analyzed the behavioral data using a sequential sampling model of response time (RT) commonly used to elucidate aspects of cognition in binary perceptual decision making tasks: the Linear Ballistic Accumulator (LBA) model. Using a hierarchical Bayesian estimation method, we obtained group-level and individual-level estimates of LBA parameters on the facial emotion identification task. While the MDD and HCL groups did not differ in mean RT, accuracy, or group-level estimates of perceptual processing efficiency (i.e., drift rate parameter of the LBA), the MDD group showed significantly reduced responses in left fusiform gyrus compared to the HCL group during the facial emotion identification task. Furthermore, within the MDD group, fMRI signal in the left fusiform gyrus during affective face processing was significantly associated with greater individual-level estimates of perceptual processing efficiency. Our results therefore suggest that affective processing biases in adolescents with MDD are characterized by greater perceptual processing efficiency of affective visual information in sensory brain regions responsible for the early processing of visual information. The theoretical, methodological, and clinical implications of our results are discussed. PMID:26869950
Marginal and Random Intercepts Models for Longitudinal Binary Data With Examples From Criminology.
Long, Jeffrey D; Loeber, Rolf; Farrington, David P
2009-01-01
Two models for the analysis of longitudinal binary data are discussed: the marginal model and the random intercepts model. In contrast to the linear mixed model (LMM), the two models for binary data are not subsumed under a single hierarchical model. The marginal model provides group-level information whereas the random intercepts model provides individual-level information including information about heterogeneity of growth. It is shown how a type of numerical averaging can be used with the random intercepts model to obtain group-level information, thus approximating individual and marginal aspects of the LMM. The types of inferences associated with each model are illustrated with longitudinal criminal offending data based on N = 506 males followed over a 22-year period. Violent offending indexed by official records and self-report were analyzed, with the marginal model estimated using generalized estimating equations and the random intercepts model estimated using maximum likelihood. The results show that the numerical averaging based on the random intercepts can produce prediction curves almost identical to those obtained directly from the marginal model parameter estimates. The results provide a basis for contrasting the models and the estimation procedures and key features are discussed to aid in selecting a method for empirical analysis.
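The numerical averaging described above can be sketched in a few lines. This is a minimal illustration with invented coefficients, not the fitted offending model: the subject-specific logistic curve of a random-intercepts model is averaged over Monte Carlo draws from the intercept distribution, which yields the attenuated population-averaged probability that a marginal model would estimate directly.

```python
import numpy as np

def subject_probability(x, beta0, beta1, u):
    """Subject-specific P(y=1 | x, u) for a random-intercepts logit."""
    return 1.0 / (1.0 + np.exp(-(beta0 + u + beta1 * x)))

def marginal_probability(x, beta0, beta1, sigma_u, n_draws=100_000, seed=0):
    """Numerically average subject curves over u ~ N(0, sigma_u^2)."""
    rng = np.random.default_rng(seed)
    u = rng.normal(0.0, sigma_u, n_draws)
    return subject_probability(x, beta0, beta1, u).mean()

# The marginal curve is attenuated relative to the median subject (u = 0):
p_subject = subject_probability(1.0, beta0=-1.0, beta1=0.5, u=0.0)
p_marginal = marginal_probability(1.0, beta0=-1.0, beta1=0.5, sigma_u=2.0)
```

Because the logistic function is nonlinear, the averaged curve is pulled toward 0.5 relative to the median subject's curve, which is why marginal and subject-specific coefficients differ for binary data even though they coincide in the linear mixed model.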
Yong, Alan K.; Hough, Susan E.; Iwahashi, Junko; Braverman, Amy
2012-01-01
We present an approach based on geomorphometry to predict material properties and characterize site conditions using the VS30 parameter (time‐averaged shear‐wave velocity to a depth of 30 m). Our framework consists of an automated terrain classification scheme based on taxonomic criteria (slope gradient, local convexity, and surface texture) that systematically identifies 16 terrain types from 1‐km spatial resolution (30 arcsec) Shuttle Radar Topography Mission digital elevation models (SRTM DEMs). Using 853 VS30 values from California, we apply a simulation‐based statistical method to determine the mean VS30 for each terrain type in California. We then compare the VS30 values with models based on individual proxies, such as mapped surface geology and topographic slope, and show that our systematic terrain‐based approach consistently performs better than semiempirical estimates based on individual proxies. To further evaluate our model, we apply our California‐based estimates to terrains of the contiguous United States. Comparisons of our estimates with 325 VS30 measurements outside of California, as well as estimates based on the topographic slope model, indicate our method to be statistically robust and more accurate. Our approach thus provides an objective and robust method for extending estimates of VS30 for regions where in situ measurements are sparse or not readily available.
NASA Astrophysics Data System (ADS)
Daniell, James; Wenzel, Friedemann
2014-05-01
A review of over 200 fatality models over the past 50 years for earthquake loss estimation from various authors has identified key parameters that influence fatality estimation in each of these models. These are often very specific and cannot be readily adapted globally. In the doctoral dissertation of the author, a new method is used for regression of fatalities to intensity using loss functions based not only on fatalities, but also using population models and other socioeconomic parameters created through time for every country worldwide for the period 1900-2013. A calibration of functions was undertaken from 1900-2008, and each individual quake analysed from 2009-2013 in real-time, in conjunction with www.earthquake-report.com. Using the CATDAT Damaging Earthquakes Database containing socioeconomic loss information for 7208 damaging earthquake events from 1900-2013 including disaggregation of secondary effects, fatality estimates for over 2035 events have been re-examined from 1900-2013. In addition, 99 of these events have detailed data for the individual cities and towns or have been reconstructed to create a death rate as a percentage of population. Many historical isoseismal maps and macroseismic intensity datapoint surveys collected globally, have been digitised and modelled covering around 1353 of these 2035 fatal events, to include an estimate of population, occupancy and socioeconomic climate at the time of the event at each intensity bracket. In addition, 1651 events without fatalities but causing damage have also been examined in this way. The production of socioeconomic and engineering indices such as HDI and building vulnerability has been undertaken on a country-level and state/province-level leading to a dataset allowing regressions not only using a static view of risk, but also allowing for the change in the socioeconomic climate between the earthquake events to be undertaken. 
This means that a year 1920 event in a country, will not simply be regressed against a year 2000 event, but normalised. A global human development index (HDI) (life expectancy, education and income) was developed and collected for the first time from 1900-2013 globally on a country and province level allowing for a very useful parameter in the regression. In addition, the occupancy rate from the time of day that the event occurred, as well as population density and individual earthquake attributes like the existence of a foreshock were also examined for the 3004 events in the regression analysis. Where an event has not occurred in a country previously, a regionalisation strategy based on building typologies, seismic code index, building practice, climate, earthquake history and socioeconomic climate is proposed. The result is a set of "social fragility functions" calculating fatalities for use in any country worldwide using the parameters of macroseismic intensity, population, HDI, time of day and occupancy, that provide a robust accurate method, which has not only been calibrated to country level data but to town and city data through time. The estimates will continue to be used in conjunction with Earthquake Report, a non-profit worldwide earthquake reporting website and has shown very promising results from 2010-2013 for rapid estimates of fatalities globally.
Attitude determination and parameter estimation using vector observations - Theory
NASA Technical Reports Server (NTRS)
Markley, F. Landis
1989-01-01
Procedures for attitude determination based on Wahba's loss function are generalized to include the estimation of parameters other than the attitude, such as sensor biases. Optimization with respect to the attitude is carried out using the q-method, which does not require an a priori estimate of the attitude. Optimization with respect to the other parameters employs an iterative approach, which does require an a priori estimate of these parameters. Conventional state estimation methods require a priori estimates of both the parameters and the attitude, while the algorithm presented in this paper always computes the exact optimal attitude for given values of the parameters. Expressions for the covariance of the attitude and parameter estimates are derived.
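The attitude step of the procedure above can be sketched compactly with Davenport's q-method: build the K matrix from the weighted vector observations and take the eigenvector with the largest eigenvalue as the optimal quaternion. This NumPy sketch (scalar-last quaternion convention) covers only the attitude solution, not the iterative estimation of additional parameters such as biases, and is an illustration rather than the paper's implementation.

```python
import numpy as np

def q_method(body_vecs, ref_vecs, weights):
    """Davenport q-method: quaternion minimizing Wahba's loss function.

    body_vecs, ref_vecs: sequences of unit 3-vectors measured in the body
    frame and known in the reference frame; weights: positive scalars.
    Returns a unit quaternion [qx, qy, qz, qw] (scalar last) whose attitude
    matrix maps reference vectors into the body frame.
    """
    B = np.zeros((3, 3))                      # attitude profile matrix
    for w, b, r in zip(weights, body_vecs, ref_vecs):
        B += w * np.outer(b, r)
    sigma = np.trace(B)
    S = B + B.T
    z = np.array([B[1, 2] - B[2, 1], B[2, 0] - B[0, 2], B[0, 1] - B[1, 0]])
    K = np.zeros((4, 4))                      # Davenport's K matrix
    K[:3, :3] = S - sigma * np.eye(3)
    K[:3, 3] = z
    K[3, :3] = z
    K[3, 3] = sigma
    vals, vecs = np.linalg.eigh(K)            # eigenvalues in ascending order
    q = vecs[:, -1]                           # eigenvector of largest eigenvalue
    return q / np.linalg.norm(q)

def quat_to_matrix(q):
    """Attitude (direction-cosine) matrix of a scalar-last unit quaternion."""
    x, y, z, w = q
    return np.array([
        [1 - 2 * (y * y + z * z), 2 * (x * y + z * w), 2 * (x * z - y * w)],
        [2 * (x * y - z * w), 1 - 2 * (x * x + z * z), 2 * (y * z + x * w)],
        [2 * (x * z + y * w), 2 * (y * z - x * w), 1 - 2 * (x * x + y * y)],
    ])

# Two or more non-collinear observations determine the attitude uniquely:
theta = np.deg2rad(30.0)
A_true = np.array([[np.cos(theta), np.sin(theta), 0.0],
                   [-np.sin(theta), np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
refs = np.eye(3)                              # reference unit vectors
bodies = (A_true @ refs.T).T                  # perfect measurements
q_opt = q_method(bodies, refs, weights=[2.0, 1.0, 1.0])
A_est = quat_to_matrix(q_opt)
```

Note that no a priori attitude is needed: the eigendecomposition yields the global optimum directly, which is the property the paper exploits when iterating only over the non-attitude parameters.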
Modelling probabilities of heavy precipitation by regional approaches
NASA Astrophysics Data System (ADS)
Gaal, L.; Kysely, J.
2009-09-01
Extreme precipitation events are associated with large negative consequences for human society, mainly as they may trigger floods and landslides. The recent series of flash floods in central Europe (affecting several isolated areas) on June 24-28, 2009, the worst one over several decades in the Czech Republic as to the number of persons killed and the extent of damage to buildings and infrastructure, is an example. Estimates of growth curves and design values (corresponding e.g. to 50-yr and 100-yr return periods) of precipitation amounts, together with their uncertainty, are important in hydrological modelling and other applications. The interest in high quantiles of precipitation distributions is also related to possible climate change effects, as climate model simulations tend to project increased severity of precipitation extremes in a warmer climate. The present study compares - in terms of Monte Carlo simulation experiments - several methods for modelling probabilities of precipitation extremes that make use of ‘regional approaches’: the estimation of distributions of extremes takes into account data in a ‘region’ (‘pooling group’), in which one may assume that the distributions at individual sites are identical apart from a site-specific scaling factor (the condition is referred to as ‘regional homogeneity’). In other words, all data in a region - often weighted in some way - are taken into account when estimating the probability distribution of extremes at a given site. The advantage is that sampling variations in the estimates of model parameters and high quantiles are to a large extent reduced compared to the single-site analysis. We focus on the ‘region-of-influence’ (ROI) method, which is based on the identification of unique pooling groups (forming the database for the estimation) for each site under study. The similarity of sites is evaluated in terms of a set of site attributes related to the distributions of extremes.
The issue of the size of the region is linked with a built-in test on regional homogeneity of data. Once a pooling group is delineated, weights based on a dissimilarity measure are assigned to individual sites involved in a pooling group, and all (weighted) data are employed in the estimation of model parameters and high quantiles at a given location. The ROI method is compared with the Hosking-Wallis (HW) regional frequency analysis, which is based on delineating fixed regions (instead of flexible pooling groups) and assigning unit weights to all sites in a region. The comparison of the performance of the individual regional models makes use of data on annual maxima of 1-day precipitation amounts at 209 stations covering the Czech Republic, with altitudes ranging from 150 to 1490 m a.s.l. We conclude that the ROI methodology is superior to the HW analysis, particularly for very high quantiles (100-yr return values). Another advantage of the ROI approach is that subjective decisions - unavoidable when fixed regions in the HW analysis are formed - may efficiently be suppressed, and almost all settings of the ROI method may be justified by results of the simulation experiments. The differences between (any) regional method and single-site analysis are very pronounced and suggest that the at-site estimation is highly unreliable. The ROI method is then applied to estimate high quantiles of precipitation amounts at individual sites. The estimates and their uncertainty are compared with those from a single-site analysis. We focus on the eastern part of the Czech Republic, i.e. an area with complex orography and a particularly pronounced role of Mediterranean cyclones in producing precipitation extremes. The design values are compared with precipitation amounts recorded during the recent heavy precipitation events, including the one associated with the flash flood on June 24, 2009. 
We also show that the ROI methodology may easily be transferred to the analysis of precipitation extremes in climate model outputs. It efficiently reduces (random) variations in the estimates of parameters of the extreme value distributions in individual gridboxes that result from large spatial variability of heavy precipitation, and represents a straightforward tool for ‘weighting’ data from neighbouring gridboxes within the estimation procedure. The study is supported by the Grant Agency of AS CR under project B300420801.
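The pooling idea shared by the ROI and Hosking-Wallis approaches can be illustrated with a minimal index-value sketch. All data and distribution parameters below are synthetic, and unit weights are used for brevity where the ROI method would weight sites by dissimilarity: each site's annual maxima are rescaled by their at-site mean, the dimensionless data are pooled, one GEV growth curve is fitted, and the regional quantile is rescaled by the target site's mean.

```python
import numpy as np
from scipy.stats import genextreme

def regional_quantile(site_maxima, target_site, return_period=100.0):
    """Index-value estimate of the T-year quantile at one site.

    site_maxima: dict {site_id: array of annual precipitation maxima}.
    Each record is made dimensionless by its at-site mean, the scaled data
    are pooled, a single GEV growth curve is fitted to the pool, and the
    regional quantile is rescaled by the target site's mean.
    """
    pooled = np.concatenate([x / np.mean(x) for x in site_maxima.values()])
    shape, loc, scale = genextreme.fit(pooled)
    growth = genextreme.ppf(1.0 - 1.0 / return_period, shape, loc=loc, scale=scale)
    return growth * np.mean(site_maxima[target_site])

# Synthetic homogeneous pooling group: four sites differing only in scale
rng = np.random.default_rng(42)
sites = {
    f"site{i}": index * genextreme.rvs(-0.1, loc=0.9, scale=0.25,
                                       size=40, random_state=rng)
    for i, index in enumerate([40.0, 55.0, 70.0, 85.0])
}
q100 = regional_quantile(sites, "site0")
```

Pooling 160 scaled observations instead of 40 at-site values is what shrinks the sampling variability of the fitted shape parameter and hence of the 100-yr quantile, which is the advantage the abstract attributes to regional approaches.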
Riedel, Michael; Collett, Timothy S.; Kim, H.-S.; Bahk, J.-J.; Kim, J.-H.; Ryu, B.-J.; Kim, G.-Y.
2013-01-01
Gas hydrate saturation estimates were obtained from an Archie-analysis of the Logging-While-Drilling (LWD) electrical resistivity logs under consideration of the regional geological framework of sediment deposition in the Ulleung Basin, East Sea, of Korea. Porosity was determined from the LWD bulk density log and core-derived values of grain density. In situ measurements of pore-fluid salinity as well as formation temperature define a background trend for pore-fluid resistivity at each drill site. The LWD data were used to define sets of empirical Archie-constants for different depth-intervals of the logged borehole at all sites drilled during the second Ulleung Basin Gas Hydrate Drilling Expedition (UBGH2). A clustering of data with distinctly different trend-lines is evident in the cross-plot of porosity and formation factor for all sites drilled during UBGH2. The reason for the clustering is related to the difference between hemipelagic sediments (mostly covering the top ∼100 mbsf) and mass-transport deposits (MTD) and/or the occurrence of biogenic opal. For sites located in the north-eastern portion of the Ulleung Basin a set of individual Archie-parameters for a shallow depth interval (hemipelagic) and a deeper MTD zone was achieved. The deeper zone shows typically higher resistivities for the same range of porosities seen in the upper zone, reflecting a shift in sediment properties. The presence of large amounts of biogenic opal (up to and often over 50% as defined by XRD data) was especially observed at Sites UBGH2-2_1 and UBGH2-2_2 (as well as UBGH1-9 from a previous drilling expedition in 2007). The boundary between these two zones can also easily be identified in gamma-ray logs, which also show unusually low readings in the opal-rich interval. 
Only by incorporating different Archie-parameters for the different zones was a reasonable estimate of gas hydrate saturation achieved, one that also matches results from other techniques such as pore-fluid freshening, velocity-based calculations, and pressure-core degassing experiments. Seismically, individual boundaries between zones were determined using a grid of regional 2D seismic data. Zoning from the Archie-analysis for sites in the south-western portion of the Ulleung Basin was also observed, but at these sites it is linked to individually stacked MTDs only and does not reflect a mineralogical occurrence of biogenic opal or hemipelagic sedimentation. The individual MTD events represent differently compacted material often associated with a strong decrease in porosity (and increase in density), warranting a separate set of empirical Archie-parameters.
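Once zone-specific constants are chosen, the Archie workflow described above reduces to a short calculation. The sketch below uses the standard Archie relation with purely illustrative constants (the values of a, m, n, porosity, and resistivity here are hypothetical, not the UBGH2 calibration): water saturation follows from porosity and the ratio of measured to water-saturated resistivity, and hydrate saturation is its complement.

```python
def hydrate_saturation(Rt, Rw, phi, a, m, n):
    """Gas hydrate saturation from Archie's law.

    Rt: measured formation resistivity (ohm-m); Rw: pore-water resistivity;
    phi: porosity (fraction); a, m, n: empirical Archie constants, fitted
    separately per depth zone (hemipelagic vs. MTD/opal-rich) in the study.
    """
    Sw = ((a * Rw) / (phi ** m * Rt)) ** (1.0 / n)   # water saturation
    return 1.0 - min(Sw, 1.0)                        # hydrate fills the rest

# Illustrative values only (not the UBGH2 calibration)
a, m, n = 1.4, 1.76, 2.0
phi, Rw = 0.55, 0.35
R0 = a * Rw / phi ** m          # expected resistivity if fully water-saturated
sh_background = hydrate_saturation(R0, Rw, phi, a, m, n)
sh_elevated = hydrate_saturation(4.0 * R0, Rw, phi, a, m, n)
```

With n = 2, a fourfold resistivity increase over the water-saturated baseline implies a water saturation of 0.5 and hence a hydrate saturation of 0.5; using the wrong zone's (a, m) shifts the baseline R0 and biases the estimate, which is why the zoning matters.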
Optimal input shaping for Fisher identifiability of control-oriented lithium-ion battery models
NASA Astrophysics Data System (ADS)
Rothenberger, Michael J.
This dissertation examines the fundamental challenge of optimally shaping input trajectories to maximize parameter identifiability of control-oriented lithium-ion battery models. Identifiability is a property from information theory that determines the solvability of parameter estimation for mathematical models using input-output measurements. This dissertation creates a framework that exploits the Fisher information metric to quantify the level of battery parameter identifiability, optimizes this metric through input shaping, and facilitates faster and more accurate estimation. The popularity of lithium-ion batteries is growing significantly in the energy storage domain, especially for stationary and transportation applications. While these cells have excellent power and energy densities, they are plagued with safety and lifespan concerns. These concerns are often resolved in the industry through conservative current and voltage operating limits, which reduce the overall performance and still lack robustness in detecting catastrophic failure modes. New advances in automotive battery management systems mitigate these challenges through the incorporation of model-based control to increase performance, safety, and lifespan. To achieve these goals, model-based control requires accurate parameterization of the battery model. While many groups in the literature study a variety of methods to perform battery parameter estimation, a fundamental issue of poor parameter identifiability remains apparent for lithium-ion battery models. This fundamental challenge of battery identifiability is studied extensively in the literature, and some groups are even approaching the problem of improving the ability to estimate the model parameters. The first approach is to add additional sensors to the battery to gain more information that is used for estimation. 
The other main approach is to shape the input trajectories to increase the amount of information that can be gained from input-output measurements, and is the approach used in this dissertation. Research in the literature studies optimal current input shaping for high-order electrochemical battery models and focuses on offline laboratory cycling. While this body of research highlights improvements in identifiability through optimal input shaping, each optimal input is a function of nominal parameters, which creates a tautology. The parameter values must be known a priori to determine the optimal input for maximizing estimation speed and accuracy. The system identification literature presents multiple studies containing methods that avoid the challenges of this tautology, but these methods are absent from the battery parameter estimation domain. The gaps in the above literature are addressed in this dissertation through the following five novel and unique contributions. First, this dissertation optimizes the parameter identifiability of a thermal battery model, which Sergio Mendoza experimentally validates through a close collaboration with this dissertation's author. Second, this dissertation extends input-shaping optimization to a linear and nonlinear equivalent-circuit battery model and illustrates the substantial improvements in Fisher identifiability for a periodic optimal signal when compared against automotive benchmark cycles. Third, this dissertation presents an experimental validation study of the simulation work in the previous contribution. The estimation study shows that the automotive benchmark cycles either converge slower than the optimized cycle, or not at all for certain parameters. Fourth, this dissertation examines how automotive battery packs with additional power electronic components that dynamically route current to individual cells/modules can be used for parameter identifiability optimization. 
While the user and vehicle supervisory controller dictate the current demand for these packs, the optimized internal allocation of current still improves identifiability. Finally, this dissertation presents a robust Bayesian sequential input shaping optimization study to maximize the conditional Fisher information of the battery model parameters without prior knowledge of the nominal parameter set. This iterative algorithm only requires knowledge of the prior parameter distributions to converge to the optimal input trajectory.
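The central quantity in the framework above can be illustrated on a toy first-order model, a stand-in for an equivalent-circuit cell rather than any of the dissertation's models (all values below are illustrative). Under additive Gaussian measurement noise, the Fisher information of a parameter is the summed squared output sensitivity divided by the noise variance, and differently shaped inputs of the same length yield very different information.

```python
import numpy as np

def simulate(tau, u, dt=1.0):
    """Exact discretization of the first-order model y' = (u - y) / tau."""
    a = np.exp(-dt / tau)
    y = np.zeros(len(u))
    for k in range(1, len(u)):
        y[k] = a * y[k - 1] + (1.0 - a) * u[k - 1]
    return y

def fisher_information(tau, u, sigma=0.05, dtau=1e-6):
    """Scalar Fisher information of tau, with the output sensitivity
    computed by central finite differences."""
    s = (simulate(tau + dtau, u) - simulate(tau - dtau, u)) / (2.0 * dtau)
    return np.sum(s ** 2) / sigma ** 2

tau, n = 5.0, 200
f_rest = fisher_information(tau, np.zeros(n))   # no excitation: no information
f_step = fisher_information(tau, np.ones(n))    # one transient, then steady state
f_pulse = fisher_information(tau, np.sign(np.sin(2.0 * np.pi * np.arange(n) / 20.0)))
```

The square wave keeps re-exciting the transient that carries information about the time constant, so it accumulates more Fisher information than a single step of the same duration; an input-shaping optimizer searches over such trajectories to maximize (a scalarization of) the Fisher information matrix.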
Novel multireceiver communication systems configurations based on optimal estimation theory
NASA Technical Reports Server (NTRS)
Kumar, Rajendra
1992-01-01
A novel multireceiver configuration for carrier arraying and/or signal arraying is presented. The proposed configuration is obtained by formulating the carrier and/or signal arraying problem as an optimal estimation problem, and it consists of two stages. The first stage optimally estimates various phase processes received at different receivers with coupled phase-locked loops wherein the individual loops acquire and track their respective receivers' phase processes but are aided by each other in an optimal manner via LF error signals. The proposed configuration results in the minimization of the effective radio loss at the combiner output, and thus maximization of the energy per bit to noise power spectral density ratio is achieved. A novel adaptive algorithm for the estimator of the signal model parameters when these are not known a priori is also presented.
A model-based approach to sample size estimation in recent onset type 1 diabetes.
Bundy, Brian N; Krischer, Jeffrey P
2016-11-01
The area under the curve C-peptide following a 2-h mixed meal tolerance test from 498 individuals enrolled on five prior TrialNet studies of recent onset type 1 diabetes from baseline to 12 months after enrolment was modelled to produce estimates of its rate of loss and variance. Age at diagnosis and baseline C-peptide were found to be significant predictors, and adjusting for these in an ANCOVA resulted in estimates with lower variance. Using these results as planning parameters for new studies results in a nearly 50% reduction in the target sample size. The modelling also produces an expected C-peptide that can be used in observed versus expected calculations to estimate the presumption of benefit in ongoing trials. Copyright © 2016 John Wiley & Sons, Ltd.
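The roughly 50% sample-size reduction follows directly from the standard normal-approximation formula, in which the required n per arm scales with the residual variance. The sketch below uses made-up planning numbers purely to show the mechanics (the actual TrialNet planning parameters are in the paper): covariate adjustment that explains about half the variance halves the residual variance and thus roughly halves n.

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(sigma, delta, alpha=0.05, power=0.80):
    """Normal-approximation sample size for a two-arm comparison of means."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)
    z_beta = z(power)
    return ceil(2 * (sigma * (z_alpha + z_beta) / delta) ** 2)

# Hypothetical planning numbers: adjusting for age at diagnosis and baseline
# C-peptide is assumed to explain ~50% of the outcome variance.
sigma_unadj = 0.4
r_squared = 0.5
sigma_adj = sigma_unadj * (1 - r_squared) ** 0.5
n_unadj = n_per_arm(sigma_unadj, delta=0.2)
n_adj = n_per_arm(sigma_adj, delta=0.2)
```

Because n is proportional to sigma squared, the ratio of the two sample sizes tracks 1 minus the covariates' R-squared, which is the mechanism behind the reported reduction.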
Yobbi, D.K.
2000-01-01
A nonlinear least-squares regression technique for estimation of ground-water flow model parameters was applied to an existing model of the regional aquifer system underlying west-central Florida. The regression technique minimizes the differences between measured and simulated water levels. Regression statistics, including parameter sensitivities and correlations, were calculated for reported parameter values in the existing model. Optimal parameter values for selected hydrologic variables of interest are estimated by nonlinear regression. Optimal estimates of parameter values range from about 0.01 times to about 140 times the reported values. Independently estimating all parameters by nonlinear regression was impossible, given the existing zonation structure and number of observations, because of parameter insensitivity and correlation. Although the model yields parameter values similar to those estimated by other methods and reproduces the measured water levels reasonably accurately, a simpler parameter structure should be considered. Some possible ways of improving model calibration are to: (1) modify the defined parameter-zonation structure by omitting and/or combining parameters to be estimated; (2) carefully eliminate observation data based on evidence that they are likely to be biased; (3) collect additional water-level data; (4) assign values to insensitive parameters; and (5) estimate the most sensitive parameters first, then, using the optimized values for these parameters, estimate the entire data set.
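The regression technique can be sketched with a toy forward model standing in for the groundwater-flow simulator (the exponential head profile and every number below are hypothetical): a trust-region least-squares solver minimizes the head residuals, and the Jacobian at the optimum yields the parameter correlation that flags identifiability problems of the kind reported above.

```python
import numpy as np
from scipy.optimize import least_squares

def simulated_heads(params, x):
    """Toy forward model: head declines exponentially with distance."""
    h0, decay = params
    return h0 * np.exp(-decay * x)

def calibrate(x, observed, initial_guess):
    """Estimate parameters by minimizing head residuals.

    Returns the optimal parameters and the correlation between them,
    computed from the Jacobian at the solution; a correlation near +/-1
    signals that the two parameters cannot be estimated independently.
    """
    residuals = lambda p: simulated_heads(p, x) - observed
    fit = least_squares(residuals, initial_guess)
    cov = np.linalg.inv(fit.jac.T @ fit.jac)
    corr = cov[0, 1] / np.sqrt(cov[0, 0] * cov[1, 1])
    return fit.x, corr

x = np.linspace(0.0, 10.0, 25)
observed = simulated_heads([50.0, 0.3], x)      # noise-free synthetic "measurements"
params, corr = calibrate(x, observed, initial_guess=[30.0, 0.6])
```

With real data, strongly correlated or insensitive parameters make the normal-equations matrix nearly singular, which is exactly why the abstract recommends combining zones or fixing insensitive parameters before re-estimating.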
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xiong, Z; Vijayan, S; Rana, V
2015-06-15
Purpose: A system was developed that automatically calculates the organ and effective dose for individual fluoroscopically-guided procedures using a log of the clinical exposure parameters. Methods: We have previously developed a dose tracking system (DTS) to provide a real-time color-coded 3D- mapping of skin dose. This software produces a log file of all geometry and exposure parameters for every x-ray pulse during a procedure. The data in the log files is input into PCXMC, a Monte Carlo program that calculates organ and effective dose for projections and exposure parameters set by the user. We developed a MATLAB program to read data from the log files produced by the DTS and to automatically generate the definition files in the format used by PCXMC. The processing is done at the end of a procedure after all exposures are completed. Since there are thousands of exposure pulses with various parameters for fluoroscopy, DA and DSA and at various projections, the data for exposures with similar parameters is grouped prior to entry into PCXMC to reduce the number of Monte Carlo calculations that need to be performed. Results: The software developed automatically transfers data from the DTS log file to PCXMC and runs the program for each grouping of exposure pulses. When the doses from all exposure events are calculated, the doses for each organ and all effective doses are summed to obtain procedure totals. For a complicated interventional procedure, the calculations can be completed on a PC without manual intervention in less than 30 minutes depending on the level of data grouping. Conclusion: This system allows organ dose to be calculated for individual procedures for every patient without tedious calculations or data entry so that estimates of stochastic risk can be obtained in addition to the deterministic risk estimate provided by the DTS. Partial support from NIH grant R01EB002873 and Toshiba Medical Systems Corp.
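The grouping step can be sketched generically (the field names, bin widths, and log entries below are hypothetical, not the DTS log format): pulses with similar technique factors and geometry are binned, and their tube output is summed, so each bin needs only one Monte Carlo run whose per-mAs result is scaled by the bin's total mAs.

```python
from collections import defaultdict

def group_exposures(pulses, kvp_step=2.0, angle_step=5.0):
    """Bin exposure pulses by rounded kVp and gantry angle; sum mAs per bin.

    pulses: iterable of dicts with 'kvp', 'angle_deg', and 'mas' keys
    (hypothetical field names). One Monte Carlo run per bin then replaces
    one run per pulse.
    """
    bins = defaultdict(float)
    for p in pulses:
        key = (round(p["kvp"] / kvp_step) * kvp_step,
               round(p["angle_deg"] / angle_step) * angle_step)
        bins[key] += p["mas"]
    return dict(bins)

log = [
    {"kvp": 80.4, "angle_deg": 1.0, "mas": 2.0},
    {"kvp": 80.9, "angle_deg": 2.0, "mas": 2.5},   # merges with the pulse above
    {"kvp": 95.0, "angle_deg": 30.0, "mas": 1.0},  # distinct technique/geometry
]
groups = group_exposures(log)
```

Coarser bins mean fewer Monte Carlo runs but a cruder dose estimate, which is the trade-off behind the abstract's remark that run time depends on the level of data grouping.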
NASA Astrophysics Data System (ADS)
Corbin, A. E.; Timmermans, J.; Hauser, L.; Bodegom, P. V.; Soudzilovskaia, N. A.
2017-12-01
There is a growing demand for accurate land surface parameterization from remote sensing (RS) observations. This demand has not been satisfied, because most estimation schemes 1) apply a single-sensor, single-scale approach and 2) require specific key-variables to be `guessed'. This is because a single source rarely provides all the observational information required to accurately retrieve the parameters of interest. Consequently, many schemes assume specific variables to be constant or not present, subsequently leading to more uncertainty. In this aspect, the MULTIscale SENTINEL land surface information retrieval Platform (MULTIPLY) was created. MULTIPLY couples a variety of RS sources with Radiative Transfer Models (RTM) over varying spectral ranges using data-assimilation to estimate geophysical parameters. In addition, MULTIPLY also uses prior information about the land surface to constrain the retrieval problem. This research aims to improve the retrieval of plant biophysical parameters through the use of priors of biophysical parameters/plant traits. Of particular interest are traits (physical, morphological or chemical) affecting individual performance and fitness of species. Plant traits that are able to be retrieved via RS and with RTMs include traits such as leaf-pigments, leaf water, LAI, phenols, C/N, etc. In-situ data for plant traits that are retrievable via RS techniques were collected for a meta-analysis from databases such as TRY, Ecosis, and individual collaborators. Of particular interest are the following traits: chlorophyll, carotenoids, anthocyanins, phenols, leaf water, and LAI. ANOVA statistics were generated for each trait according to species, plant functional groups (such as evergreens, grasses, etc.), and the trait itself. Afterwards, traits were also compared using covariance matrices. Using these as priors, MULTIPLY was used to retrieve several plant traits at two validation sites, in the Netherlands (Speulderbos) and in Finland (Sodankylä).
Initial comparisons show significantly improved results over retrievals that do not use priors.
ONODA, Tomoaki; YAMAMOTO, Ryuta; SAWAMURA, Kyohei; MURASE, Harutaka; NAMBO, Yasuo; INOUE, Yoshinobu; MATSUI, Akira; MIYAKE, Takeshi; HIRAI, Nobuhiro
2014-01-01
We propose an approach to estimating individual growth curves based on the birthday information of Japanese Thoroughbred horses, with consideration of the seasonal compensatory growth that is a typical characteristic of seasonally breeding animals. The compensatory growth patterns appear only during the winter and spring seasons in the life of growing horses, and the meeting point between winter and spring depends on the birthday of each horse. We previously developed new growth curve equations for Japanese Thoroughbreds adjusting for compensatory growth. Based on the equations, a parameter denoting the birthday information was added for the modeling of the individual growth curves for each horse by shifting the meeting points in the compensatory growth periods. A total of 5,594 and 5,680 body weight and age measurements of Thoroughbred colts and fillies, respectively, and 3,770 withers height and age measurements of both sexes were used in the analyses. The results of predicted error difference and Akaike Information Criterion showed that individual growth curves using birthday information fit the body weight and withers height data better than curves without it. The individual growth curve for each horse would be a useful tool for the feeding management of young Japanese Thoroughbreds in compensatory growth periods. PMID:25013356
Demographic analysis from summaries of an age-structured population
Link, William A.; Royle, J. Andrew; Hatfield, Jeff S.
2003-01-01
Demographic analyses of age-structured populations typically rely on life history data for individuals, or when individual animals are not identified, on information about the numbers of individuals in each age class through time. While it is usually difficult to determine the age class of a randomly encountered individual, it is often the case that the individual can be readily and reliably assigned to one of a set of age classes. For example, it is often possible to distinguish first-year from older birds. In such cases, the population age structure can be regarded as a latent variable governed by a process prior, and the data as summaries of this latent structure. In this article, we consider the problem of uncovering the latent structure and estimating process parameters from summaries of age class information. We present a demographic analysis for the critically endangered migratory population of whooping cranes (Grus americana), based only on counts of first-year birds and of older birds. We estimate age and year-specific survival rates. We address the controversial issue of whether management action on the breeding grounds has influenced recruitment, relating recruitment rates to the number of seventh-year and older birds, and examining the pattern of variation through time in this rate.
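The idea of extracting demographic parameters from age-class summaries can be illustrated with a deterministic skeleton (the counts below are hypothetical, not the whooping crane data, and the Bayesian latent-variable machinery is omitted): if next year's older birds are the survivors of this year's total count, a common survival rate has a closed-form least-squares estimate.

```python
import numpy as np

# Hypothetical counts of first-year and older birds in successive years
first_year = np.array([8, 10, 7, 12, 9, 11])
older = np.array([60, 63, 68, 70, 76, 80])

# Deterministic skeleton: older[t+1] ~ s * (first_year[t] + older[t]).
# Regression through the origin gives the common survival rate s.
total = first_year[:-1] + older[:-1]
s_hat = np.sum(total * older[1:]) / np.sum(total * total)
```

The paper's model replaces this deterministic link with binomial survival of a latent age-structured population and estimates age- and year-specific rates, but the information flow is the same: only the two observable age-class summaries enter the likelihood.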
An improved method for nonlinear parameter estimation: a case study of the Rössler model
NASA Astrophysics Data System (ADS)
He, Wen-Ping; Wang, Liu; Jiang, Yun-Di; Wan, Shi-Quan
2016-08-01
Parameter estimation is an important research topic in nonlinear dynamics. Based on the evolutionary algorithm (EA), Wang et al. (2014) presented a new scheme for nonlinear parameter estimation, and numerical tests indicated that its estimation precision is satisfactory. However, the convergence rate of the EA is relatively slow when multiple unknown parameters in a multidimensional dynamical system are estimated simultaneously. To solve this problem, an improved method for parameter estimation of nonlinear dynamical equations is provided in the present paper. The main idea of the improved scheme is to use all of the known time series for all of the components of the dynamical equations to estimate the parameters of a single component one by one, instead of estimating all of the parameters in all of the components simultaneously. Thus, all of the parameters can be estimated stage by stage. The performance of the improved method was tested on a classic chaotic system, the Rössler model. The numerical tests show that the amended parameter estimation scheme can greatly improve searching efficiency and that there is a significant increase in the convergence rate of the EA, particularly for multiparameter estimation in multidimensional dynamical equations. Moreover, the results indicate that the accuracy of parameter estimation and the CPU time consumed by the presented method have no obvious dependence on the sample size.
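The component-wise idea can be illustrated on the Rössler system itself. The sketch below replaces the evolutionary search with plain least squares but keeps the key decomposition: with all three time series known, the parameters of each component equation are estimated separately (a from the y-equation, then b and c from the z-equation). Step sizes, trajectory length, and the use of finite-difference derivatives are illustrative assumptions, not the paper's procedure:

```python
# Rossler system:  dx/dt = -y - z,  dy/dt = x + a*y,  dz/dt = b + z*(x - c)

def rossler_rk4(a, b, c, state, dt, n_steps):
    """Generate a trajectory with classical fourth-order Runge-Kutta."""
    def f(s):
        x, y, z = s
        return (-y - z, x + a * y, b + z * (x - c))
    traj = [state]
    s = state
    for _ in range(n_steps):
        k1 = f(s)
        k2 = f(tuple(si + 0.5 * dt * ki for si, ki in zip(s, k1)))
        k3 = f(tuple(si + 0.5 * dt * ki for si, ki in zip(s, k2)))
        k4 = f(tuple(si + dt * ki for si, ki in zip(s, k3)))
        s = tuple(si + dt / 6.0 * (c1 + 2 * c2 + 2 * c3 + c4)
                  for si, c1, c2, c3, c4 in zip(s, k1, k2, k3, k4))
        traj.append(s)
    return traj

def estimate_parameters(traj, dt):
    """Estimate a, b, c component by component via least squares on
    centred finite-difference derivatives (dx/dt is not needed)."""
    xs, ys, zs = zip(*traj)
    def deriv(v):
        return [(v[i + 1] - v[i - 1]) / (2.0 * dt) for i in range(1, len(v) - 1)]
    dy, dz = deriv(ys), deriv(zs)
    x, y, z = xs[1:-1], ys[1:-1], zs[1:-1]
    # y-equation alone: dy = x + a*y  ->  one-parameter least squares
    a_hat = (sum(yi * (d - xi) for yi, xi, d in zip(y, x, dy))
             / sum(yi * yi for yi in y))
    # z-equation alone: dz - z*x = b - c*z  ->  simple linear regression on z
    r = [d - zi * xi for d, zi, xi in zip(dz, z, x)]
    n = len(z)
    sz, szz = sum(z), sum(zi * zi for zi in z)
    sr, szr = sum(r), sum(zi * ri for zi, ri in zip(z, r))
    slope = (n * szr - sz * sr) / (n * szz - sz * sz)
    b_hat = (sr - slope * sz) / n
    c_hat = -slope
    return a_hat, b_hat, c_hat
```

Estimating from a trajectory generated with the standard values (a, b, c) = (0.2, 0.2, 5.7) recovers all three parameters closely, one component at a time.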
Etalon (standard) for surface potential distribution produced by electric activity of the heart.
Szathmáry, V; Ruttkay-Nedecký, I
1981-01-01
The authors submit etalon (standard) equipotential maps as an aid in the evaluation of maps of surface potential distributions in living subjects. They were obtained by measuring potentials on the surface of an electrolytic tank shaped like the thorax. The individual etalon maps were determined in such a way that the parameters of the physical dipole forming the source of the electric field in the tank corresponded to the mean vectorcardiographic parameters measured in a healthy population sample. The technique also allows a quantitative estimate of the degree of non-dipolarity of the heart as the source of the electric field.
Assessing Forest NPP: BIOME-BGC Predictions versus BEF Derived Estimates
NASA Astrophysics Data System (ADS)
Hasenauer, H.; Pietsch, S. A.; Petritsch, R.
2007-05-01
Forest productivity has always been a major issue within sustainable forest management. While in the past terrestrial forest inventory data have been the major source for assessing forest productivity, recent developments in ecosystem modeling offer an alternative approach using ecosystem models such as BIOME-BGC to estimate Net Primary Production (NPP). In this study we compare two terrestrially driven approaches for assessing NPP: (i) estimates from a species-specific adaptation of the biogeochemical ecosystem model BIOME-BGC calibrated for Alpine conditions; and (ii) NPP estimates derived from inventory data using biomass expansion factors (BEF). The forest inventory data come from 624 sample plots across Austria, consist of repeated individual tree observations, and include growth as well as soil and humus information. These locations are covered with spruce, beech, oak, pine and larch stands, thus addressing the main Austrian forest types. 144 locations were previously used in a validation effort to produce species-specific parameter estimates of the ecosystem model. The remaining 480 sites are from the Austrian National Forest Soil Survey carried out at the Federal Research and Training Centre for Forests, Natural Hazards and Landscape (BFW). Using diameter at breast height (dbh) and height (h), the volume and subsequently the biomass of individual trees were calculated, aggregated for the whole forest stand, and compared with the model output. Regression analyses were performed for both volume and biomass estimates.
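The inventory-based side of such a comparison reduces to simple arithmetic: stem volume from dbh and height, conversion to dry biomass via wood density, and expansion to whole-tree biomass with a BEF. The sketch below uses a generic form-factor volume equation and illustrative coefficients, not the species-specific Austrian functions used in the study:

```python
import math

def stem_volume(dbh_cm, height_m, form_factor=0.5):
    """Stem volume (m^3) from a generic cylindrical form-factor
    equation; the study used species-specific volume functions."""
    radius_m = dbh_cm / 200.0            # cm diameter -> m radius
    return math.pi * radius_m ** 2 * height_m * form_factor

def stand_biomass(trees, wood_density=0.4, bef=1.3):
    """Above-ground dry biomass (t) of a list of (dbh_cm, height_m)
    trees: volume -> stem biomass via wood density (t/m^3), expanded
    to branches and foliage with a biomass expansion factor (BEF).
    All coefficients here are illustrative assumptions."""
    return sum(stem_volume(d, h) for d, h in trees) * wood_density * bef
```

For a single tree of 30 cm dbh and 25 m height, this gives roughly 0.88 m³ of stem volume and about 0.46 t of expanded above-ground biomass.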
Model-data integration to improve the LPJmL dynamic global vegetation model
NASA Astrophysics Data System (ADS)
Forkel, Matthias; Thonicke, Kirsten; Schaphoff, Sibyll; Thurner, Martin; von Bloh, Werner; Dorigo, Wouter; Carvalhais, Nuno
2017-04-01
Dynamic global vegetation models show large uncertainties regarding the development of the land carbon balance under future climate change conditions. This uncertainty is partly caused by differences in how vegetation carbon turnover is represented in global vegetation models. Model-data integration approaches might help to systematically assess and improve model performance and thus to reduce the uncertainty in terrestrial vegetation responses under future climate change. Here we present several applications of model-data integration with the LPJmL (Lund-Potsdam-Jena managed Lands) dynamic global vegetation model to systematically improve the representation of processes or to estimate model parameters. In a first application, we used global satellite-derived datasets of FAPAR (fraction of absorbed photosynthetically active radiation), albedo and gross primary production to estimate phenology- and productivity-related model parameters using a genetic optimization algorithm. Thereby we identified major limitations of the phenology module and implemented an alternative empirical phenology model. The new phenology module and optimized model parameters resulted in a better performance of LPJmL in representing global spatial patterns of biomass and tree cover and the temporal dynamics of atmospheric CO2. In a second application, we therefore additionally used global datasets of biomass and land cover to estimate model parameters that control vegetation establishment and mortality. The results demonstrate the ability to improve simulations of vegetation dynamics but also highlight the need to improve the representation of mortality processes in dynamic global vegetation models. In a third application, we used multiple site-level observations of ecosystem carbon and water exchange, biomass and soil organic carbon to jointly estimate various model parameters that control ecosystem dynamics. 
This exercise demonstrates the strong role of individual data streams on the simulated ecosystem dynamics which consequently changed the development of ecosystem carbon stocks and fluxes under future climate and CO2 change. In summary, our results demonstrate challenges and the potential of using model-data integration approaches to improve a dynamic global vegetation model.
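A real-coded genetic algorithm of the general kind used for such parameter optimization can be sketched compactly: elitist truncation selection, blend crossover, and Gaussian mutation within parameter bounds. This is an illustrative minimal implementation, not the optimizer actually coupled to LPJmL:

```python
import random

def genetic_optimize(loss, bounds, pop_size=30, n_gen=40, seed=5):
    """Minimal real-coded genetic algorithm: keep the best half of the
    population each generation, fill the rest with blend-crossover
    children of elite parents plus Gaussian mutation, clipped to the
    parameter bounds. All tuning constants are illustrative."""
    rng = random.Random(seed)
    def clip(v, lo, hi):
        return min(max(v, lo), hi)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(n_gen):
        elite = sorted(pop, key=loss)[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(elite):
            p1, p2 = rng.sample(elite, 2)
            child = []
            for j, (lo, hi) in enumerate(bounds):
                w = rng.random()
                g = w * p1[j] + (1.0 - w) * p2[j]       # blend crossover
                g += rng.gauss(0.0, 0.05 * (hi - lo))   # Gaussian mutation
                child.append(clip(g, lo, hi))
            children.append(child)
        pop = elite + children                           # elitism
    return min(pop, key=loss)
```

On a toy quadratic "misfit" surface the search reliably closes in on the optimum; in the real application `loss` would measure the mismatch between LPJmL output and the satellite datasets.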
Using simulation to aid trial design: Ring-vaccination trials.
Hitchings, Matt David Thomas; Grais, Rebecca Freeman; Lipsitch, Marc
2017-03-01
The 2014-2016 West African Ebola epidemic highlights the need for rigorous, rapid clinical trial methods for vaccines. A challenge for trial design is making sample size calculations based on incidence within the trial, total vaccine effect, and intracluster correlation, when these parameters are uncertain in the presence of indirect effects of vaccination. We present a stochastic, compartmental model for a ring vaccination trial. After identification of an index case, a ring of contacts is recruited and either vaccinated immediately or after 21 days. The primary outcome of the trial is total vaccine effect, counting cases only from a pre-specified window in which the immediate arm is assumed to be fully protected and the delayed arm is not protected. Simulation results are used to calculate necessary sample size and estimated vaccine effect. Under baseline assumptions about vaccine properties, monthly incidence in unvaccinated rings, and trial design, a standard sample-size calculation neglecting dynamic effects estimated that 7,100 participants would be needed to achieve 80% power to detect a difference in attack rate between arms, while incorporating dynamic considerations in the model increased the estimate to 8,900. This approach replaces assumptions about parameters at the ring level with assumptions about disease dynamics and vaccine characteristics at the individual level, so within this framework we were able to describe the sensitivity of the trial power and estimated effect to various parameters. We found that both of these quantities are sensitive to properties of the vaccine, to setting-specific parameters over which investigators have little control, and to parameters that are determined by the study design. Incorporating simulation into the trial design process can improve the robustness of sample size calculations. 
For this specific trial design, vaccine effectiveness depends on properties of the ring vaccination design and on the measurement window, as well as the epidemiologic setting.
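The role of simulation in the sample-size argument can be illustrated with a much simpler stand-in model: binomial attack counts per arm and a two-proportion z-test, ignoring the transmission dynamics, indirect protection, and intracluster correlation that the paper's compartmental model captures. All parameter values below are illustrative:

```python
import math
import random

def simulate_power(n_rings, ring_size, p_attack, vaccine_effect,
                   n_sims=400, seed=1):
    """Fraction of simulated trials in which a two-sided two-proportion
    z-test (alpha = 0.05) detects the difference in attack rates between
    the delayed (unprotected) and immediate (protected) arms. Attack
    counts are simple binomials - a deliberate simplification."""
    rng = random.Random(seed)
    p_vacc = p_attack * (1.0 - vaccine_effect)
    n = n_rings * ring_size              # participants per arm
    rejections = 0
    for _ in range(n_sims):
        cases_delayed = sum(rng.random() < p_attack for _ in range(n))
        cases_immediate = sum(rng.random() < p_vacc for _ in range(n))
        p_pool = (cases_delayed + cases_immediate) / (2.0 * n)
        if p_pool in (0.0, 1.0):
            continue                     # degenerate trial, no test possible
        se = math.sqrt(2.0 * p_pool * (1.0 - p_pool) / n)
        z = abs(cases_delayed - cases_immediate) / n / se
        if z > 1.96:
            rejections += 1
    return rejections / n_sims
```

Replacing the binomials with a dynamic transmission model, as the paper does, is what shifts the required sample size upward (7,100 to 8,900 in the baseline scenario).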
Kinetics and mechanism of olefin catalytic hydroalumination by organoaluminum compounds
NASA Astrophysics Data System (ADS)
Koledina, K. F.; Gubaidullin, I. M.
2016-05-01
The complex reaction mechanism of α-olefin catalytic hydroalumination by alkylalanes is investigated via mathematical modeling that involves plotting the kinetic models for the individual reactions that make up a complex system and a separate study of their principles. Kinetic parameters of olefin catalytic hydroalumination are estimated. Activation energies of the possible steps of the schemes of complex reaction mechanisms are compared and possible reaction pathways are determined.
Individual differences in emotion word processing: A diffusion model analysis.
Mueller, Christina J; Kuchinke, Lars
2016-06-01
The exploratory study investigated individual differences in implicit processing of emotional words in a lexical decision task. A processing advantage for positive words was observed, and differences between happy and fear-related words in response times were predicted by individual differences in specific variables of emotion processing: Whereas more pronounced goal-directed behavior was related to a specific slowdown in processing of fear-related words, the rate of spontaneous eye blinks (indexing brain dopamine levels) was associated with a processing advantage of happy words. Estimating diffusion model parameters revealed that the drift rate (rate of information accumulation) captures unique variance of processing differences between happy and fear-related words, with highest drift rates observed for happy words. Overall emotion recognition ability predicted individual differences in drift rates between happy and fear-related words. The findings emphasize that a significant amount of variance in emotion processing is explained by individual differences in behavioral data.
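The link between drift rate and behavior can be made concrete with a small random-walk simulation of a two-boundary diffusion process: a larger drift rate (as estimated for happy words) produces both faster and more accurate responses. This is an illustrative forward simulation with assumed parameter values, not the fitting procedure used in the study:

```python
import random

def simulate_trials(drift, n_trials=1500, threshold=1.0, dt=0.001,
                    noise=1.0, max_steps=20000, seed=7):
    """Euler random-walk approximation of a Wiener diffusion process
    between boundaries at +threshold (correct) and -threshold (error).
    Returns (accuracy, mean decision time of correct responses, s)."""
    rng = random.Random(seed)
    step_sd = noise * dt ** 0.5
    n_correct, rt_sum = 0, 0.0
    for _ in range(n_trials):
        x, t = 0.0, 0
        while abs(x) < threshold and t < max_steps:
            x += drift * dt + rng.gauss(0.0, step_sd)
            t += 1
        if x >= threshold:               # upper boundary = correct response
            n_correct += 1
            rt_sum += t * dt
    return n_correct / n_trials, rt_sum / max(n_correct, 1)
```

Comparing a high drift rate (e.g., happy words) with a lower one (e.g., fear-related words) reproduces the qualitative pattern reported: higher drift yields higher accuracy and shorter decision times.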
Costs and benefits of direct-to-consumer advertising: the case of depression.
Block, Adam E
2007-01-01
Direct-to-consumer advertising (DTCA) is legal in the US and New Zealand, but illegal in the rest of the world. Little or no research exists on the social welfare implications of DTCA. To quantify the total costs and benefits associated with both appropriate and inappropriate care due to DTCA, for the case of depression. A cost-benefit model was developed using parameter estimates from available survey, epidemiological and experimental data. The model estimates the total benefits and costs (year 2002 values) of new appropriate and inappropriate care stimulated by DTCA for depression. Uncertainty in model parameters is addressed with sensitivity analyses. This study provides evidence that 94% of new antidepressant use due to DTCA is from non-depressed individuals. However, the average health benefit to each new depressed user is 63-fold greater than the cost per treatment, creating a positive overall social welfare effect; a net benefit of >72 million US dollars. This analysis suggests that DTCA may lead to antidepressant treatment in 15-fold as many non-depressed people as depressed people. However, the costs of treating non-depressed people may be vastly outweighed by the much larger benefit accruing to treated depressed individuals. The cost-benefit ratio can be improved through better targeting of advertisements and higher quality treatment of depression.
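The core accounting in such a cost-benefit model is simple: every individual stimulated into treatment incurs the treatment cost, but only the depressed fraction receives the much larger health benefit. The function below is a back-of-envelope sketch with illustrative inputs, not the paper's calibrated model:

```python
def dtca_net_benefit(new_users, frac_depressed, cost_per_treatment,
                     benefit_multiplier=63.0):
    """Net social benefit when every stimulated user incurs the
    treatment cost but only depressed users gain a health benefit
    `benefit_multiplier` times that cost. All inputs illustrative."""
    benefit = new_users * frac_depressed * benefit_multiplier * cost_per_treatment
    cost = new_users * cost_per_treatment
    return benefit - cost

# e.g. 100,000 new users, 6% actually depressed (94% not), $500 per treatment
net = dtca_net_benefit(100_000, 0.06, 500.0)
```

With a 63-fold benefit multiplier, the break-even depressed fraction is 1/63 (about 1.6%), so even the paper's finding that 94% of new users are non-depressed still leaves a positive net benefit.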
NASA Astrophysics Data System (ADS)
Hendricks Franssen, H. J.; Post, H.; Vrugt, J. A.; Fox, A. M.; Baatz, R.; Kumbhar, P.; Vereecken, H.
2015-12-01
Estimation of net ecosystem exchange (NEE) by land surface models is strongly affected by uncertain ecosystem parameters and initial conditions. A possible approach is the estimation of plant functional type (PFT)-specific parameters for sites with measurement data such as NEE, and application of the parameters at other sites with the same PFT and no measurements. This upscaling strategy was evaluated in this work for sites in Germany and France. Ecosystem parameters and initial conditions were estimated with NEE time series of one year in length, or a time series of only one season. The DREAM(zs) algorithm was used for the estimation of parameters and initial conditions. DREAM(zs) is not limited to Gaussian distributions and can condition on large time series of measurement data simultaneously. DREAM(zs) was used in combination with the Community Land Model (CLM) v4.5. Parameter estimates were evaluated by model predictions at the same site for an independent verification period. In addition, the parameter estimates were evaluated at other, independent sites situated >500 km away with the same PFT. The main conclusions are: i) simulations with estimated parameters better reproduced the NEE measurement data in the verification periods, including the annual NEE sum (23% improvement), annual NEE cycle, and average diurnal NEE course (error reduction by a factor of 1.6); ii) estimated parameters based on seasonal NEE data outperformed estimated parameters based on yearly data; iii) those seasonal parameters were often also significantly different from their yearly equivalents; iv) estimated parameters were significantly different if initial conditions were estimated together with the parameters. We conclude that estimated PFT-specific parameters improve land surface model predictions significantly at independent verification sites and for independent verification periods, so that their potential for upscaling is demonstrated. 
However, simulation results also indicate that possibly the estimated parameters mask other model errors. This would imply that their application at climatic time scales would not improve model predictions. A central question is whether the integration of many different data streams (e.g., biomass, remotely sensed LAI) could solve the problems indicated here.
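The estimation step can be sketched with a plain random-walk Metropolis sampler, of which DREAM(zs) is an adaptive multi-chain refinement, applied to a toy light-response NEE model rather than CLM itself. The model form, parameter names, and all numerical settings are illustrative assumptions:

```python
import math
import random

def nee_model(alpha, resp, light):
    """Toy rectangular-hyperbola light-response curve for NEE
    (umol m-2 s-1): uptake saturating in light, plus respiration.
    A stand-in, not a CLM equation."""
    pmax = 20.0                          # assumed asymptotic uptake capacity
    return [-(alpha * i * pmax) / (alpha * i + pmax) + resp for i in light]

def metropolis(obs, light, n_iter=6000, sigma=0.5, seed=3):
    """Plain Metropolis sampling of (alpha, resp) under a Gaussian
    error model with known sigma and flat priors on a physical range."""
    rng = random.Random(seed)

    def log_post(p):
        if p[0] <= 0.0 or p[1] < 0.0:
            return -math.inf
        pred = nee_model(p[0], p[1], light)
        return -sum((o - m) ** 2 for o, m in zip(obs, pred)) / (2.0 * sigma ** 2)

    cur = [0.01, 1.0]                    # deliberately poor starting point
    lp = log_post(cur)
    samples = []
    for it in range(n_iter):
        prop = [cur[0] + rng.gauss(0.0, 0.005), cur[1] + rng.gauss(0.0, 0.2)]
        lp_prop = log_post(prop)
        if lp_prop - lp > math.log(rng.random() + 1e-300):
            cur, lp = prop, lp_prop
        if it >= n_iter // 2:            # discard the first half as burn-in
            samples.append(list(cur))
    return samples
```

Run on synthetic data generated with known parameters, the posterior mean recovers them; DREAM(zs) accelerates exactly this kind of exploration for higher-dimensional, multimodal posteriors.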
Čepl, Jaroslav; Holá, Dana; Stejskal, Jan; Korecký, Jiří; Kočová, Marie; Lhotáková, Zuzana; Tomášková, Ivana; Palovská, Markéta; Rothová, Olga; Whetten, Ross W; Kaňák, Jan; Albrechtová, Jana; Lstibůrek, Milan
2016-07-01
Current knowledge of the genetic mechanisms underlying the inheritance of photosynthetic activity in forest trees is generally limited, yet it is essential both for various practical forestry purposes and for better understanding of broader evolutionary mechanisms. In this study, we investigated genetic variation underlying selected chlorophyll a fluorescence (ChlF) parameters in structured populations of Scots pine (Pinus sylvestris L.) grown on two sites under non-stress conditions. These parameters were derived from the OJIP part of the ChlF kinetics curve and characterize individual parts of primary photosynthetic processes associated, for example, with exciton trapping by light-harvesting antennae, energy utilization in photosystem II (PSII) reaction centers (RCs), and energy transfer further down the photosynthetic electron-transport chain. An additive relationship matrix was estimated based on pedigree reconstruction, utilizing a set of highly polymorphic simple sequence repeat (SSR) markers. Variance decomposition was conducted using the animal genetic evaluation mixed linear model. The majority of ChlF parameters in the analyzed pine populations showed significant additive genetic variation. Statistically significant heritability estimates were obtained for most ChlF indices, with the exception of the DI0/RC, φD0 and φP0 (Fv/Fm) parameters. Estimated heritabilities varied around the value of 0.15, with a maximal value of 0.23 for the ET0/RC parameter, which indicates electron-transport flux from QA to QB per PSII RC. No significant correlation was found between these indices and selected growth traits. Moreover, no genotype × environment interaction (G × E) was detected, i.e., no differences in genotypes' performance between sites. The absence of significant G × E in our study is interesting, given the relatively low heritability found for the majority of parameters analyzed. Therefore, we infer that polygenic variability of these indices is selectively neutral. 
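The variance decomposition behind these heritability estimates reduces to a simple ratio once the additive and residual components are available, and the breeder's equation then converts it into an expected selection response. The numbers below are illustrative, merely on the order of the values reported:

```python
def narrow_sense_heritability(v_additive, v_residual):
    """h2 = Va / (Va + Ve) for a simple animal model in which the
    phenotypic variance has only these two components."""
    return v_additive / (v_additive + v_residual)

def selection_response(h2, selection_differential):
    """Breeder's equation R = h2 * S: expected per-generation shift
    of the trait mean under truncation selection."""
    return h2 * selection_differential

h2 = narrow_sense_heritability(0.15, 0.85)   # illustrative variance components
```

A heritability around 0.15 implies that only about 15% of a selection differential is expected to be transmitted to the next generation, which is why the reported absence of G × E is notable despite the modest h².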
Multispectrum retrieval techniques applied to Venus deep atmosphere and surface problems
NASA Astrophysics Data System (ADS)
Kappel, David; Arnold, Gabriele; Haus, Rainer
The Visible and Infrared Thermal Imaging Spectrometer (VIRTIS) aboard ESA's Venus Express is continuously collecting nightside emission data (among others) from Venus. A radiative transfer model of Venus' atmosphere in conjunction with a suitable retrieval algorithm can be used to estimate atmospheric and surface parameters by fitting simulated spectra to the measured data. Because of the limited spectral resolution of the VIRTIS-M-IR spectra that have been used so far, many different parameter sets can explain the same measurement equally well. As a common regulative measure, reasonable a priori knowledge of some parameters is applied to suppress solutions implausibly far from the expected range. It is beneficial to introduce a parallel coupled retrieval of several measurements. Since spatially and temporally contiguous measurements are not expected to originate from completely unrelated parameters, an assumed a priori correlation of the parameters during the retrieval can help to reduce arbitrary fluctuations of the solutions, to avoid subsidiary solutions, and to attenuate the interference of measurement noise by keeping the parameters close to a general trend. As an illustration, the resulting improvements for some swaths on the Northern hemisphere are presented. Some atmospheric features are still not very well constrained, for instance CO2 absorption under the extreme environmental conditions close to the surface. A broad-band continuum due to far-wing and collision-induced absorption is commonly used to correct individual line absorption. Since the spectrally dependent continuum is constant for all measurements, the retrieval of parameters common to all spectra may be used to give some estimates of the continuum absorption. These estimates are necessary, for example, for the coupled parallel retrieval of a consistent local cloud modal composition, which in turn enables a refined surface emissivity retrieval. 
We gratefully acknowledge the support from the VIRTIS/Venus Express Team, from ASI, CNES, CNRS, and from the DFG funding the ongoing work.
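The regularizing effect of an assumed a priori correlation between contiguous retrievals can be demonstrated with a scalar stand-in problem: one parameter per measurement, with a quadratic penalty on differences between neighbors. The resulting tridiagonal normal equations are solved exactly with the Thomas algorithm; the real retrieval is of course a multi-parameter radiative-transfer inversion:

```python
def coupled_retrieval(measurements, lam):
    """Minimise  sum_i (y_i - p_i)^2 + lam * sum_i (p_{i+1} - p_i)^2,
    i.e. retrieve one parameter per measurement while pulling adjacent
    retrievals toward each other. The stationarity conditions form a
    tridiagonal system (I + lam*Laplacian) p = y, solved exactly by
    the Thomas algorithm."""
    n = len(measurements)
    a = [-lam] * n                       # sub-diagonal (a[0] unused)
    c = [-lam] * n                       # super-diagonal (c[-1] unused)
    b = [1.0 + 2.0 * lam] * n
    b[0] = b[-1] = 1.0 + lam             # boundary rows of the path Laplacian
    d = list(measurements)
    for i in range(1, n):                # forward elimination
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    p = [0.0] * n                        # back substitution
    p[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        p[i] = (d[i] - c[i] * p[i + 1]) / b[i]
    return p
```

With lam = 0 the retrieval reproduces the noisy per-measurement solutions; increasing lam damps fluctuations toward the general trend while (for this penalty) exactly preserving the overall mean.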
Luczak, Susan E; Hawkins, Ashley L; Dai, Zheng; Wichmann, Raphael; Wang, Chunming; Rosen, I Gary
2018-08-01
Biosensors have been developed to measure transdermal alcohol concentration (TAC), but converting TAC into interpretable indices of blood/breath alcohol concentration (BAC/BrAC) is difficult because of variations that occur in TAC across individuals, drinking episodes, and devices. We have developed mathematical models and the BrAC Estimator software for calibrating and inverting TAC into quantifiable BrAC estimates (eBrAC). The calibration protocol to determine the individualized parameters for a specific individual wearing a specific device requires a drinking session in which BrAC and TAC measurements are obtained simultaneously. This calibration protocol was originally conducted in the laboratory with breath analyzers used to produce the BrAC data. Here we develop and test an alternative calibration protocol using drinking diary data collected in the field with the smartphone app Intellidrink to produce the BrAC calibration data. We compared BrAC Estimator software results for 11 drinking episodes collected by an expert user when using Intellidrink versus breath analyzer measurements as BrAC calibration data. Inversion phase results indicated that the Intellidrink calibration protocol produced similar eBrAC curves and captured peak eBrAC to within 0.0003%, time of peak eBrAC to within 18 min, and area under the eBrAC curve to within 0.025% alcohol-hours of the breath analyzer calibration protocol. This study provides evidence that drinking diary data can be used in place of breath analyzer data in the BrAC Estimator software calibration procedure, which can reduce participant and researcher burden and expand the potential software user pool beyond researchers studying participants who can drink in the laboratory.
Imaging tooth enamel using zero echo time (ZTE) magnetic resonance imaging
NASA Astrophysics Data System (ADS)
Rychert, Kevin M.; Zhu, Gang; Kmiec, Maciej M.; Nemani, Venkata K.; Williams, Benjamin B.; Flood, Ann B.; Swartz, Harold M.; Gimi, Barjor
2015-03-01
In an event where many thousands of people may have been exposed to levels of radiation that are sufficient to cause the acute radiation syndrome, we need technology that can estimate the absorbed dose on an individual basis for triage and meaningful medical decision making. Such dose estimates may be achieved using in vivo electron paramagnetic resonance (EPR) tooth biodosimetry, which measures the number of persistent free radicals that are generated in tooth enamel following irradiation. However, the accuracy of dose estimates may be impacted by individual variations in teeth, especially the amount and distribution of enamel in the inhomogeneous sensitive volume of the resonator used to detect the radicals. In order to study the relationship between interpersonal variations in enamel and EPR-based dose estimates, it is desirable to estimate these parameters nondestructively and without adding radiation to the teeth. Magnetic Resonance Imaging (MRI) is capable of acquiring structural and biochemical information without imparting additional radiation, which may be beneficial for many EPR dosimetry studies. However, the extremely short T2 relaxation time in tooth structures precludes tooth imaging using conventional MRI methods. Therefore, we used zero echo time (ZTE) MRI to image teeth ex vivo to assess enamel volumes and spatial distributions. Using these data in combination with the data on the distribution of the transverse radio frequency magnetic field from electromagnetic simulations, we then can identify possible sources of variations in radiation-induced signals detectable by EPR. Unlike conventional MRI, ZTE applies spatial encoding gradients during the RF excitation pulse, thereby facilitating signal acquisition almost immediately after excitation, minimizing signal loss from short T2 relaxation times. 
ZTE successfully provided volumetric measures of tooth enamel that may be related to variations that impact EPR dosimetry and facilitate the development of analytical procedures for individual dose estimates.
Van Derlinden, E; Bernaerts, K; Van Impe, J F
2010-05-21
Optimal experiment design for parameter estimation (OED/PE) has become a popular tool for efficient and accurate estimation of kinetic model parameters. When the kinetic model under study encloses multiple parameters, different optimization strategies can be constructed. The most straightforward approach is to estimate all parameters simultaneously from one optimal experiment (single OED/PE strategy). However, due to the complexity of the optimization problem or the stringent limitations on the system's dynamics, the experimental information can be limited and parameter estimation convergence problems can arise. As an alternative, we propose to reduce the optimization problem to a series of two-parameter estimation problems, i.e., an optimal experiment is designed for a combination of two parameters while presuming the other parameters known. Two different approaches can be followed: (i) all two-parameter optimal experiments are designed based on identical initial parameter estimates and parameters are estimated simultaneously from all resulting experimental data (global OED/PE strategy), and (ii) optimal experiments are calculated and implemented sequentially whereby the parameter values are updated intermediately (sequential OED/PE strategy). This work exploits OED/PE for the identification of the Cardinal Temperature Model with Inflection (CTMI) (Rosso et al., 1993). This kinetic model describes the effect of temperature on the microbial growth rate and encloses four parameters. The three OED/PE strategies are considered and the impact of the OED/PE design strategy on the accuracy of the CTMI parameter estimation is evaluated. Based on a simulation study, it is observed that the parameter values derived from the sequential approach deviate more from the true parameters than the single and global strategy estimates. The single and global OED/PE strategies are further compared based on experimental data obtained from design implementation in a bioreactor. 
Comparable estimates are obtained, but global OED/PE estimates are, in general, more accurate and reliable.
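The common thread of all three OED/PE strategies is choosing experiments that maximize information about the parameters, typically via the D-optimality criterion (determinant of the Fisher information matrix). The sketch below computes that criterion for a two-parameter exponential stand-in model rather than the four-parameter CTMI, with illustrative designs:

```python
import math

def fim_det(times, p1, p2):
    """Determinant of the Fisher information matrix (unit measurement
    noise) for the model y(t) = p1 * exp(p2 * t). A larger determinant
    means a more informative (D-optimal) sampling design. Two-parameter
    stand-in for the four-parameter CTMI."""
    s11 = s12 = s22 = 0.0
    for t in times:
        e = math.exp(p2 * t)
        g1, g2 = e, p1 * t * e           # sensitivities dy/dp1 and dy/dp2
        s11 += g1 * g1
        s12 += g1 * g2
        s22 += g2 * g2
    return s11 * s22 - s12 * s12
```

Comparing a spread-out sampling design against a clustered one at the same nominal parameter values shows the spread design carries far more information, which is the quantity an OED/PE optimizer maximizes over admissible experiments.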
IVS Pilot Project - Tropospheric Parameters
NASA Astrophysics Data System (ADS)
Boehm, J.; Schuh, H.; Engelhardt, G.; MacMillan, D.; Lanotte, R.; Tomasi, P.; Vereshchagina, I.; Haas, R.; Negusini, M.; Gubanov, V.
2003-04-01
In April 2002 the IVS (International VLBI Service for Geodesy and Astrometry) set up the IVS Pilot Project - Tropospheric Parameters, and the Institute of Geodesy and Geophysics (IGG), Vienna, was asked to coordinate the project. After a call for participation, six IVS Analysis Centers joined the project and have submitted their estimates of tropospheric parameters (wet and total zenith delays, horizontal gradients) for all IVS-R1 and IVS-R4 sessions since January 1st, 2002, on a regular basis. Using a two-step procedure, the individual submissions are combined into stable and robust tropospheric parameters with 1 h resolution and high accuracy. The zenith delays derived by VLBI are also compared with those provided by IGS (International GPS Service). At collocated sites (VLBI and GPS antennas at the same station), rather constant biases are found between the GPS- and VLBI-derived zenith delays, although both techniques are subject to the same tropospheric delays. Possible reasons for these biases are discussed.
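Setting aside the bias and outlier handling of the actual two-step procedure, the core of combining several centers' estimates is an inverse-variance weighted mean. The values below are hypothetical zenith wet delays, not project data:

```python
def combine_estimates(values, sigmas):
    """Inverse-variance weighted mean and its formal uncertainty.
    A simplified stand-in for the two-step IVS combination, which
    additionally removes per-centre biases and outliers."""
    weights = [1.0 / s ** 2 for s in sigmas]
    wsum = sum(weights)
    mean = sum(w * v for w, v in zip(weights, values)) / wsum
    return mean, (1.0 / wsum) ** 0.5

# hypothetical zenith wet delays (mm) from four analysis centres
zwd, sig = combine_estimates([120.3, 121.1, 119.8, 120.7],
                             [1.0, 2.0, 1.5, 1.0])
```

The combined uncertainty is smaller than that of any single submission, which is what makes the combined series more stable and robust than the individual ones.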
Mapping forest canopy fuels in Yellowstone National Park using lidar and hyperspectral data
NASA Astrophysics Data System (ADS)
Halligan, Kerry Quinn
The severity and size of wildland fires in the forested western U.S. have increased in recent years despite improvements in fire suppression efficiency. This, along with increased density of homes in the wildland-urban interface, has resulted in high costs for fire management and increased risks to human health, safety and property. Crown fires, in comparison to surface fires, pose an especially high risk due to their intensity and high rate of spread. Crown fire models require a range of quantitative fuel parameters which can be difficult and costly to obtain, but advances in lidar and hyperspectral sensor technologies hold promise for delivering these inputs. Further research is needed, however, to assess the strengths and limitations of these technologies and the most appropriate analysis methodologies for estimating crown fuel parameters from these data. This dissertation focuses on retrieving critical crown fuel parameters, including canopy height, canopy bulk density and proportion of dead canopy fuel, from airborne lidar and hyperspectral data. Remote sensing data were used in conjunction with detailed field data on forest parameters and surface reflectance measurements. A new method was developed for retrieving Digital Surface Models (DSM) and Digital Canopy Models (DCM) from first-return lidar data. Validation data on individual tree heights demonstrated the high accuracy (r2 = 0.95) of the DCMs developed via this new algorithm. Lidar-derived DCMs were used to estimate critical crown fire parameters including available canopy fuel, canopy height and canopy bulk density, with linear regression model r2 values ranging from 0.75 to 0.85. Hyperspectral data were used in conjunction with Spectral Mixture Analysis (SMA) to assess fuel quality in the form of live versus dead canopy proportions. 
Severity and stage of insect-caused forest mortality were estimated using the fractional abundance of green vegetation, non-photosynthetic vegetation and shade obtained from SMA. Proportion of insect attack was estimated with a linear model producing an r2 of 0.6 using SMA and bark endmembers from image and reference libraries. Fraction of red attack, with a possible link to increased crown fire risk, was estimated with an r2 of 0.45.
Byskov, M V; Fogh, A; Løvendahl, P
2017-12-01
Feed efficiency has the potential to be improved through feeding, management, and breeding. Including feed efficiency in a selection index is limited by the fact that dry matter intake (DMI) recording is only feasible in research facilities, resulting in small data sets and, consequently, uncertain genetic parameter estimates. As a result, there is a need to record DMI indicator traits on a larger scale. Rumination time (RT), which is already recorded in commercial dairy herds by a sensor-based system, has been suggested as a potential DMI indicator. However, RT can only be a DMI indicator if it is heritable, correlates with DMI, and if the genetic parameters of RT in commercial herd settings are similar to those in research facilities. Therefore, the objective of our study was to estimate genetic parameters for RT and the related traits of DMI in primiparous Holstein cows, and to compare genetic parameters of rumination data between a research herd and 72 commercial herds. The estimated heritability values found in the research herd were all moderate for DMI (0.32-0.49), residual feed intake (RFI; 0.23-0.36), energy-corrected milk (ECM) yield (0.49-0.70), and RT (0.14-0.44). The estimated heritability values for ECM were lower for the commercial herds (0.08-0.35) than for the research herd. The estimated heritability values for RT were similar for the 2 herd types (0.28-0.32). For the research herd, we found negative individual-level correlations between RT and DMI (-0.24 to -0.09) and between RT and RFI (-0.34 to -0.03), and we found both positive and negative correlations between RT and ECM (-0.08 to 0.09). For the commercial herds, genetic correlations between RT and ECM were both positive and negative (-0.27 to 0.10). In conclusion, RT was not found to be a suitable indicator trait for feed intake and was only a weak indicator of feed efficiency.
Brandsch, Rainer
2017-10-01
Migration modelling provides reliable migration estimates from food-contact materials (FCM) into food or food simulants based on mass-transfer parameters, such as diffusion and partition coefficients, related to the individual materials. In most cases, mass-transfer parameters are not readily available from the literature and are therefore estimated with a given uncertainty. Historically, uncertainty was accounted for by upper-limit concepts, which turned out to be of limited applicability because they highly overestimate migration. Probabilistic migration modelling makes it possible to consider the uncertainty of the mass-transfer parameters as well as of other model inputs. With respect to a functional barrier, the most important parameters are, among others, the diffusion properties of the functional barrier and its thickness. A software tool that accepts distributions as inputs and applies Monte Carlo methods, i.e., random sampling from the input distributions of the relevant parameters (diffusion coefficient and layer thickness), predicts migration with associated uncertainty and confidence intervals. The capabilities of probabilistic migration modelling are presented through three case studies: (1) sensitivity analysis, (2) functional barrier efficiency and (3) validation by experimental testing. Based on the migration predicted by probabilistic modelling and the related exposure estimates, safety evaluation of new materials in the context of existing or new packaging concepts becomes possible, and associated migration risks and potential safety concerns can be identified at an early stage of packaging development. Furthermore, dedicated selection of materials exhibiting the required functional barrier efficiency under application conditions becomes feasible. Validation of the migration risk assessment by probabilistic migration modelling through a minimum of dedicated experimental testing is strongly recommended.
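As a minimal sketch of the Monte Carlo idea, the classical lag time of a plane-sheet functional barrier, t_lag = L²/(6D), can be propagated through assumed input distributions for the diffusion coefficient and layer thickness. Both distributions and all their parameters below are illustrative assumptions, not values from the text:

```python
import math
import random

def lag_time_days(n_samples=20000, seed=42):
    """Monte Carlo sample (sorted, in days) of the classical plane-sheet
    diffusion lag time t_lag = L^2 / (6 D) of a functional barrier,
    with assumed lognormal D and normal L input distributions."""
    rng = random.Random(seed)
    out = []
    for _ in range(n_samples):
        d = math.exp(rng.gauss(math.log(1e-14), 0.5))   # cm^2/s, lognormal
        l = max(rng.gauss(20e-4, 2e-4), 1e-6)           # cm (20 um +/- 10%)
        out.append(l * l / (6.0 * d) / 86400.0)         # seconds -> days
    out.sort()
    return out

samples = lag_time_days()
median_days = samples[len(samples) // 2]
lower5 = samples[int(0.05 * len(samples))]              # conservative tail
```

Reporting the lower percentile rather than the median is the probabilistic counterpart of the old upper-limit concepts: it quantifies how early migrants could realistically break through the barrier without defaulting to a worst case.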