Tuo, Rui; Wu, C. F. Jeff
2016-07-19
Calibration parameters in deterministic computer experiments are those attributes that cannot be measured or are unavailable in physical experiments. Here, an approach to estimating them using data from physical experiments and computer simulations is presented. A theoretical framework is given which allows us to study the issues of parameter identifiability and estimation. We define L2-consistency for calibration as a justification for calibration methods. It is shown that a simplified version of the original KO method leads to asymptotically L2-inconsistent calibration. This L2-inconsistency can be remedied by modifying the original estimation procedure. A novel calibration method, called the L2 calibration, is proposed and proven to be L2-consistent and to enjoy an optimal convergence rate. Furthermore, a numerical example and some mathematical analysis are used to illustrate the source of the L2-inconsistency problem.
Consistency of Rasch Model Parameter Estimation: A Simulation Study.
ERIC Educational Resources Information Center
van den Wollenberg, Arnold L.; And Others
1988-01-01
The unconditional (simultaneous) maximum likelihood (UML) estimation procedure for the one-parameter logistic model produces biased estimators. The UML method is inconsistent and is not a good alternative to the conditional maximum likelihood method, at least with small numbers of items. The minimum chi-square estimation procedure produces unbiased…
Investigation of IRT-Based Equating Methods in the Presence of Outlier Common Items
ERIC Educational Resources Information Center
Hu, Huiqin; Rogers, W. Todd; Vukmirovic, Zarko
2008-01-01
Common items with inconsistent b-parameter estimates may have a serious impact on item response theory (IRT)--based equating results. To find a better way to deal with the outlier common items with inconsistent b-parameters, the current study investigated the comparability of 10 variations of four IRT-based equating methods (i.e., concurrent…
Two new methods to fit models for network meta-analysis with random inconsistency effects.
Law, Martin; Jackson, Dan; Turner, Rebecca; Rhodes, Kirsty; Viechtbauer, Wolfgang
2016-07-28
Meta-analysis is a valuable tool for combining evidence from multiple studies. Network meta-analysis is becoming more widely used as a means to compare multiple treatments in the same analysis. However, a network meta-analysis may exhibit inconsistency, whereby the treatment effect estimates do not agree across all trial designs, even after taking between-study heterogeneity into account. We propose two new estimation methods for network meta-analysis models with random inconsistency effects. The model we consider is an extension of the conventional random-effects model for meta-analysis to the network meta-analysis setting and allows for potential inconsistency using random inconsistency effects. Our first new estimation method uses a Bayesian framework with empirically-based prior distributions for both the heterogeneity and the inconsistency variances. We fit the model using importance sampling and thereby avoid some of the difficulties that might be associated with using Markov Chain Monte Carlo (MCMC). However, we confirm the accuracy of our importance sampling method by comparing the results to those obtained using MCMC as the gold standard. The second new estimation method we describe uses a likelihood-based approach, implemented in the metafor package, which can be used to obtain (restricted) maximum-likelihood estimates of the model parameters and profile likelihood confidence intervals of the variance components. We illustrate the application of the methods using two contrasting examples. The first uses all-cause mortality as an outcome, and shows little evidence of between-study heterogeneity or inconsistency. The second uses "ear discharge" as an outcome, and exhibits substantial between-study heterogeneity and inconsistency. Both new estimation methods give results similar to those obtained using MCMC. The extent of heterogeneity and inconsistency should be assessed and reported in any network meta-analysis. 
Our two new methods can be used to fit models for network meta-analysis with random inconsistency effects. They are easily implemented using the accompanying R code in the Additional file 1. Using these estimation methods, the extent of inconsistency can be assessed and reported.
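The conventional random-effects pooling that the above model extends can be sketched with the classical DerSimonian-Laird moment estimator. This is a minimal illustration with hypothetical effect sizes and variances, not the paper's importance-sampling or REML machinery:

```python
import numpy as np

def dersimonian_laird(y, v):
    """Pool study effects y with within-study variances v under a
    random-effects model, using the DerSimonian-Laird tau^2 estimator."""
    w = 1.0 / v                              # fixed-effect weights
    mu_fe = np.sum(w * y) / np.sum(w)        # fixed-effect pooled estimate
    Q = np.sum(w * (y - mu_fe) ** 2)         # Cochran's Q statistic
    k = len(y)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (k - 1)) / c)       # method-of-moments heterogeneity
    w_re = 1.0 / (v + tau2)                  # random-effects weights
    mu_re = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return mu_re, tau2, se

y = np.array([0.10, 0.30, -0.05, 0.25])      # hypothetical log odds ratios
v = np.array([0.04, 0.02, 0.05, 0.03])       # hypothetical variances
mu, tau2, se = dersimonian_laird(y, v)
```

In a network setting, the model above is applied per design, with additional random inconsistency effects linking designs; that extension is what the paper's two estimation methods fit.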
Using a Linear Regression Method to Detect Outliers in IRT Common Item Equating
ERIC Educational Resources Information Center
He, Yong; Cui, Zhongmin; Fang, Yu; Chen, Hanwei
2013-01-01
Common test items play an important role in equating alternate test forms under the common item nonequivalent groups design. When the item response theory (IRT) method is applied in equating, inconsistent item parameter estimates among common items can lead to large bias in equated scores. It is prudent to evaluate inconsistency in parameter…
NASA Astrophysics Data System (ADS)
Zheng, Yuejiu; Gao, Wenkai; Ouyang, Minggao; Lu, Languang; Zhou, Long; Han, Xuebing
2018-04-01
State-of-charge (SOC) inconsistency impacts the power, durability and safety of the battery pack. Therefore, it is necessary to measure the SOC inconsistency of the battery pack with good accuracy. We explore a novel method for modeling and estimating the SOC inconsistency of a lithium-ion (Li-ion) battery pack with low computational effort. In this method, a second-order RC model is selected as the cell mean model (CMM) to represent the overall performance of the battery pack. A hypothetical Rint model is employed as the cell difference model (CDM) to evaluate the SOC difference. The parameters of the mean-difference model (MDM) are identified with particle swarm optimization (PSO). Subsequently, the mean SOC and the cell SOC differences are estimated using an extended Kalman filter (EKF). Finally, we conduct an experiment on a small Li-ion battery pack with twelve cells connected in series. The results show that the estimated SOC difference is capable of tracking changes in the actual value after a quick convergence.
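The model-based SOC tracking idea can be sketched with a deliberately simplified scalar EKF. The affine OCV curve, resistance, capacity, and noise levels below are assumptions for illustration; the paper's second-order RC mean-difference model and PSO identification are not reproduced:

```python
import numpy as np

# Minimal scalar EKF tracking a single cell's SOC from voltage measurements.
# Assumed parameters (not from the paper): affine OCV(SOC), ohmic resistance.
dt, Q_Ah = 1.0, 2.0                  # time step (s), capacity (Ah)
cap = Q_Ah * 3600.0                  # capacity in ampere-seconds
R = 0.05                             # assumed ohmic resistance (ohm)
ocv = lambda s: 3.0 + 1.2 * s        # assumed affine OCV(SOC) model

rng = np.random.default_rng(0)
soc_true, soc_est, P = 0.9, 0.6, 0.1 # true state, bad initial guess, covariance
q, r = 1e-7, 1e-3                    # process / measurement noise variances
for _ in range(600):
    i = 1.0                          # 1 A constant discharge
    soc_true -= i * dt / cap
    v_meas = ocv(soc_true) - R * i + rng.normal(0, 0.01)
    # EKF predict: coulomb counting plus covariance growth
    soc_est -= i * dt / cap
    P += q
    # EKF update: H = d OCV / d SOC = 1.2 for the affine model
    H = 1.2
    K = P * H / (H * P * H + r)
    soc_est += K * (v_meas - (ocv(soc_est) - R * i))
    P *= (1 - K * H)
```

Despite the poor initial guess, the voltage feedback pulls the estimate onto the true SOC trajectory within a few steps.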
Test suite for image-based motion estimation of the brain and tongue
NASA Astrophysics Data System (ADS)
Ramsey, Jordan; Prince, Jerry L.; Gomez, Arnold D.
2017-03-01
Noninvasive analysis of motion has important uses as qualitative markers for organ function and to validate biomechanical computer simulations relative to experimental observations. Tagged MRI is considered the gold standard for noninvasive tissue motion estimation in the heart, and this has inspired multiple studies focusing on other organs, including the brain under mild acceleration and the tongue during speech. As with other motion estimation approaches, using tagged MRI to measure 3D motion includes several preprocessing steps that affect the quality and accuracy of estimation. Benchmarks, or test suites, are datasets of known geometries and displacements that act as tools to tune tracking parameters or to compare different motion estimation approaches. Because motion estimation was originally developed to study the heart, existing test suites focus on cardiac motion. However, many fundamental differences exist between the heart and other organs, such that parameter tuning (or other optimization) with respect to a cardiac database may not be appropriate. Therefore, the objective of this research was to design and construct motion benchmarks by adopting an "image synthesis" test suite to study brain deformation due to mild rotational accelerations, and a benchmark to model motion of the tongue during speech. To obtain a realistic representation of mechanical behavior, kinematics were obtained from finite-element (FE) models. These results were combined with an approximation of the acquisition process of tagged MRI (including tag generation, slice thickness, and inconsistent motion repetition). To demonstrate an application of the presented methodology, the effect of motion inconsistency on synthetic measurements of head-brain rotation and deformation was evaluated. The results indicated that acquisition inconsistency is roughly proportional to head rotation estimation error.
Furthermore, when evaluating non-rigid deformation, the results suggest that inconsistent motion can yield "ghost" shear strains, which are a function of slice acquisition viability as opposed to a true physical deformation.
Inference regarding multiple structural changes in linear models with endogenous regressors
Hall, Alastair R.; Han, Sanggohn; Boldea, Otilia
2012-01-01
This paper considers the linear model with endogenous regressors and multiple changes in the parameters at unknown times. It is shown that minimization of a Generalized Method of Moments criterion yields inconsistent estimators of the break fractions, but minimization of the Two Stage Least Squares (2SLS) criterion yields consistent estimators of these parameters. We develop a methodology for estimation and inference of the parameters of the model based on 2SLS. The analysis covers the cases where the reduced form is either stable or unstable. The methodology is illustrated via an application to the New Keynesian Phillips Curve for the US. PMID:23805021
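The core contrast, OLS being inconsistent under endogeneity while 2SLS remains consistent, can be sketched in a simulation without structural breaks (a simplification of the paper's setting); the data-generating values below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
z = rng.normal(size=n)                       # instrument
u = rng.normal(size=n)                       # structural error
x = 0.8 * z + 0.5 * u + rng.normal(size=n)   # regressor correlated with u
y = 2.0 * x + u                              # true coefficient = 2.0

# first stage: project x onto the instruments (intercept + z)
Z = np.column_stack([np.ones(n), z])
gamma = np.linalg.lstsq(Z, x, rcond=None)[0]
x_hat = Z @ gamma

# second stage: regress y on the fitted x_hat
beta_2sls = (x_hat @ y) / (x_hat @ x)        # consistent for 2.0

beta_ols = (x @ y) / (x @ x)                 # biased: cov(x, u) != 0
```

The OLS slope converges to roughly 2.26 here (true value plus cov(x,u)/var(x)), while the 2SLS slope centers on 2.0.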
NASA Astrophysics Data System (ADS)
Maltz, Jonathan S.
2000-11-01
We present an algorithm of reduced computational cost which is able to estimate kinetic model parameters directly from dynamic ECT sinograms made up of temporally inconsistent projections. The algorithm exploits the extreme degree of parameter redundancy inherent in linear combinations of the exponential functions which represent the modes of first-order compartmental systems. The singular value decomposition is employed to find a small set of orthogonal functions, the linear combinations of which are able to accurately represent all modes within the physiologically anticipated range in a given study. The reduced-dimension basis is formed as the convolution of this orthogonal set with a measured input function. The Moore-Penrose pseudoinverse is used to find coefficients of this basis. Algorithm performance is evaluated at realistic count rates using MCAT phantom and clinical 99mTc-teboroxime myocardial study data. Phantom data are modelled as originating from a Poisson process. For estimates recovered from a single slice projection set containing 2.5×10⁵ total counts, recovered tissue responses compare favourably with those obtained using more computationally intensive methods. The corresponding kinetic parameter estimates (coefficients of the new basis) exhibit negligible bias, while parameter variances are low, falling within 30% of the Cramér-Rao lower bound.
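The reduced-basis idea, an SVD of a family of exponential modes followed by a Moore-Penrose fit, can be sketched as follows. The time grid, decay-rate range, and truncation tolerance are illustrative assumptions, and the convolution with a measured input function is omitted:

```python
import numpy as np

t = np.linspace(0, 60, 121)                   # assumed time grid (min)
decay_rates = np.linspace(0.01, 1.0, 200)     # assumed physiological range
modes = np.exp(-np.outer(decay_rates, t))     # candidate exponential modes

# SVD: a handful of orthogonal functions span all modes in the range
U, s, Vt = np.linalg.svd(modes, full_matrices=False)
k = int(np.sum(s / s[0] > 1e-5))              # effective rank is small
basis = Vt[:k]                                # reduced-dimension basis

# fit a noiseless "tissue response" by Moore-Penrose pseudoinverse
true_resp = 0.7 * np.exp(-0.2 * t) + 0.3 * np.exp(-0.05 * t)
coef = np.linalg.pinv(basis.T) @ true_resp    # least-squares coefficients
recon = basis.T @ coef                        # reconstruction from k functions
```

The exponential family is extremely redundant, so k is far smaller than the 200 candidate modes while the reconstruction error stays negligible.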
NASA Astrophysics Data System (ADS)
von der Linden, Wolfgang; Dose, Volker; von Toussaint, Udo
2014-06-01
Preface; Part I. Introduction: 1. The meaning of probability; 2. Basic definitions; 3. Bayesian inference; 4. Combinatorics; 5. Random walks; 6. Limit theorems; 7. Continuous distributions; 8. The central limit theorem; 9. Poisson processes and waiting times; Part II. Assigning Probabilities: 10. Transformation invariance; 11. Maximum entropy; 12. Qualified maximum entropy; 13. Global smoothness; Part III. Parameter Estimation: 14. Bayesian parameter estimation; 15. Frequentist parameter estimation; 16. The Cramér-Rao inequality; Part IV. Testing Hypotheses: 17. The Bayesian way; 18. The frequentist way; 19. Sampling distributions; 20. Bayesian vs frequentist hypothesis tests; Part V. Real World Applications: 21. Regression; 22. Inconsistent data; 23. Unrecognized signal contributions; 24. Change point problems; 25. Function estimation; 26. Integral equations; 27. Model selection; 28. Bayesian experimental design; Part VI. Probabilistic Numerical Techniques: 29. Numerical integration; 30. Monte Carlo methods; 31. Nested sampling; Appendixes; References; Index.
Examination of the Gender-Student Engagement Relationship at One University
ERIC Educational Resources Information Center
Tison, Emilee B.; Bateman, Tanner; Culver, Steven M.
2011-01-01
Research examining the relationship between gender and student engagement at the postsecondary level has provided mixed results. The current study explores two possible reasons for the lack of clarity regarding this relationship: improper parameter estimation resulting from a lack of multi-level analyses and inconsistent conceptions/measures of…
Doubly robust nonparametric inference on the average treatment effect.
Benkeser, D; Carone, M; van der Laan, M J; Gilbert, P B
2017-12-01
Doubly robust estimators are widely used to draw inference about the average effect of a treatment. Such estimators are consistent for the effect of interest if either one of two nuisance parameters is consistently estimated. However, if flexible, data-adaptive estimators of these nuisance parameters are used, double robustness does not readily extend to inference. We present a general theoretical study of the behaviour of doubly robust estimators of an average treatment effect when one of the nuisance parameters is inconsistently estimated. We contrast different methods for constructing such estimators and investigate the extent to which they may be modified to also allow doubly robust inference. We find that while targeted minimum loss-based estimation can be used to solve this problem very naturally, common alternative frameworks appear to be inappropriate for this purpose. We provide a theoretical study and a numerical evaluation of the alternatives considered. Our simulations highlight the need for and usefulness of these approaches in practice, while our theoretical developments have broad implications for the construction of estimators that permit doubly robust inference in other problems.
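The doubly robust property can be illustrated with the standard augmented inverse-probability-weighted (AIPW) estimator, deliberately pairing a correct propensity model with a misspecified outcome model. This is a generic sketch on simulated data, not the targeted minimum loss-based procedure studied in the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20000
x = rng.normal(size=n)                       # confounding covariate
p = 1.0 / (1.0 + np.exp(-x))                 # true propensity score
a = rng.binomial(1, p)                       # treatment assignment
y = 1.0 * a + x + rng.normal(size=n)         # true average effect = 1.0

# nuisance estimates: correct propensity, deliberately wrong outcome model
p_hat = p                                    # consistently estimated
m1_hat = np.zeros(n)                         # misspecified E[Y | A=1, X]
m0_hat = np.zeros(n)                         # misspecified E[Y | A=0, X]

# AIPW estimator: consistent if either nuisance is correct
psi = (m1_hat - m0_hat
       + a * (y - m1_hat) / p_hat
       - (1 - a) * (y - m0_hat) / (1 - p_hat))
ate_aipw = psi.mean()

# naive difference in means: badly confounded by x
ate_naive = y[a == 1].mean() - y[a == 0].mean()
```

The AIPW estimate recovers the true effect of 1.0 despite the zeroed-out outcome regressions, while the naive contrast is biased well upward by confounding.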
VLBI-derived troposphere parameters during CONT08
NASA Astrophysics Data System (ADS)
Heinkelmann, R.; Böhm, J.; Bolotin, S.; Engelhardt, G.; Haas, R.; Lanotte, R.; MacMillan, D. S.; Negusini, M.; Skurikhina, E.; Titov, O.; Schuh, H.
2011-07-01
Time-series of zenith wet and total troposphere delays as well as north and east gradients are compared, and zenith total delays (ZTD) are combined on the level of parameter estimates. Input data sets are provided by ten Analysis Centers (ACs) of the International VLBI Service for Geodesy and Astrometry (IVS) for the CONT08 campaign (12-26 August 2008). The inconsistent usage of meteorological data and models, such as mapping functions, causes systematics among the ACs, and differing parameterizations and constraints add noise to the troposphere parameter estimates. The empirical standard deviation of ZTD among the ACs with regard to an unweighted mean is 4.6 mm. The ratio of the analysis noise to the observation noise assessed by the operator/software impact (OSI) model is about 2.5. These and other effects have to be accounted for to improve the intra-technique combination of VLBI-derived troposphere parameters. While the largest systematics caused by inconsistent usage of meteorological data can be avoided and the application of different mapping functions can be considered by applying empirical corrections, the noise has to be modeled in the stochastic model of intra-technique combination. The application of different stochastic models shows no significant effects on the combined parameters but results in different mean formal errors: the mean formal errors of the combined ZTD are 2.3 mm (unweighted), 4.4 mm (diagonal), 8.6 mm [variance component (VC) estimation], and 8.6 mm (operator/software impact, OSI). On the one hand, the OSI model, i.e. the inclusion of off-diagonal elements in the cofactor-matrix, considers the reapplication of observations yielding a factor of about two for mean formal errors as compared to the diagonal approach. On the other hand, the combination based on VC estimation shows large differences among the VCs and exhibits a comparable scaling of formal errors.
Thus, for the combination of troposphere parameters a combination of the two extensions of the stochastic model is recommended.
ADMIT: a toolbox for guaranteed model invalidation, estimation and qualitative–quantitative modeling
Streif, Stefan; Savchenko, Anton; Rumschinski, Philipp; Borchers, Steffen; Findeisen, Rolf
2012-01-01
Summary: Often competing hypotheses for biochemical networks exist in the form of different mathematical models with unknown parameters. Considering available experimental data, it is then desired to reject model hypotheses that are inconsistent with the data, or to estimate the unknown parameters. However, these tasks are complicated because experimental data are typically sparse, uncertain, and are frequently only available in the form of qualitative if-then observations. ADMIT (Analysis, Design and Model Invalidation Toolbox) is a MATLAB-based tool for guaranteed model invalidation, state and parameter estimation. The toolbox allows the integration of quantitative measurement data, a priori knowledge of parameters and states, and qualitative information on the dynamic or steady-state behavior. A constraint satisfaction problem is automatically generated and algorithms are implemented for solving the desired estimation, invalidation or analysis tasks. The implemented methods build on convex relaxation and optimization and therefore provide guaranteed estimation results and certificates for invalidity. Availability: ADMIT, tutorials and illustrative examples are available free of charge for non-commercial use at http://ifatwww.et.uni-magdeburg.de/syst/ADMIT/ Contact: stefan.streif@ovgu.de PMID:22451270
ERIC Educational Resources Information Center
He, Yong
2013-01-01
Common test items play an important role in equating multiple test forms under the common-item nonequivalent groups design. Inconsistent item parameter estimates among common items can lead to large bias in equated scores for IRT true score equating. Current methods extensively focus on detection and elimination of outlying common items, which…
NASA Astrophysics Data System (ADS)
Hassanabadi, Amir Hossein; Shafiee, Masoud; Puig, Vicenc
2018-01-01
In this paper, sensor fault diagnosis of a singular delayed linear parameter varying (LPV) system is considered. In the considered system, the model matrices are dependent on some parameters which are real-time measurable. The case of inexact parameter measurements is considered, which is close to real situations. Fault diagnosis in this system is achieved via fault estimation. For this purpose, an augmented system is created by including sensor faults as additional system states. Then, an unknown input observer (UIO) is designed which estimates both the system states and the faults in the presence of measurement noise, disturbances and uncertainty induced by inexact measured parameters. The error dynamics and the original system constitute an uncertain system due to inconsistencies between the real and measured values of the parameters. Then, the robust estimation of the system states and the faults is achieved with H∞ performance and formulated with a set of linear matrix inequalities (LMIs). The designed UIO is also applicable for fault diagnosis of singular delayed LPV systems with unmeasurable scheduling variables. The efficiency of the proposed approach is illustrated with an example.
NASA Astrophysics Data System (ADS)
Shariff, Nurul Sima Mohamad; Ferdaos, Nur Aqilah
2017-08-01
Multicollinearity often leads to inconsistent and unreliable parameter estimates in regression analysis. This situation is more severe in the presence of outliers, which cause fatter tails in the error distribution than the normal distribution. The well-known procedure that is robust to the multicollinearity problem is the ridge regression method. This method, however, is expected to be affected by the presence of outliers due to some assumptions imposed in the modeling procedure. Thus, a robust version of the existing ridge method with some modification in the inverse matrix and the estimated response value is introduced. The performance of the proposed method is discussed and comparisons are made with several existing estimators, namely Ordinary Least Squares (OLS), ridge regression and robust ridge regression based on GM-estimates. The findings show that the proposed method is able to produce reliable parameter estimates in the presence of both multicollinearity and outliers in the data.
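The instability that ridge regression addresses can be sketched as follows. This shows plain ridge versus OLS under near-collinearity on simulated data with assumed values; the paper's robust modification (GM-estimates, modified inverse) is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
x1 = rng.normal(size=n)
x2 = x1 + 0.01 * rng.normal(size=n)          # nearly collinear with x1
X = np.column_stack([x1, x2])
beta_true = np.array([1.0, 1.0])
y = X @ beta_true + 0.1 * rng.normal(size=n)

# OLS: (X'X)^{-1} X'y -- X'X is nearly singular, so the individual
# coefficients are wildly unstable even though the fit is good
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)

# ridge: (X'X + kI)^{-1} X'y -- the penalty stabilizes the inverse
k = 1.0                                       # assumed ridge constant
beta_ridge = np.linalg.solve(X.T @ X + k * np.eye(2), X.T @ y)
```

The ridge coefficients stay close to (1, 1), whereas the OLS pair can swing far apart in opposite directions; both, however, predict the in-sample mean response accurately.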
Robust estimation procedure in panel data model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shariff, Nurul Sima Mohamad; Hamzah, Nor Aishah
2014-06-19
Panel data modeling has received great attention in econometric research recently. This is due to the availability of data sources and the interest in studying cross sections of individuals observed over time. However, problems may arise in modeling the panel in the presence of cross sectional dependence and outliers. Even though there are a few methods that take into consideration the presence of cross sectional dependence in the panel, these methods may provide inconsistent parameter estimates and inferences when outliers occur in the panel. As such, an alternative method that is robust to outliers and cross sectional dependence is introduced in this paper. The properties and construction of the confidence interval for the parameter estimates are also considered. The robustness of the procedure is investigated and comparisons are made to the existing method via simulation studies. Our results show that the robust approach is able to produce accurate and reliable parameter estimates under the conditions considered.
Diagnostics for generalized linear hierarchical models in network meta-analysis.
Zhao, Hong; Hodges, James S; Carlin, Bradley P
2017-09-01
Network meta-analysis (NMA) combines direct and indirect evidence comparing more than 2 treatments. Inconsistency arises when these 2 information sources differ. Previous work focuses on inconsistency detection, but little has been done on how to proceed after identifying inconsistency. The key issue is whether inconsistency changes an NMA's substantive conclusions. In this paper, we examine such discrepancies from a diagnostic point of view. Our methods seek to detect influential and outlying observations in NMA at a trial-by-arm level. These observations may have a large effect on the parameter estimates in NMA, or they may deviate markedly from other observations. We develop formal diagnostics for a Bayesian hierarchical model to check the effect of deleting any observation. Diagnostics are specified for generalized linear hierarchical NMA models and investigated for both published and simulated datasets. Results from our example dataset using either contrast- or arm-based models and from the simulated datasets indicate that the sources of inconsistency in NMA tend not to be influential, though results from the example dataset suggest that they are likely to be outliers. This mimics a familiar result from linear model theory, in which outliers with low leverage are not influential. Future extensions include incorporating baseline covariates and individual-level patient data. Copyright © 2017 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Zus, F.; Deng, Z.; Wickert, J.
2017-08-01
The impact of higher-order ionospheric effects on the estimated station coordinates and clocks in Global Navigation Satellite System (GNSS) Precise Point Positioning (PPP) is well documented in the literature. Simulation studies reveal that higher-order ionospheric effects have a significant impact on the estimated tropospheric parameters as well. In particular, the tropospheric north-gradient component is most affected for low-latitude and midlatitude stations around noon. In a practical example, we select a few hundred stations randomly distributed over the globe, in March 2012 (medium solar activity), and apply/do not apply ionospheric corrections in PPP. We compare the two sets of tropospheric parameters (ionospheric corrections applied/not applied) and find an overall good agreement with the prediction from the simulation study. The comparison of the tropospheric parameters with the tropospheric parameters derived from the ERA-Interim global atmospheric reanalysis shows that ionospheric corrections must be consistently applied in PPP and the orbit and clock generation. The inconsistent application results in an artificial station displacement which is accompanied by an artificial "tilting" of the troposphere. This finding is relevant in particular for those who consider advanced GNSS tropospheric products for meteorological studies.
Wicke, Jason; Dumas, Genevieve A; Costigan, Patrick A
2009-01-05
Modeling of the body segments to estimate segment inertial parameters is required in the kinetic analysis of human motion. A new geometric model for the trunk has been developed that uses various cross-sectional shapes to estimate segment volume and adopts a non-uniform density function that is gender-specific. The goal of this study was to test the accuracy of the new model for estimating the trunk's inertial parameters by comparing it to the more current models used in biomechanical research. Trunk inertial parameters estimated from dual X-ray absorptiometry (DXA) were used as the standard. Twenty-five female and 24 male college-aged participants were recruited for the study. Comparisons of the new model to the accepted models were accomplished by determining the error between the models' trunk inertial estimates and those from DXA. Results showed that the new model was more accurate across all inertial estimates than the other models. The new model had errors within 6.0% for both genders, whereas the other models had higher average errors ranging from 10% to over 50% and were much more inconsistent between the genders. In addition, there was little consistency in the level of accuracy for the other models when estimating the different inertial parameters. These results suggest that the new model provides more accurate and consistent trunk inertial estimates than the other models for both female and male college-aged individuals. However, similar studies need to be performed using other populations, such as the elderly or individuals with a distinct morphology (e.g., obese individuals). In addition, the effect of using different models on the outcome of kinetic parameters, such as joint moments and forces, needs to be assessed.
Pedrini, Paolo; Bragalanti, Natalia; Groff, Claudio
2017-01-01
Recently-developed methods that integrate multiple data sources arising from the same ecological processes have typically utilized structured data from well-defined sampling protocols (e.g., capture-recapture and telemetry). Despite this new methodological focus, the value of opportunistic data for improving inference about spatial ecological processes is unclear and, perhaps more importantly, no procedures are available to formally test whether parameter estimates are consistent across data sources and whether they are suitable for integration. Using data collected on the reintroduced brown bear population in the Italian Alps, a population of conservation importance, we combined data from three sources: traditional spatial capture-recapture data, telemetry data, and opportunistic data. We developed a fully integrated spatial capture-recapture (SCR) model that included a model-based test for data consistency to first compare model estimates using different combinations of data, and then, by acknowledging data-type differences, evaluate parameter consistency. We demonstrate that opportunistic data lend themselves naturally to integration within the SCR framework and highlight the value of opportunistic data for improving inference about space use and population size. This is particularly relevant in studies of rare or elusive species, where the number of spatial encounters is usually small and where additional observations are of high value. In addition, our results highlight the importance of testing and accounting for inconsistencies in spatial information from structured and unstructured data so as to avoid the risk of spurious or averaged estimates of space use and consequently, of population size. Our work supports the use of a single modeling framework to combine spatially-referenced data while also accounting for parameter consistency. PMID:28973034
Behavior data of battery and battery pack SOC estimation under different working conditions.
Zhang, Xu; Wang, Yujie; Yang, Duo; Chen, Zonghai
2016-12-01
This article provides a dataset of battery behavior under different operating conditions. Constant-current and dynamic stress test (DST) conditions were carried out to analyze battery discharging and charging features. The datasets were acquired at room temperature in April 2016. The shared data help clarify the battery pack state-of-charge (SOC) and battery inconsistency, as described in the article "An on-line estimation of battery pack parameters and state-of-charge using dual filters based on pack model" (X. Zhang, Y. Wang, D. Yang, et al., 2016) [1].
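As a rough, hedged illustration of the SOC bookkeeping such datasets support (not the dual-filter method of the cited article; the cell capacity and current profile are invented), a minimal open-loop Coulomb-counting baseline:

```python
import numpy as np

def coulomb_count_soc(current_a, dt_s, capacity_ah, soc0=1.0):
    """Open-loop SOC by Coulomb counting (discharge current positive).
    A minimal baseline only; the cited article instead uses dual filters
    on a pack model to correct drift and cell inconsistency."""
    dq_ah = np.cumsum(current_a) * dt_s / 3600.0   # charge removed [Ah]
    return np.clip(soc0 - dq_ah / capacity_ah, 0.0, 1.0)

# 1C discharge of a 2 Ah cell for one hour, sampled at 1 s
soc = coulomb_count_soc(np.full(3600, 2.0), 1.0, 2.0)
```

Filter-based estimators such as those in the cited article exist precisely because this open-loop integral drifts with current-sensor bias and capacity error.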
Consistent realization of Celestial and Terrestrial Reference Frames
NASA Astrophysics Data System (ADS)
Kwak, Younghee; Bloßfeld, Mathis; Schmid, Ralf; Angermann, Detlef; Gerstl, Michael; Seitz, Manuela
2018-03-01
The Celestial Reference System (CRS) is currently realized only by Very Long Baseline Interferometry (VLBI) because it is the space geodetic technique that enables observations in that frame. In contrast, the Terrestrial Reference System (TRS) is realized by means of the combination of four space geodetic techniques: Global Navigation Satellite System (GNSS), VLBI, Satellite Laser Ranging (SLR), and Doppler Orbitography and Radiopositioning Integrated by Satellite (DORIS). The Earth orientation parameters (EOP) are the link between the two types of systems, CRS and TRS. The EOP series of the International Earth Rotation and Reference Systems Service were combined from specifically selected series of various analysis centers. Other EOP series were generated by simultaneous estimation together with the TRF while the CRF was fixed. These computation approaches entail inherent inconsistencies between TRF, EOP, and CRF, also because the input data sets differ. A combined normal equation (NEQ) system containing all the parameters, i.e., TRF, EOP, and CRF, would overcome such inconsistencies. In this paper, we simultaneously estimate TRF, EOP, and CRF from an inter-technique combined NEQ using the latest GNSS, VLBI, and SLR data (2005-2015). The results show that the selection of local ties is most critical for the TRF. The combination of pole coordinates is beneficial for the CRF, whereas the combination of ΔUT1 results in clear rotations of the estimated CRF. However, the standard deviations of the EOP and the CRF improve through the inter-technique combination, which indicates the benefit of a common estimation of all parameters. It became evident that the common determination of TRF, EOP, and CRF systematically influences future ICRF computations at the level of several μas. Moreover, the CRF is influenced by up to 50 μas if the station coordinates and EOP are dominated by the satellite techniques.
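The inter-technique combination described above operates on normal-equation systems rather than raw observations. A hedged toy sketch (synthetic design matrices standing in for the real observation equations of the individual techniques) showing that stacking weighted NEQ contributions reproduces the jointly weighted least-squares solution:

```python
import numpy as np

rng = np.random.default_rng(5)
x_true = np.array([1.0, -2.0, 0.5])

# Two "techniques" observing the same parameters with different
# geometry and noise levels (purely synthetic stand-ins)
A1, s1 = rng.normal(size=(30, 3)), 0.2
A2, s2 = rng.normal(size=(20, 3)), 0.5
y1 = A1 @ x_true + rng.normal(scale=s1, size=30)
y2 = A2 @ x_true + rng.normal(scale=s2, size=20)

# Inter-technique combination on the normal-equation level
N = A1.T @ A1 / s1**2 + A2.T @ A2 / s2**2
b = A1.T @ y1 / s1**2 + A2.T @ y2 / s2**2
x_comb = np.linalg.solve(N, b)

# Equivalent one-step weighted least squares on the stacked system
A_w = np.vstack([A1 / s1, A2 / s2])
y_w = np.concatenate([y1 / s1, y2 / s2])
x_stack = np.linalg.lstsq(A_w, y_w, rcond=None)[0]
```

The equivalence is why NEQ stacking is a practical way to combine techniques: each analysis center only needs to exchange its (small) N and b, not its raw observations.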
Model Reduction via Principal Component Analysis and Markov Chain Monte Carlo (MCMC) Methods
NASA Astrophysics Data System (ADS)
Gong, R.; Chen, J.; Hoversten, M. G.; Luo, J.
2011-12-01
Geophysical and hydrogeological inverse problems often include a large number of unknown parameters, ranging from hundreds to millions, depending on the parameterization and the problem at hand. This makes inverse estimation and uncertainty quantification very challenging, especially for problems in two- or three-dimensional spatial domains. Model reduction techniques have the potential to mitigate the curse of dimensionality by reducing the total number of unknowns while still describing the complex subsurface system adequately. In this study, we explore the use of principal component analysis (PCA) and Markov chain Monte Carlo (MCMC) sampling methods for model reduction through the use of synthetic datasets. We compare the performance of three different but closely related model reduction approaches: (1) PCA with geometric sampling (referred to as 'Method 1'), (2) PCA with MCMC sampling (referred to as 'Method 2'), and (3) PCA with MCMC sampling and inclusion of random effects (referred to as 'Method 3'). We consider a simple convolution model with five unknown parameters, as our goal is to understand and visualize the advantages and disadvantages of each method by comparing their inversion results with the corresponding analytical solutions. We generate synthetic data with added noise and invert them under two different situations: (1) the noisy data and the covariance matrix used for the PCA are consistent (referred to as the unbiased case), and (2) the noisy data and the covariance matrix are inconsistent (referred to as the biased case). In the unbiased case, comparison between the analytical solutions and the inversion results shows that all three methods provide good estimates of the true values, and Method 1 is computationally more efficient.
In terms of uncertainty quantification, Method 1 performs poorly because of the relatively small number of samples obtained, Method 2 performs best, and Method 3 overestimates uncertainty due to the inclusion of random effects. However, in the biased case, only Method 3 correctly estimates all the unknown parameters; Methods 1 and 2 provide incorrect values for the biased parameters. The synthetic case study demonstrates that if the covariance matrix used for the PCA is inconsistent with the true models, the PCA methods with geometric or MCMC sampling will provide incorrect estimates.
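A minimal sketch of the PCA reduction step common to all three methods, on a synthetic correlated parameter field (the covariance model and sizes are invented, and the subsequent sampling step is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic prior ensemble of spatially correlated parameter fields
n_samples, n_params = 500, 50
idx = np.arange(n_params)
C = np.exp(-np.abs(idx[:, None] - idx[None, :]) / 10.0)  # prior covariance
prior = rng.multivariate_normal(np.zeros(n_params), C, size=n_samples)

# PCA of the ensemble covariance: keep the k leading components
cov = np.cov(prior, rowvar=False)
eigval, eigvec = np.linalg.eigh(cov)
order = np.argsort(eigval)[::-1]
k = 5
basis = eigvec[:, order[:k]]             # n_params x k reduction basis
explained = eigval[order[:k]].sum() / eigval.sum()

# A model vector is now represented by k coefficients instead of 50;
# geometric or MCMC sampling would then explore this reduced space
m = prior[0]
m_reduced = basis.T @ m                  # project
m_back = basis @ m_reduced               # reconstruct
```

If the assumed covariance C disagrees with the true model statistics (the "biased case" above), the truncated basis cannot represent the true field, which is the source of the incorrect estimates the abstract reports.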
Multi-scale comparison of source parameter estimation using empirical Green's function approach
NASA Astrophysics Data System (ADS)
Chen, X.; Cheng, Y.
2015-12-01
Analysis of earthquake source parameters requires correction for path effects, site responses, and instrument responses. The empirical Green's function (EGF) method is one of the most effective ways to remove path effects and station responses, by taking the spectral ratio between a larger and a smaller event. The traditional EGF method requires identifying suitable event pairs and analyzing each event individually. This allows high-quality estimates for strictly selected events; however, the quantity of resolvable source parameters is limited, which challenges the interpretation of spatio-temporal coherency. On the other hand, methods that exploit the redundancy of event-station pairs have been proposed, which use stacking to obtain systematic source parameter estimates for a large number of events at once. This allows large quantities of events to be examined systematically, facilitating analysis of spatio-temporal patterns and scaling relationships. However, it is unclear how much resolution is sacrificed in the process. In addition to the empirical Green's function calculation, the choice of model parameters and fitting methods also leads to biases. Here, using two regionally focused arrays, the OBS array in the Mendocino region and the borehole array in the Salton Sea geothermal field, we compare the results from large-scale stacking analysis, small-scale cluster analysis, and single event-pair analysis with different fitting methods across two very different tectonic environments, in order to quantify the consistency and inconsistency in source parameter estimation and the associated problems.
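A hedged sketch of the spectral-ratio idea underlying the EGF method, assuming Brune-type source spectra and a shared, synthetic path term (all corner frequencies, moments, and the attenuation factor are invented):

```python
import numpy as np

def brune(f, omega0, fc):
    """Brune-type far-field displacement source spectrum (assumed model)."""
    return omega0 / (1.0 + (f / fc) ** 2)

f = np.linspace(0.5, 50.0, 500)

# Target and EGF event recorded at the same station: shared path/site term
path = np.exp(-f / 20.0)                    # illustrative attenuation
target = brune(f, omega0=100.0, fc=2.0) * path
egf = brune(f, omega0=1.0, fc=10.0) * path

# The spectral ratio cancels the common path term, isolating the sources
ratio = target / egf
source_ratio = brune(f, 100.0, 2.0) / brune(f, 1.0, 10.0)
```

The low-frequency plateau of the ratio recovers the relative moment, and the roll-off constrains the corner frequencies; fitting this ratio is the step where the model and fitting-method choices discussed above introduce bias.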
Paule‐Mandel estimators for network meta‐analysis with random inconsistency effects
Veroniki, Areti Angeliki; Law, Martin; Tricco, Andrea C.; Baker, Rose
2017-01-01
Network meta‐analysis is used to simultaneously compare multiple treatments in a single analysis. However, network meta‐analyses may exhibit inconsistency, where direct and different forms of indirect evidence are not in agreement with each other, even after allowing for between‐study heterogeneity. Models for network meta‐analysis with random inconsistency effects have the dual aim of allowing for inconsistencies and estimating average treatment effects across the whole network. To date, two classical estimation methods for fitting this type of model have been developed: a method of moments that extends DerSimonian and Laird's univariate method, and maximum likelihood estimation. However, the Paule and Mandel estimator is another recommended classical estimation method for univariate meta‐analysis. In this paper, we extend the Paule and Mandel method so that it can be used to fit models for network meta‐analysis with random inconsistency effects. We apply all three estimation methods to a variety of examples that have been used previously, and we also examine a challenging new dataset that is highly heterogeneous. We perform a simulation study based on this new example. We find that the proposed Paule and Mandel method performs satisfactorily and generally better than the previously proposed method of moments because it provides more accurate inferences. Furthermore, the Paule and Mandel method possesses some advantages over likelihood‐based methods because it is both semiparametric and requires no convergence diagnostics. Although restricted maximum likelihood estimation remains the gold standard, the proposed methodology is a fully viable alternative to this and other estimation methods. PMID:28585257
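For reference, the univariate Paule and Mandel estimator that the paper extends solves Q(tau^2) = k - 1 for the between-study variance; a minimal sketch using bisection (the data values are invented, and the network extension is not shown):

```python
import numpy as np

def q_stat(tau2, y, v):
    """Generalized Q statistic at a trial between-study variance."""
    w = 1.0 / (v + tau2)
    mu = np.sum(w * y) / np.sum(w)
    return np.sum(w * (y - mu) ** 2)

def paule_mandel_tau2(y, v, hi=1e3, iters=200):
    """Univariate Paule-Mandel estimate of tau^2 by bisection:
    find tau2 with Q(tau2) = k - 1 (Q is decreasing in tau2)."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    k = len(y)
    if q_stat(0.0, y, v) <= k - 1:
        return 0.0                       # no excess heterogeneity
    lo = 0.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if q_stat(mid, y, v) > k - 1:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Equal within-study variances make the solution available in closed form
y = np.array([0.0, 0.5, 1.0, 1.5])
v = np.array([0.1, 0.1, 0.1, 0.1])
tau2 = paule_mandel_tau2(y, v)
```

Because the method only matches a moment-like statistic to its expectation, it is semiparametric and needs no convergence diagnostics, the two advantages the abstract highlights.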
NASA Astrophysics Data System (ADS)
El Gharamti, M.; Bethke, I.; Tjiputra, J.; Bertino, L.
2016-02-01
Given the recent strong international focus on developing new data assimilation systems for biological models, we present in this comparative study the application of newly developed state-parameter estimation tools to an ocean ecosystem model. It is well known that the available physical models are still too simple compared to the complexity of ocean biology. Furthermore, various biological parameters remain poorly known, and misspecification of such parameters can lead to large model errors. The standard joint state-parameter augmentation technique using the ensemble Kalman filter (the stochastic EnKF) has been extensively tested in many geophysical applications. Some of these assimilation studies reported that jointly updating the state and the parameters might introduce significant inconsistency, especially for strongly nonlinear models. This is usually the case for ecosystem models, particularly during the spring bloom. A better treatment of the estimation problem is often achieved by separating the update of the state and the parameters using the so-called dual EnKF. The dual filter is computationally more expensive than the joint EnKF but is expected to perform more accurately. Using a similar separation strategy, we propose a new EnKF estimation algorithm in which we apply one-step-ahead smoothing to the state. The new state-parameter estimation scheme is derived in a consistent Bayesian filtering framework and results in separate update steps for the state and the parameters. Unlike the classical filtering path, the new scheme starts with an update step, and a model propagation step is performed afterwards. We test the performance of the new smoothing-based scheme against the standard EnKF in a one-dimensional configuration of the Norwegian Earth System Model (NorESM) in the North Atlantic.
We use nutrient profile data (down to 2000 m depth) and surface CO2 partial pressure measurements from the Mike weather station (66°N, 2°E) to estimate different biological parameters of phytoplankton and zooplankton. We analyze the performance of the filters in terms of complexity and accuracy of the state and parameter estimates.
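A hedged sketch of the standard joint (augmented-state) EnKF that serves as the baseline in such comparisons, on a deliberately simple scalar model rather than NorESM (the dynamics, noise levels, and parameter values are all illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n_ens, n_steps = 100, 50
theta_true, x_true = 0.8, 1.0          # illustrative scalar model
obs_var = 0.05

# Augmented ensemble: each member carries [state x, parameter theta]
ens = np.column_stack([rng.normal(0.0, 1.0, n_ens),
                       rng.normal(0.5, 0.15, n_ens)])

for _ in range(n_steps):
    # Forecast: x_{k+1} = theta * x_k + 1 (parameter is persistent)
    x_true = theta_true * x_true + 1.0
    ens[:, 0] = ens[:, 1] * ens[:, 0] + 1.0
    y = x_true + rng.normal(0.0, np.sqrt(obs_var))

    # Stochastic EnKF update of the augmented vector (H = [1, 0]):
    # the parameter is corrected through its cross-covariance with x
    cov = np.cov(ens, rowvar=False)
    gain = cov[:, 0] / (cov[0, 0] + obs_var)
    innov = y + rng.normal(0.0, np.sqrt(obs_var), n_ens) - ens[:, 0]
    ens += gain[None, :] * innov[:, None]

theta_est = ens[:, 1].mean()
```

The dual and smoothing-based schemes discussed above replace this single augmented update with separate, sequenced updates of state and parameters.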
Reboussin, Beth A.; Ialongo, Nicholas S.
2011-01-01
Attention deficit hyperactivity disorder (ADHD) is a neurodevelopmental disorder that is most often diagnosed in childhood, with symptoms often persisting into adulthood. Elevated rates of substance use disorders have been evidenced among those with ADHD, but recent research focusing on the relationship between subtypes of ADHD and specific drugs is inconsistent. We propose a latent transition model (LTM) to guide our understanding of how drug use progresses, in particular marijuana use, while accounting for the measurement error that is often found in self-reported substance use data. We extend the LTM to include a latent class predictor to represent empirically derived ADHD subtypes that do not rely on meeting specific diagnostic criteria. We begin by fitting two separate latent class analysis (LCA) models using second-order estimating equations: a longitudinal LCA model to define stages of marijuana use, and a cross-sectional LCA model to define ADHD subtypes. The LTM parameters describing the probability of transitioning between the LCA-defined stages of marijuana use, and the influence of the LCA-defined ADHD subtypes on these transition rates, are then estimated using a set of first-order estimating equations given the LCA parameter estimates. A robust estimate of the LTM parameter variance that accounts for the variation due to the estimation of the two sets of LCA parameters is proposed. Solving three sets of estimating equations enables us to determine the underlying latent class structures independently of the model for the transition rates, and making simplifying assumptions about the correlation structure at each stage reduces the computational complexity. PMID:21461139
NASA Astrophysics Data System (ADS)
Hu, Shun; Shi, Liangsheng; Zha, Yuanyuan; Williams, Mathew; Lin, Lin
2017-12-01
Improvements to agricultural water and crop management require detailed information on crop and soil states and their evolution. Data assimilation provides an attractive way of obtaining this information by integrating measurements with a model in a sequential manner. However, data assimilation for the soil-water-atmosphere-plant (SWAP) system still lacks comprehensive exploration, owing to the large number of variables and parameters in the system. In this study, simultaneous state-parameter estimation using the ensemble Kalman filter (EnKF) was employed to evaluate data assimilation performance and to provide advice on measurement design for the SWAP system. The results demonstrated that a proper selection of the state vector is critical to effective data assimilation. In particular, updating the development stage was able to avoid the negative effect of "phenological shift", which was caused by contrasting phenological stages across ensemble members. The simultaneous state-parameter estimation (SSPE) assimilation strategy outperformed the updating-state-only (USO) strategy because of its ability to alleviate the inconsistency between model variables and parameters. However, the performance of the SSPE strategy could deteriorate with an increasing number of uncertain parameters as a result of soil stratification and limited knowledge of crop parameters. In addition to the most easily available surface soil moisture (SSM) and leaf area index (LAI) measurements, deep soil moisture, grain yield, or other auxiliary data are required to provide sufficient constraints on parameter estimation and to ensure data assimilation performance. This study provides insight into the response of soil moisture and grain yield to data assimilation in the SWAP system and is helpful for soil moisture movement and crop growth modeling and for measurement design in practice.
An application of robust ridge regression model in the presence of outliers to real data problem
NASA Astrophysics Data System (ADS)
Shariff, N. S. Md.; Ferdaos, N. A.
2017-09-01
Multicollinearity and outliers often lead to inconsistent and unreliable parameter estimates in regression analysis. A well-known procedure that is robust to the multicollinearity problem is ridge regression. This method, however, is believed to be affected by the presence of outliers. A combination of GM-estimation and a ridge parameter that is robust to both problems is of interest in this study. Both techniques are therefore employed to investigate the relationship between stock market prices and macroeconomic variables in Malaysia, as the data set is suspected to involve both multicollinearity and outliers. Four macroeconomic factors are selected for this study: the Consumer Price Index (CPI), Gross Domestic Product (GDP), the Base Lending Rate (BLR) and Money Supply (M1). The results demonstrate that the proposed procedure produces reliable results in the presence of multicollinearity and outliers in real data.
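The ridge component of such a procedure can be sketched on synthetic, nearly collinear data (the robust GM-estimation step is omitted for brevity, and all data values are invented):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.01, size=n)     # nearly collinear regressor
X = np.column_stack([np.ones(n), x1, x2])
y = 1.0 + 2.0 * x1 + 2.0 * x2 + rng.normal(scale=0.5, size=n)

def ridge(X, y, k):
    """Ridge estimator (X'X + kI)^(-1) X'y; k = 0 reduces to OLS."""
    return np.linalg.solve(X.T @ X + k * np.eye(X.shape[1]), X.T @ y)

beta_ols = ridge(X, y, 0.0)       # individual slopes highly unstable
beta_ridge = ridge(X, y, 1.0)     # shrinkage stabilizes the estimates
```

With near-collinearity, OLS can split the combined slope of 4 arbitrarily between the two regressors; the ridge penalty shrinks that ill-determined difference while leaving the well-determined sum nearly intact.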
Page, Morgan T.; Van Der Elst, Nicholas; Hardebeck, Jeanne L.; Felzer, Karen; Michael, Andrew J.
2016-01-01
Following a large earthquake, seismic hazard can be orders of magnitude higher than the long‐term average as a result of aftershock triggering. Because of this heightened hazard, emergency managers and the public demand rapid, authoritative, and reliable aftershock forecasts. In the past, U.S. Geological Survey (USGS) aftershock forecasts following large global earthquakes have been released on an ad hoc basis with inconsistent methods, and in some cases aftershock parameters adapted from California. To remedy this, the USGS is currently developing an automated aftershock product based on the Reasenberg and Jones (1989) method that will generate more accurate forecasts. To better capture spatial variations in aftershock productivity and decay, we estimate regional aftershock parameters for sequences within the García et al. (2012) tectonic regions. We find that regional variations for mean aftershock productivity reach almost a factor of 10. We also develop a method to account for the time‐dependent magnitude of completeness following large events in the catalog. In addition to estimating average sequence parameters within regions, we develop an inverse method to estimate the intersequence parameter variability. This allows for a more complete quantification of the forecast uncertainties and Bayesian updating of the forecast as sequence‐specific information becomes available.
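A hedged sketch of the Reasenberg and Jones (1989) rate model underlying such forecasts, with generic placeholder parameter values rather than the regionalized estimates derived in this work:

```python
import numpy as np

def rj_rate(t_days, mag, mainshock_mag, a=-1.67, b=0.91, c=0.05, p=1.08):
    """Reasenberg-Jones aftershock rate lambda(t) for M >= mag.
    The a, b, c, p values here are generic placeholders, not the
    regional estimates of this study."""
    return 10.0 ** (a + b * (mainshock_mag - mag)) * (t_days + c) ** (-p)

def expected_count(t1, t2, mag, mainshock_mag, n=20000):
    """Expected aftershock count in [t1, t2] days (trapezoidal rule)."""
    t = np.linspace(t1, t2, n)
    r = rj_rate(t, mag, mainshock_mag)
    return np.sum(0.5 * (r[1:] + r[:-1]) * np.diff(t))

# Expected M>=5 aftershocks in the week after an M7 mainshock
n_week1 = expected_count(0.0, 7.0, 5.0, 7.0)
```

Regionalizing the productivity term (the factor-of-10 variations reported above) changes this expected count proportionally, which is why region-specific parameters matter for operational forecasts.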
A mechanistic modeling and data assimilation framework for Mojave Desert ecohydrology
Ng, Gene-Hua Crystal.; Bedford, David; Miller, David
2014-01-01
This study demonstrates and addresses challenges in coupled ecohydrological modeling in deserts, which arise due to unique plant adaptations, marginal growing conditions, slow net primary production rates, and highly variable rainfall. We consider model uncertainty from both structural and parameter errors and present a mechanistic model for the shrub Larrea tridentata (creosote bush) under conditions found in the Mojave National Preserve in southeastern California (USA). Desert-specific plant and soil features are incorporated into the CLM-CN model by Oleson et al. (2010). We then develop a data assimilation framework using the ensemble Kalman filter (EnKF) to estimate model parameters based on soil moisture and leaf-area index observations. A new implementation procedure, the “multisite loop EnKF,” tackles parameter estimation difficulties found to affect desert ecohydrological applications. Specifically, the procedure iterates through data from various observation sites to alleviate adverse filter impacts from non-Gaussianity in small desert vegetation state values. It also readjusts inconsistent parameters and states through a model spin-up step that accounts for longer dynamical time scales due to infrequent rainfall in deserts. Observation error variance inflation may also be needed to help prevent divergence of estimates from true values. Synthetic test results highlight the importance of adequate observations for reducing model uncertainty, which can be achieved through data quality or quantity.
NASA Astrophysics Data System (ADS)
Teeples, Ronald; Glyer, David
1987-05-01
Both policy and technical analyses of water delivery systems have been based on cost functions that are inconsistent with, or are incomplete representations of, the neoclassical production functions of economics. We present a full-featured production function model of water delivery that can be estimated from a multiproduct, dual cost function. The model features implicit prices for own-water inputs and is implemented as a jointly estimated system of input share equations and a translog cost function. Likelihood ratio tests are performed showing that a minimally constrained, full-featured production function is a necessary specification of the water delivery operations in our sample. This, plus the model's highly efficient and economically correct parameter estimates, confirms the usefulness of a production function approach to modeling the economic activities of water delivery systems.
NASA Astrophysics Data System (ADS)
Martinsson, J.
2013-03-01
We propose methods for robust Bayesian inference of the hypocentre in the presence of poor, inconsistent and insufficient phase arrival times. The objectives are to increase the robustness, accuracy and precision by introducing heavy-tailed distributions and an informative prior distribution of the seismicity. The effects of the proposed distributions are studied under real measurement conditions in two underground mine networks and validated using 53 blasts with known hypocentres. To increase the robustness against poor, inconsistent or insufficient arrivals, a Gaussian mixture model is used as the hypocentre prior distribution to describe the seismically active areas, with parameters estimated from previously located events in the region. The prior is truncated to constrain the solution to valid geometries, for example below the ground surface and excluding known cavities, voids and fractured zones. To reduce the sensitivity to outliers, different heavy-tailed distributions are evaluated for modelling the likelihood of the arrivals given the hypocentre and the origin time. Among these distributions, the multivariate t-distribution is shown to produce the best overall performance, with tail mass that adapts to the observed data. Hypocentre and uncertainty-region estimates are based on simulations from the posterior distribution using Markov chain Monte Carlo techniques. Velocity graphs (equivalent to traveltime graphs) are estimated using blasts from known locations and applied to reduce the main uncertainties and thereby the final estimation error. To focus on the behaviour and performance of the proposed distributions, a basic single-event Bayesian procedure is considered in this study for clarity.
Estimation results are shown with different distributions, with and without a prior distribution of seismicity, with an incorrect prior distribution, with and without error compensation, with and without error description, with insufficient arrival times, and in the presence of significant outliers. A particular focus is on visual results and comparisons to give a better understanding of the Bayesian advantage and to show the effects of heavy-tailed distributions and informative prior information on real data.
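A minimal illustration of why the heavy-tailed likelihood helps: the same residuals fit with Gaussian and Student-t likelihoods (the residual values are invented, and a grid search stands in for the MCMC sampling used above):

```python
import numpy as np

# Arrival-time residuals (s) at six stations; the last is a gross outlier
res = np.array([0.02, -0.01, 0.03, 0.00, -0.02, 2.5])

def nll_gauss(mu, x, s=0.05):
    """Gaussian negative log-likelihood (up to constants)."""
    return np.sum(0.5 * ((x - mu) / s) ** 2)

def nll_t(mu, x, s=0.05, nu=3.0):
    """Student-t negative log-likelihood (up to constants); heavy tails
    bound the influence of any single bad pick."""
    z = (x - mu) / s
    return np.sum(0.5 * (nu + 1.0) * np.log1p(z * z / nu))

grid = np.linspace(-1.0, 3.0, 4001)
mu_gauss = grid[np.argmin([nll_gauss(m, res) for m in grid])]
mu_t = grid[np.argmin([nll_t(m, res) for m in grid])]
```

The Gaussian fit is dragged toward the outlier, while the t fit stays with the consistent cluster, which mirrors the robustness behaviour reported for the multivariate t-distribution.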
Wang, Ke; Ye, Xin; Pendyala, Ram M.; Zou, Yajie
2017-01-01
A semi-nonparametric generalized multinomial logit model, formulated using orthonormal Legendre polynomials to extend the standard Gumbel distribution, is presented in this paper. The resulting semi-nonparametric function can represent a probability density function for a large family of multimodal distributions. The model has a closed-form log-likelihood function that facilitates model estimation. The proposed method is applied to model commute mode choice among four alternatives (auto, transit, bicycle and walk) using travel behavior data from Aargau, Switzerland. Comparisons between the multinomial logit model and the proposed semi-nonparametric model show that violations of the standard Gumbel distribution assumption lead to considerable inconsistency in parameter estimates and model inferences. PMID:29073152
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sanders, R.W.; Porter, K.G.
Inhibitors of eucaryotes (cycloheximide and amphotericin B) and procaryotes (penicillin and chloramphenicol) were used to estimate bacterivory and bacterial production in a eutrophic lake. Bacterial production appeared to be slightly greater than protozoan grazing in the aerobic waters of Lake Oglethorpe. Use of penicillin and cycloheximide yielded inconsistent results in anaerobic water and in aerobic water when bacterial production was low. Production measured by inhibiting eucaryotes with cycloheximide did not always agree with [³H]thymidine estimates or differential filtration methods. Laboratory experiments showed that several common freshwater protozoans continued to swim and ingest bacterium-sized latex beads in the presence of the eucaryote inhibitor. Penicillin also affected grazing rates of some ciliates. The authors recommend that caution and a corroborating method be used when estimating ecologically important parameters with specific inhibitors.
Estimation of parameters of dose volume models and their confidence limits
NASA Astrophysics Data System (ADS)
van Luijk, P.; Delvigne, T. C.; Schilstra, C.; Schippers, J. M.
2003-07-01
Predictions of the normal-tissue complication probability (NTCP) for the ranking of treatment plans are based on fits of dose-volume models to clinical and/or experimental data. In the literature, several different fit methods are used. In this work, frequently used methods and techniques for fitting NTCP models to dose-response data to establish dose-volume effects are discussed. The techniques are tested for their usability with dose-volume data and NTCP models. Different methods to estimate the confidence intervals of the model parameters are part of this study. A primary dataset, serving as the reference for this study and describable by the NTCP model, was generated from a critical-volume (CV) model with biologically realistic parameters. The CV model was fitted to this dataset. From the resulting parameters and the CV model, 1000 secondary datasets were generated by Monte Carlo simulation. All secondary datasets were fitted to obtain 1000 parameter sets of the CV model. Thus the 'real' spread in fit results due to statistical spread in the data is obtained, and it has been compared with estimates of the confidence intervals obtained by different methods applied to the primary dataset. The confidence limits of the parameters of one dataset were estimated employing the covariance matrix, the jackknife method, and the likelihood landscape directly. These results were compared with the spread of the parameters obtained from the secondary parameter sets. For the estimation of confidence intervals on NTCP predictions, three methods were tested. Firstly, propagation of errors using the covariance matrix was used. Secondly, the width of a bundle of curves resulting from parameters within the one-standard-deviation region of the likelihood space was investigated. Thirdly, many parameter sets and their likelihoods were used to create a likelihood-weighted probability distribution of the NTCP.
It is concluded that for the type of dose-response data used here, only a full likelihood analysis will produce reliable results. The often-used approximations, such as the use of the covariance matrix, produce inconsistent confidence limits on both the parameter sets and the resulting NTCP values.
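A hedged sketch of a likelihood-landscape confidence interval for a single-parameter sigmoid dose-response curve (the model, D50 value, and data are invented; the CV model used above is more elaborate):

```python
import numpy as np

rng = np.random.default_rng(3)

def ntcp(dose, d50, k=5.0):
    """Simple sigmoid NTCP curve (logistic in dose; slope k fixed)."""
    return 1.0 / (1.0 + np.exp(-(dose - d50) / k))

# Simulated complication data at five dose levels, 40 subjects each
doses = np.repeat(np.array([40.0, 50.0, 60.0, 70.0, 80.0]), 40)
events = rng.random(doses.size) < ntcp(doses, d50=60.0)

# Log-likelihood over a grid of candidate D50 values
d50_grid = np.linspace(40.0, 80.0, 801)
p = ntcp(doses[None, :], d50_grid[:, None])
ll = np.sum(np.where(events[None, :], np.log(p), np.log(1 - p)), axis=1)

mle = d50_grid[np.argmax(ll)]
# Likelihood-landscape 95% CI: all D50 with logL within 1.92 of the max
inside = d50_grid[ll >= ll.max() - 1.92]
ci = (inside.min(), inside.max())
```

Reading the interval directly off the likelihood landscape, as here, captures any asymmetry that a covariance-matrix approximation would miss.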
Updated reduced CMB data and constraints on cosmological parameters
NASA Astrophysics Data System (ADS)
Cai, Rong-Gen; Guo, Zong-Kuan; Tang, Bo
2015-07-01
We obtain the reduced CMB data {lA, R, z*} from WMAP9, WMAP9+BKP, Planck+WP and Planck+WP+BKP for the ΛCDM and wCDM models, with or without spatial curvature. We then use these reduced CMB data in combination with low-redshift observations to put constraints on cosmological parameters. We find that including BKP results in a higher value of the Hubble constant, especially when the equation of state (EOS) of dark energy and the curvature are allowed to vary. For the ΛCDM model with curvature, the estimate of the Hubble constant from Planck+WP+Lensing is inconsistent with the one derived from Planck+WP+BKP at about the 1.2σ confidence level (CL).
Estimating the Geocenter from GNSS Observations
NASA Astrophysics Data System (ADS)
Dach, Rolf; Michael, Meindl; Beutler, Gerhard; Schaer, Stefan; Lutz, Simon; Jäggi, Adrian
2014-05-01
The satellites of the Global Navigation Satellite Systems (GNSS) orbit the Earth according to the laws of celestial mechanics. As a consequence, the satellites are sensitive to the coordinates of the center of mass of the Earth. The coordinates of the (ground) tracking stations refer to the center of figure as the conventional origin of the reference frame. The difference between the center of mass and the center of figure is the instantaneous geocenter. By this definition, global GNSS solutions are sensitive to the geocenter. Several studies demonstrated strong correlations of GNSS-derived geocenter coordinates with parameters intended to absorb radiation pressure effects acting on the GNSS satellites, and with GNSS satellite clock parameters. One should thus ask to what extent these satellite-related parameters absorb (or hide) the geocenter information. A clean simulation study has been performed to answer this question. The simulation environment allows, in particular, the introduction of user-defined shifts of the geocenter (systematic inconsistencies between the satellites' and stations' reference frames). These geocenter shifts may be recovered by the mentioned parameters, provided they are set up in the analysis. If the geocenter coordinates are not estimated, one may find out which other parameters absorb the user-defined shifts of the geocenter and to what extent. Furthermore, the simulation environment allows extraction of the correlation matrix from the a posteriori covariance matrix to study the correlations between different parameter types of the GNSS analysis system. Our results show high degrees of correlation between geocenter coordinates, orbit-related parameters, and satellite clock parameters. These correlations are of the same order of magnitude as the correlations between station heights, troposphere, and receiver clock parameters in any regional or global GNSS network analysis.
If such correlations are accepted in a GNSS analysis when estimating station coordinates, geocenter coordinates must be considered mathematically estimable in a global GNSS analysis. The geophysical interpretation may of course become difficult, e.g., if insufficient orbit models are used.
On land-use modeling: A treatise of satellite imagery data and misclassification error
NASA Astrophysics Data System (ADS)
Sandler, Austin M.
Recent availability of satellite-based land-use data sets, including data sets with contiguous spatial coverage over large areas, relatively long temporal coverage, and fine-scale land cover classifications, is providing new opportunities for land-use research. However, care must be taken when working with these datasets due to misclassification error, which causes inconsistent parameter estimates in the discrete choice models typically used to model land use. I therefore adapt the empirical correction methods developed for other contexts (e.g., epidemiology) so that they can be applied to land-use modeling. I then use a Monte Carlo simulation, and an empirical application using actual satellite imagery data from the Northern Great Plains, to compare the results of a traditional model ignoring misclassification to those from models accounting for misclassification. Results from both the simulation and the application indicate that ignoring misclassification leads to biased results. Even seemingly insignificant levels of misclassification error (e.g., 1%) result in biased parameter estimates, which alter marginal effects enough to affect policy inference. At the levels of misclassification typical in current satellite imagery datasets (e.g., as high as 35%), ignoring misclassification can lead to systematically erroneous land-use probabilities and substantially biased marginal effects. The correction methods I propose, however, generate consistent parameter estimates and therefore consistent estimates of marginal effects and predicted land-use probabilities.
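A hedged sketch of a misclassification-corrected logit likelihood on synthetic data (the misclassification rates, the one-dimensional grid, and the intercept fixed at its true value are illustrative simplifications, not the estimator of this work):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4000
x = rng.normal(size=n)
p_true = 1.0 / (1.0 + np.exp(-x))          # true logit: slope 1, intercept 0
y = rng.random(n) < p_true                  # true (unobserved) land use

# Satellite layer misclassifies: P(w=1|y=0) = a0, P(w=0|y=1) = a1
a0, a1 = 0.15, 0.15
flip = rng.random(n)
w = np.where(y, flip >= a1, flip < a0)      # observed (noisy) class

def nll(b1, obs, a0=0.0, a1=0.0):
    """Logit negative log-likelihood (intercept fixed at its true value 0
    to keep the sketch one-dimensional); a0, a1 fold misclassification
    into P(w=1|x) = a0 + (1 - a0 - a1) * Lambda(b1 * x)."""
    p = 1.0 / (1.0 + np.exp(-b1 * x))
    q = a0 + (1.0 - a0 - a1) * p
    return -np.sum(np.where(obs, np.log(q), np.log(1.0 - q)))

b1_grid = np.linspace(0.2, 1.6, 141)
naive = b1_grid[np.argmin([nll(b, w) for b in b1_grid])]
corrected = b1_grid[np.argmin([nll(b, w, a0, a1) for b in b1_grid])]
```

The naive fit is attenuated toward zero, while folding the known misclassification rates into the observation probability recovers a slope near the truth, the same qualitative pattern the Monte Carlo results above describe.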
Riley, Richard D; Ensor, Joie; Jackson, Dan; Burke, Danielle L
2017-01-01
Many meta-analysis models contain multiple parameters, for example due to multiple outcomes, multiple treatments or multiple regression coefficients. In particular, meta-regression models may contain multiple study-level covariates, and one-stage individual participant data meta-analysis models may contain multiple patient-level covariates and interactions. Here, we propose how to derive percentage study weights for such situations, in order to reveal the (otherwise hidden) contribution of each study toward the parameter estimates of interest. We assume that studies are independent, and utilise a decomposition of Fisher's information matrix to decompose the total variance matrix of parameter estimates into study-specific contributions, from which percentage weights are derived. This approach generalises how percentage weights are calculated in a traditional, single parameter meta-analysis model. Application is made to one- and two-stage individual participant data meta-analyses, meta-regression and network (multivariate) meta-analysis of multiple treatments. These reveal percentage study weights toward clinically important estimates, such as summary treatment effects and treatment-covariate interactions, and are especially useful when some studies are potential outliers or at high risk of bias. We also derive percentage study weights toward methodologically interesting measures, such as the magnitude of ecological bias (difference between within-study and across-study associations) and the amount of inconsistency (difference between direct and indirect evidence in a network meta-analysis).
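The decomposition described above can be sketched compactly. This is an illustrative reading of the idea, assuming independent studies with known per-study Fisher information matrices; the function name and example values are invented for the sketch. Since Var(θ̂) = V = (Σᵢ Iᵢ)⁻¹ and V(Σᵢ Iᵢ)V = V, the diagonal of each per-study term V Iᵢ V gives a share of the variance of every parameter, and the shares sum to 100%.

```python
import numpy as np

def percentage_study_weights(info_list):
    # info_list: per-study Fisher information matrices I_i.
    # Var(theta_hat) = V = (sum_i I_i)^(-1), and V (sum_i I_i) V = V,
    # so the diagonals of the per-study terms V I_i V decompose the
    # variance of every parameter; the shares sum to 100%.
    V = np.linalg.inv(sum(info_list))
    shares = np.array([np.diag(V @ I @ V) / np.diag(V) for I in info_list])
    return 100.0 * shares            # rows: studies, columns: parameters

# single-parameter check: reduces to classical inverse-variance weights
infos = [np.array([[1.0 / v]]) for v in (1.0, 2.0, 4.0)]
w = percentage_study_weights(infos)
```

In the single-parameter case this reproduces the familiar inverse-variance percentage weights, which is the generalisation property the abstract emphasises.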
van Diedenhoven, Bastiaan; Ackerman, Andrew S.; Fridlind, Ann M.; Cairns, Brian
2017-01-01
The use of ensemble-average values of aspect ratio and distortion parameter of hexagonal ice prisms for the estimation of ensemble-average scattering asymmetry parameters is evaluated. Using crystal aspect ratios greater than unity generally leads to ensemble-average values of aspect ratio that are inconsistent with the ensemble-average asymmetry parameters. When a definition of aspect ratio is used that limits the aspect ratio to below unity (α≤1) for both hexagonal plates and columns, the effective asymmetry parameters calculated using ensemble-average aspect ratios are generally consistent with ensemble-average asymmetry parameters, especially if aspect ratios are geometrically averaged. Ensemble-average distortion parameters generally also yield effective asymmetry parameters that are largely consistent with ensemble-average asymmetry parameters. In the case of mixtures of plates and columns, it is recommended to geometrically average the α≤1 aspect ratios and to subsequently calculate the effective asymmetry parameter using a column or plate geometry when the contribution by columns to a given mixture’s total projected area is greater or lower than 50%, respectively. In addition, we show that ensemble-average aspect ratios, distortion parameters and asymmetry parameters can generally be retrieved accurately from simulated multi-directional polarization measurements based on mixtures of varying columns and plates. However, such retrievals tend to be somewhat biased toward yielding column-like aspect ratios. Furthermore, generally large retrieval errors can occur for mixtures with approximately equal contributions of columns and plates and for ensembles with strong contributions of thin plates. PMID:28983127
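The recommended aggregation can be stated concretely. A minimal sketch, assuming the α ≤ 1 convention and the geometric averaging described in the abstract (the function name is illustrative):

```python
import math

def effective_aspect_ratio(aspect_ratios):
    # map each crystal to the alpha <= 1 convention (plates and
    # columns treated alike), then average geometrically
    alphas = [a if a <= 1.0 else 1.0 / a for a in aspect_ratios]
    return math.exp(sum(math.log(a) for a in alphas) / len(alphas))
```

Under this convention a column with aspect ratio 2 and a plate with aspect ratio 0.5 are treated as equally aspherical, so their ensemble average is 0.5 rather than an artificial intermediate value.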
Motion Estimation and Compensation Strategies in Dynamic Computerized Tomography
NASA Astrophysics Data System (ADS)
Hahn, Bernadette N.
2017-12-01
A main challenge in computerized tomography consists in imaging moving objects. Temporal changes during the measuring process lead to inconsistent data sets, and applying standard reconstruction techniques causes motion artefacts which can severely impede reliable diagnostics. Therefore, novel reconstruction techniques are required which compensate for the dynamic behavior. This article builds on recent results from a microlocal analysis of the dynamic setting, which enable us to formulate efficient analytic motion compensation algorithms for contour extraction. Since these methods require information about the dynamic behavior, we further introduce a motion estimation approach which determines parameters of affine and certain non-affine deformations directly from measured motion-corrupted Radon-data. Our methods are illustrated with numerical examples for both types of motion.
Commowick, Olivier; Warfield, Simon K
2010-01-01
In order to evaluate the quality of segmentations of an image and assess intra- and inter-expert variability in segmentation performance, an Expectation Maximization (EM) algorithm for Simultaneous Truth And Performance Level Estimation (STAPLE) was recently developed. This algorithm, originally presented for segmentation validation, has since been used for many applications, such as atlas construction and decision fusion. However, the manual delineation of structures of interest is a very time-consuming and burdensome task. Further, as the time required and burden of manual delineation increase, the accuracy of the delineation is decreased. Therefore, it may be desirable to ask the experts to delineate only a reduced number of structures, or the segmentation of all structures by all experts may simply not be achieved. Fusion from data with some structures not segmented by each expert should be carried out in a manner that accounts for the missing information. In other applications, locally inconsistent segmentations may drive the STAPLE algorithm into an undesirable local optimum, leading to misclassifications or misleading expert performance parameters. We present a new algorithm that allows fusion with partial delineation and which can avoid convergence to undesirable local optima in the presence of strongly inconsistent segmentations. The algorithm extends STAPLE by incorporating prior probabilities for the expert performance parameters. This is achieved through a Maximum A Posteriori formulation, where the prior probabilities for the performance parameters are modeled by a beta distribution. We demonstrate that this new algorithm enables dramatically improved fusion from data with partial delineation by each expert in comparison to fusion with STAPLE. PMID:20879379
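The regularisation the abstract describes amounts to replacing a maximum-likelihood ratio in STAPLE's M-step with a posterior mode. A minimal sketch, assuming a Beta(a, b) prior on a single performance parameter; the prior values a = 5, b = 1.5 are invented for illustration:

```python
def map_performance(successes, trials, a=5.0, b=1.5):
    # posterior-mode (MAP) estimate of a rater performance parameter
    # under a Beta(a, b) prior; a, b are illustrative prior values
    return (successes + a - 1.0) / (trials + a + b - 2.0)
```

With trials = 0 (an expert who did not delineate the structure) the estimate falls back to the prior mode (a-1)/(a+b-2) instead of being undefined, which is what makes fusion with partial delineation well-posed and keeps strongly inconsistent raters from dragging the EM iteration into a degenerate optimum.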
NASA Astrophysics Data System (ADS)
Ma, H.
2016-12-01
Land surface parameters from remote sensing observations are critical for monitoring and modeling global climate change and biogeochemical cycles. Current methods for estimating land surface parameters are generally parameter-specific algorithms based on instantaneous physical models, which results in spatial, temporal and physical inconsistencies in current global products. In addition, optical and Thermal Infrared (TIR) remote sensing observations are usually used separately, based on different models, and Middle InfraRed (MIR) observations have received little attention owing to the complexity of a radiometric signal that mixes both reflected and emitted fluxes. In this paper, we propose a unified algorithm for simultaneously retrieving a total of seven land surface parameters, including Leaf Area Index (LAI), Fraction of Absorbed Photosynthetically Active Radiation (FAPAR), land surface albedo, Land Surface Temperature (LST), surface emissivity, and downward and upward longwave radiation, by exploiting remote sensing observations from the visible to the TIR domain based on a common physical Radiative Transfer (RT) model and a data assimilation framework. The coupled PROSPECT-VISIR and 4SAIL RT models were used for canopy reflectance modeling. First, LAI was estimated using a data assimilation method that combines MODIS daily reflectance observations and a phenology model. The estimated LAI values were then input into the RT model to simulate surface spectral emissivity and surface albedo. The background albedo, the transmittance of solar radiation, and the canopy albedo were also calculated to produce FAPAR. Once the spectral emissivities of the seven MODIS MIR to TIR bands were retrieved, LST was estimated from the atmospherically corrected surface radiance using an optimization method.
Finally, the upward longwave radiation was estimated using the retrieved LST, the broadband emissivity (converted from spectral emissivity) and the downward longwave radiation (modeled by MODTRAN). These seven parameters were validated over several representative sites with different biome types, and compared with the MODIS and GLASS products. Results showed that this unified inversion algorithm can retrieve temporally complete and physically consistent land surface parameters with high accuracy.
Seismic azimuthal anisotropy beneath the eastern United States and its geodynamic implications
NASA Astrophysics Data System (ADS)
Yang, Bin B.; Liu, Yunhua; Dahm, Haider; Liu, Kelly H.; Gao, Stephen S.
2017-03-01
Systematic spatial variations of anisotropic characteristics are revealed beneath the eastern U.S. using seismic data recorded between 1988 and 2016 by 785 stations. The resulting fast polarization orientations of the 5613 measurements are generally subparallel to the absolute plate motion (APM) and are inconsistent with the strike of major tectonic features. This inconsistency, together with the results of depth estimation using the spatial coherency of the splitting parameters, suggests a mostly asthenospheric origin of the observed azimuthal anisotropy. The observations can be explained by a combined effect of APM-induced mantle fabric and a flow system deflected horizontally around the edges of the keel of the North American continent. Beneath the southern and northeastern portions of the study area, the E-W keel-deflected flow enhances APM-induced fabric and produces mostly E-W fast orientations with large splitting times, while beneath the southeastern U.S., anisotropy from the N-S oriented flow is weakened by the APM.
Strong gravitational lensing by a Konoplya-Zhidenko rotating non-Kerr compact object
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Shangyun; Chen, Songbai; Jing, Jiliang, E-mail: shangyun_wang@163.com, E-mail: csb3752@hunnu.edu.cn, E-mail: jljing@hunnu.edu.cn
Konoplya and Zhidenko have recently proposed a rotating non-Kerr black hole metric beyond General Relativity and made an estimate of the possible deviations from the Kerr solution with the data of GW150914. Here we study strong gravitational lensing in such a rotating non-Kerr spacetime with an extra deformation parameter. We find that the condition for the existence of horizons is not inconsistent with that of the marginally circular photon orbit. Moreover, the deflection angle of a light ray near a weakly naked singularity covered by the marginally circular orbit diverges logarithmically in the strong-field limit. In the case of a completely naked singularity, the deflection angle near the singularity tends to a certain finite value, whose sign depends on the rotation parameter and the deformation parameter. These properties of strong gravitational lensing differ from those in the Johannsen-Psaltis rotating non-Kerr spacetime and in the Janis-Newman-Winicour spacetime. Modeling the supermassive central object of the Milky Way Galaxy as a Konoplya-Zhidenko rotating non-Kerr compact object, we estimate the numerical values of observables for strong gravitational lensing, including the time delay between two relativistic images.
Accumulator and random-walk models of psychophysical discrimination: a counter-evaluation.
Vickers, D; Smith, P
1985-01-01
In a recent assessment of models of psychophysical discrimination, Heath criticises the accumulator model for its reliance on computer simulation and qualitative evidence, and contrasts it unfavourably with a modified random-walk model, which yields exact predictions, is susceptible to critical test, and is provided with simple parameter-estimation techniques. A counter-evaluation is presented, in which the approximations employed in the modified random-walk analysis are demonstrated to be seriously inaccurate, the resulting parameter estimates to be artefactually determined, and the proposed test not critical. It is pointed out that Heath's specific application of the model is not legitimate, his data treatment inappropriate, and his hypothesis concerning confidence inconsistent with experimental results. Evidence from adaptive performance changes is presented which shows that the necessary assumptions for quantitative analysis in terms of the modified random-walk model are not satisfied, and that the model can be reconciled with data at the qualitative level only by making it virtually indistinguishable from an accumulator process. A procedure for deriving exact predictions for an accumulator process is outlined.
USDA-ARS?s Scientific Manuscript database
The responses of CO2 assimilation to [CO2] (A/Ci) were investigated at two developmental stages (R5 and R6) and in several soybean cultivars grown under two levels of [CO2], the ambient level of 370 µbar versus the elevated level of 550 µbar. The A/Ci data were analyzed and compared using various cu...
Charge exchange avalanche at the cometopause
NASA Astrophysics Data System (ADS)
Gombosi, T. I.
1987-11-01
A sharp transition from a solar wind proton dominated flow to a plasma population primarily consisting of relatively cold cometary heavy ions has been observed at a cometocentric distance of about 160,000 km by the VEGA and GIOTTO missions. This boundary (the cometopause) was thought to be related to charge transfer processes, but its location and thickness are inconsistent with conventionally estimated ion-neutral coupling boundaries. In this paper a two-fluid model is used to investigate the major physical processes at the cometopause. By adopting observed comet Halley parameters the model is able to reproduce the location and the thickness of this charge exchange boundary.
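The conventional charge-transfer estimate the abstract contrasts with can be made quantitative with a toy model. A sketch, assuming a spherically symmetric neutral coma n(r) = Q/(4π v_n r²) and a constant charge-exchange cross-section; the Halley-like values of Q, v_n and σ are illustrative only:

```python
import math

def proton_survival_fraction(r, r_ref, Q=7.0e29, v_n=1.0e3, sigma=2.0e-19):
    # fraction of solar-wind protons surviving charge exchange while
    # moving inward from r_ref to r (metres) through a spherically
    # symmetric neutral coma n(r) = Q / (4 pi v_n r^2);
    # Q [1/s], v_n [m/s], sigma [m^2] are illustrative Halley-like values
    tau = sigma * Q / (4.0 * math.pi * v_n) * (1.0 / r - 1.0 / r_ref)
    return math.exp(-tau)

# survival down to the observed cometopause distance of ~160,000 km
f_cp = proton_survival_fraction(1.6e8, 1.0e9)
```

With these illustrative numbers the attenuation at cometopause distances is modest and gradual, which illustrates why a simple single-fluid charge-transfer estimate does not by itself produce a sharp boundary there.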
EnKF with closed-eye period - bridging intermittent model structural errors in soil hydrology
NASA Astrophysics Data System (ADS)
Bauser, Hannes H.; Jaumann, Stefan; Berg, Daniel; Roth, Kurt
2017-04-01
The representation of soil water movement exposes uncertainties in all model components, namely dynamics, forcing, subscale physics and the state itself. Especially model structural errors in the description of the dynamics are difficult to represent and can lead to an inconsistent estimation of the other components. We address the challenge of a consistent aggregation of information for a manageable specific hydraulic situation: a 1D soil profile with TDR-measured water contents during a time period of less than 2 months. We assess the uncertainties for this situation and detect initial condition, soil hydraulic parameters, small-scale heterogeneity, upper boundary condition, and (during rain events) the local equilibrium assumption by the Richards equation as the most important ones. We employ an iterative Ensemble Kalman Filter (EnKF) with an augmented state. Based on a single rain event, we are able to reduce all uncertainties directly, except for the intermittent violation of the local equilibrium assumption. We detect these times by analyzing the temporal evolution of estimated parameters. By introducing a closed-eye period - during which we do not estimate parameters, but only guide the state based on measurements - we can bridge these times. The introduced closed-eye period ensured constant parameters, suggesting that they resemble the believed true material properties. The closed-eye period improves predictions during periods when the local equilibrium assumption is met, but consequently worsens predictions when the assumption is violated. Such a prediction requires a description of the dynamics during local non-equilibrium phases, which remains an open challenge.
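The closed-eye idea can be expressed directly in the Kalman gain. Below is a minimal sketch of a stochastic EnKF analysis step with an augmented state (state block first, parameter block second); during the closed-eye period the parameter rows of the gain are zeroed, so measurements guide the state while the parameter estimates are frozen. All dimensions and noise levels are illustrative, not from the study.

```python
import numpy as np

def enkf_update(ens, H, y, r_var, n_state, closed_eye=False, rng=None):
    # stochastic EnKF analysis step for an augmented ensemble
    # ens (n_aug, N) = [state rows; parameter rows]; during a
    # closed-eye period the parameter rows of the gain are zeroed,
    # so observations guide the state but freeze the parameters
    rng = np.random.default_rng(0) if rng is None else rng
    n_aug, N = ens.shape
    A = ens - ens.mean(axis=1, keepdims=True)
    HE = H @ ens
    HA = HE - HE.mean(axis=1, keepdims=True)
    S = HA @ HA.T / (N - 1) + r_var * np.eye(H.shape[0])
    K = (A @ HA.T / (N - 1)) @ np.linalg.inv(S)
    if closed_eye:
        K[n_state:, :] = 0.0
    pert = rng.normal(0.0, np.sqrt(r_var), size=(H.shape[0], N))
    return ens + K @ (y[:, None] + pert - HE)

# toy setup: 3 observed water contents plus 2 hydraulic parameters
rng = np.random.default_rng(1)
ens = rng.normal(size=(5, 40))
H = np.hstack([np.eye(3), np.zeros((3, 2))])
y = np.zeros(3)
frozen = enkf_update(ens, H, y, 0.01, n_state=3, closed_eye=True)
updated = enkf_update(ens, H, y, 0.01, n_state=3, closed_eye=False)
```

Zeroing only the parameter rows leaves the state update intact, which is exactly the "guide the state, do not estimate parameters" behavior the abstract uses to bridge intermittent violations of the local equilibrium assumption.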
NASA Astrophysics Data System (ADS)
Liu, B.; McLean, A. D.
1989-08-01
We report the LM-2 helium dimer interaction potential, from helium separations of 1.6 Å to dissociation, obtained by careful convergence studies with respect to configuration space, through a sequence of interacting correlated fragment (ICF) wave functions, and with respect to the primitive Slater-type basis used for orbital expansion. Parameters of the LM-2 potential are re=2.969 Å, rm=2.642 Å, and De=10.94 K, in near complete agreement with those of the best experimental potential of Aziz, McCourt, and Wong [Mol. Phys. 61, 1487 (1987)], which are re=2.963 Å, rm=2.637 Å, and De=10.95 K. The computationally estimated accuracy of each point on the potential is given; at re it is 0.03 K. Extrapolation procedures used to produce the LM-2 potential make use of the orbital basis inconsistency (OBI) and configuration base inconsistency (CBI) adjustments to separated fragment energies when computing the interaction energy. These components of basis set superposition error (BSSE) are given a full discussion.
Development Of ABEC Column For Separation Of Tc-99 From Northstar Dissolved Target Solution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stepinski, Dominique C.; Bennett, Megan E.; Naik, Seema R.
Batch and column breakthrough experiments were performed to determine isotherms and mass-transfer parameters for adsorption of Tc on aqueous biphasic extraction chromatographic (ABEC) sorbent in two solutions: 200 g/L Mo, 5.1 M K +, 1 M OH -, and 0.1 M NO 3 - (Solution A) and 200 g/L Mo, 9.3 M K +, 5 M OH -, and 0.1 M NO 3 - (Solution B). Good agreement was found between the isotherm values obtained by batch and column breakthrough studies for both Solutions A and B. Potassium-pertechnetate intra-particle diffusivity on ABEC resin was estimated by VERSE simulations, and good agreement was found among a series of column-breakthrough experiments at varying flow velocities, column sizes, and technetium concentrations. However, testing of 10 cc cartridges provided by NorthStar with Solutions A and B did not give satisfactory results, as significant Tc breakthrough was observed and ABEC cartridge performance varied widely among experiments. These different experimental results are believed to be due to inconsistent preparation of the ABEC resin prior to packing and/or inconsistent packing.
NASA Technical Reports Server (NTRS)
Treuhaft, Robert N.; Law, Beverly E.; Siqueira, Paul R.
2000-01-01
Parameters describing the vertical structure of forests, for example tree height, height-to-base-of-live-crown, underlying topography, and leaf area density, bear on land-surface, biogeochemical, and climate modeling efforts. Single, fixed-baseline interferometric synthetic aperture radar (INSAR) normalized cross-correlations constitute two observations from which to estimate forest vertical structure parameters: Cross-correlation amplitude and phase. Multialtitude INSAR observations increase the effective number of baselines, potentially enabling the estimation of a larger set of vertical-structure parameters. Polarimetry and polarimetric interferometry can further extend the observation set. This paper describes the first acquisition of multialtitude INSAR for the purpose of estimating the parameters describing a vegetated land surface. These data were collected over ponderosa pine in central Oregon near longitude and latitude -121 37 25 and 44 29 56. The JPL interferometric TOPSAR system was flown at the standard 8-km altitude, and also at 4-km and 2-km altitudes, in a race track. A reference line including the above coordinates was maintained at 35 deg for both the north-east heading and the return southwest heading, at all altitudes. In addition to the three altitudes for interferometry, one line was flown with full zero-baseline polarimetry at the 8-km altitude. A preliminary analysis of part of the data collected suggests that they are consistent with one of two physical models describing the vegetation: 1) a single-layer, randomly oriented forest volume with a very strong ground return or 2) a multilayered randomly oriented volume; a homogeneous, single-layer model with no ground return cannot account for the multialtitude correlation amplitudes. Below, the inconsistency of the data with a single-layer model is demonstrated, followed by analysis scenarios that include either the ground or a layered structure.
The ground returns suggested by this preliminary analysis seem too strong to be plausible, but the parameters describing a two-layer model compare reasonably well with a field-measured probability distribution of tree heights in the area.
Yu-Kang, Tu
2016-12-01
Network meta-analysis for multiple treatment comparisons has been a major development in evidence synthesis methodology. The validity of a network meta-analysis, however, can be threatened by inconsistency in evidence within the network. One particular issue of inconsistency is how to directly evaluate the inconsistency between direct and indirect evidence with regard to the difference in effects between two treatments. A Bayesian node-splitting model was first proposed and a similar frequentist side-splitting model has been put forward recently. Yet, assigning the inconsistency parameter to one or the other of the two treatments or splitting the parameter symmetrically between the two treatments can yield different results when multi-arm trials are involved in the evaluation. We aimed to show that a side-splitting model can be viewed as a special case of the design-by-treatment interaction model, and different parameterizations correspond to different design-by-treatment interactions. We demonstrated how to evaluate the side-splitting model using the arm-based generalized linear mixed model, and an example data set was used to compare results from the arm-based models with those from the contrast-based models. The three parameterizations of side-splitting make slightly different assumptions: the symmetrical method assumes that both treatments in a treatment contrast contribute to inconsistency between direct and indirect evidence, whereas the other two parameterizations assume that only one of the two treatments contributes to this inconsistency. With this understanding in mind, meta-analysts can then make a choice about how to implement the side-splitting method for their analysis. Copyright © 2016 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
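For a single treatment contrast with one independent indirect route, the side-splitting comparison reduces to a few lines. This is a simplified two-source sketch, not the full design-by-treatment interaction model discussed above; it assumes the direct and indirect evidence sources are independent, and all numbers are invented for illustration.

```python
import math

def side_split(d_direct, var_direct, d_ac, var_ac, d_cb, var_cb):
    # direct A-vs-B estimate versus the indirect estimate formed
    # through a common comparator C: d_AB(indirect) = d_AC + d_CB.
    # Returns the inconsistency factor and its z-score, assuming the
    # two evidence sources are independent.
    d_indirect = d_ac + d_cb
    omega = d_direct - d_indirect
    z = omega / math.sqrt(var_direct + var_ac + var_cb)
    return omega, z

omega, z = side_split(0.5, 0.04, 0.3, 0.02, 0.1, 0.02)
```

The complications the abstract addresses arise precisely where this sketch breaks down: when multi-arm trials contribute to both the direct and indirect sources, the independence assumption fails, and how the inconsistency parameter omega is attached to the treatments changes the answer.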
Dose-escalation designs in oncology: ADEPT and the CRM.
Shu, Jianfen; O'Quigley, John
2008-11-20
The ADEPT software package is not a statistical method in its own right as implied by Gerke and Siedentop (Statist. Med. 2008; DOI: 10.1002/sim.3037). ADEPT implements two-parameter CRM models as described in O'Quigley et al. (Biometrics 1990; 46(1):33-48). All of the basic ideas (use of a two-parameter logistic model, use of a two-dimensional prior for the unknown slope and intercept parameters, sequential estimation and subsequent patient allocation based on minimization of some loss function, flexibility to use cohorts instead of one by one inclusion) are strictly identical. The only, and quite trivial, difference arises in the setting of the prior. O'Quigley et al. (Biometrics 1990; 46(1):33-48) used priors having an analytic expression whereas Whitehead and Brunier (Statist. Med. 1995; 14:33-48) use pseudo-data to play the role of the prior. The question of interest is whether two-parameter CRM works as well, or better, than the one-parameter CRM recommended in O'Quigley et al. (Biometrics 1990; 46(1):33-48). Gerke and Siedentop argue that it does. The published literature suggests otherwise. The conclusions of Gerke and Siedentop stem from three highly particular, and somewhat contrived, situations. Unlike one-parameter CRM (Biometrika 1996; 83:395-405; J. Statist. Plann. Inference 2006; 136:1765-1780; Biometrika 2005; 92:863-873), no statistical properties appear to have been studied for two-parameter CRM. In particular, for two-parameter CRM, the parameter estimates are inconsistent. This ought to be a source of major concern to those proposing its use. Worse still, for finite samples the behavior of estimates can be quite wild despite having incorporated the kind of dampening priors discussed by Gerke and Siedentop. An example in which we illustrate this behavior describes a single patient included at level 1 of 6 levels and experiencing a dose limiting toxicity. The subsequent recommendation is to experiment at level 6! 
Such problematic behavior is not common. Even so, we show that the allocation behavior of two-parameter CRM is very much less stable than that of one-parameter CRM.
NASA Astrophysics Data System (ADS)
Malyshkov, S. Y.; Gordeev, V. F.; Polyvach, V. I.; Shtalin, S. G.; Pustovalov, K. N.
2017-04-01
This article describes results from integrated monitoring of climatic and ecological parameters of the atmosphere and the Earth's crust. The share of the lithospheric component in the structure of the Earth's natural pulsed electromagnetic field is estimated. To estimate the lithospheric component, we performed round-the-clock monitoring of background variations of the Earth's natural pulsed electromagnetic field at the experiment location and measured the field under electric shields. Natural materials in a natural environment were used for shielding, specifically lakes with varying water conductivity. The experiment exploits the skin effect, the tendency of electromagnetic wave amplitude to decrease with depth in a conductor. Field data recorded on open terrain, containing both atmospheric and lithospheric components, were compared against data recorded with the atmospheric component attenuated by an electric shield. The experiment demonstrates that the decay of the electromagnetic field originating from thunderstorm discharges corresponds to the decay calculated from Maxwell's equations. In the absence of close lightning strikes, the ratio of the field intensity recorded on open terrain to the shielded field intensity is inconsistent with the ratio calculated for atmospheric sources, which confirms that a lithospheric component is present in the Earth's natural pulsed electromagnetic field.
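The shielding calculation rests on the standard skin-depth formula. A minimal sketch, assuming the good-conductor limit (σ ≫ ωε); the lake-water conductivity and frequency in the example are illustrative, not values from the experiment:

```python
import math

def skin_depth(sigma, f, mu_r=1.0):
    # 1/e attenuation depth of a plane EM wave in a good conductor:
    # delta = sqrt(2 / (mu sigma omega)); sigma in S/m, f in Hz
    mu = mu_r * 4.0e-7 * math.pi
    return math.sqrt(2.0 / (mu * sigma * 2.0 * math.pi * f))

# illustrative: fresh lake water, sigma ~ 0.01 S/m, at 10 kHz
depth = skin_depth(0.01, 1.0e4)
```

At σ = 0.01 S/m and 10 kHz the skin depth is roughly 50 m; more conductive lakes attenuate faster, which is why varying the water conductivity varies the degree of shielding of the atmospheric component.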
Travaglini, Davide; Fattorini, Lorenzo; Barbati, Anna; Bottalico, Francesca; Corona, Piermaria; Ferretti, Marco; Chirici, Gherardo
2013-04-01
A correct characterization of the status and trend of forest condition is essential to support reporting processes at national and international level. An international forest condition monitoring has been implemented in Europe since 1987 under the auspices of the International Co-operative Programme on Assessment and Monitoring of Air Pollution Effects on Forests (ICP Forests). The monitoring is based on harmonized methodologies, with individual countries being responsible for its implementation. Due to inconsistencies and problems in sampling design, however, the ICP Forests network is not able to produce reliable quantitative estimates of forest condition at European and sometimes at country level. This paper proposes (1) a set of requirements for status and change assessment and (2) a harmonized sampling strategy able to provide unbiased and consistent estimators of forest condition parameters and of their changes at both country and European level. Under the assumption that a common definition of forest holds among European countries, monitoring objectives, parameters of concern and accuracy indexes are stated. On the basis of fixed-area plot sampling performed independently in each country, an unbiased and consistent estimator of forest defoliation indexes is obtained at both country and European level, together with conservative estimators of their sampling variance and power in the detection of changes. The strategy adopts a probabilistic sampling scheme based on fixed-area plots selected by means of systematic or stratified schemes. Operative guidelines for its application are provided.
NASA Astrophysics Data System (ADS)
Bieringer, Paul E.; Rodriguez, Luna M.; Vandenberghe, Francois; Hurst, Jonathan G.; Bieberbach, George; Sykes, Ian; Hannan, John R.; Zaragoza, Jake; Fry, Richard N.
2015-12-01
Accurate simulations of the atmospheric transport and dispersion (AT&D) of hazardous airborne materials rely heavily on the source term parameters necessary to characterize the initial release and meteorological conditions that drive the downwind dispersion. In many cases the source parameters are not known and consequently based on rudimentary assumptions. This is particularly true of accidental releases and the intentional releases associated with terrorist incidents. When available, meteorological observations are often not representative of the conditions at the location of the release and the use of these non-representative meteorological conditions can result in significant errors in the hazard assessments downwind of the sensors, even when the other source parameters are accurately characterized. Here, we describe a computationally efficient methodology to characterize both the release source parameters and the low-level winds (eg. winds near the surface) required to produce a refined downwind hazard. This methodology, known as the Variational Iterative Refinement Source Term Estimation (STE) Algorithm (VIRSA), consists of a combination of modeling systems. These systems include a back-trajectory based source inversion method, a forward Gaussian puff dispersion model, a variational refinement algorithm that uses both a simple forward AT&D model that is a surrogate for the more complex Gaussian puff model and a formal adjoint of this surrogate model. The back-trajectory based method is used to calculate a ;first guess; source estimate based on the available observations of the airborne contaminant plume and atmospheric conditions. The variational refinement algorithm is then used to iteratively refine the first guess STE parameters and meteorological variables. The algorithm has been evaluated across a wide range of scenarios of varying complexity. 
It has been shown to improve the estimated source location by several hundred percent (normalized by the distance from the source to the closest sampler) and to improve mass estimates by several orders of magnitude. Furthermore, it is able to operate in scenarios with inconsistencies between the wind and airborne contaminant sensor observations, adjusting the wind to provide a better match between the hazard prediction and the observations.
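The two-stage pattern described above (a coarse "first guess" of the source, then refinement of the source parameters against sampler observations) can be illustrated with a deliberately simplified 1-D sketch. The Gaussian `footprint` surrogate, sampler positions, and source values below are all invented for the demonstration and are not VIRSA's actual puff model:

```python
import math

def footprint(x, x0, q, sigma=5.0):
    # stand-in Gaussian surrogate for a dispersion footprint
    # (sigma is an assumed dispersion width, not a VIRSA quantity)
    return q * math.exp(-((x - x0) ** 2) / (2.0 * sigma ** 2))

samplers = [0.0, 10.0, 20.0, 30.0, 40.0]       # sampler positions (toy)
true_x0, true_q = 17.0, 3.0                    # hypothetical source
obs = [footprint(x, true_x0, true_q) for x in samplers]

# scan candidate locations; for each, the best-fit release mass is
# available in closed form because the model is linear in q
best = (float("inf"), None, None)
for i in range(81):
    x0 = 0.5 * i
    f = [footprint(x, x0, 1.0) for x in samplers]   # unit-mass footprint
    q = sum(o * v for o, v in zip(obs, f)) / sum(v * v for v in f)
    sse = sum((o - q * v) ** 2 for o, v in zip(obs, f))
    if sse < best[0]:
        best = (sse, x0, q)

sse, x0_hat, q_hat = best
```

Because the toy forward model is linear in the release mass, the inner mass fit is closed-form; VIRSA's variational refinement plays the role of this inner solve in a far richer parameter and meteorology space.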
NASA Astrophysics Data System (ADS)
Ito, Shin-ichi; Yoshie, Naoki; Okunishi, Takeshi; Ono, Tsuneo; Okazaki, Yuji; Kuwata, Akira; Hashioka, Taketo; Rose, Kenneth A.; Megrey, Bernard A.; Kishi, Michio J.; Nakamachi, Miwa; Shimizu, Yugo; Kakehi, Shigeho; Saito, Hiroaki; Takahashi, Kazutaka; Tadokoro, Kazuaki; Kusaka, Akira; Kasai, Hiromi
2010-10-01
The Oyashio region in the western North Pacific supports high biological productivity and has been well monitored. We applied the NEMURO (North Pacific Ecosystem Model for Understanding Regional Oceanography) model to simulate the nutrients, phytoplankton, and zooplankton dynamics. Determination of parameter values is very important, yet ad hoc calibration methods are often used. We used the automatic calibration software PEST (model-independent Parameter ESTimation), which has been used previously with NEMURO but in a system without ontogenetic vertical migration of the large zooplankton functional group. Determining the performance of PEST with vertical migration, and obtaining a set of realistic parameter values for the Oyashio, will likely be useful in future applications of NEMURO. Five identical twin simulation experiments were performed with the one-box version of NEMURO. The experiments differed in whether monthly snapshot or averaged state variables were used, in whether state variables were model functional groups or were aggregated (total phytoplankton, small plus large zooplankton), and in whether vertical migration of large zooplankton was included or not. We then applied NEMURO to monthly climatological field data covering 1 year for the Oyashio, and compared model fits and parameter values between PEST-determined estimates and values used in previous applications to the Oyashio region that relied on ad hoc calibration. We substituted the PEST and ad hoc calibrated parameter values into a 3-D version of NEMURO for the western North Pacific, and compared the two sets of spatial maps of chlorophyll-a with satellite-derived data. The identical twin experiments demonstrated that PEST could recover the known model parameter values when vertical migration was included, and that over-fitting can occur as a result of slight differences in the values of the state variables.
PEST recovered known parameter values when using monthly snapshots of aggregated state variables, but estimated a different set of parameters with monthly averaged values. Both sets of parameters resulted in good fits of the model to the simulated data. Disaggregating the variables provided to PEST into functional groups did not solve the over-fitting problem, and including vertical migration seemed to amplify the problem. When we used the climatological field data, simulated values with PEST-estimated parameters were closer to these field data than with the previously determined ad hoc set of parameter values. When these same PEST and ad hoc sets of parameter values were substituted into 3-D-NEMURO (without vertical migration), the PEST-estimated parameter values generated spatial maps that were similar to the satellite data for the Kuroshio Extension during January and March and for the subarctic ocean from May to November. With non-linear problems, such as vertical migration, PEST should be used with caution because parameter estimates can be sensitive to how the data are prepared and to the values used for the searching parameters of PEST. We recommend the usage of PEST, or other parameter optimization methods, to generate first-order parameter estimates for simulating specific systems and for insertion into 2-D and 3-D models. The parameter estimates that are generated are useful, and the inconsistencies between simulated values and the available field data provide valuable information on model behavior and the dynamics of the ecosystem.
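An identical twin experiment of the kind described above can be sketched in a few lines: generate synthetic observations from a model with known parameters, then check whether the calibration search recovers them. The logistic one-box model and the parameter grid below are stand-ins for illustration, not NEMURO or PEST:

```python
def run_model(r, k=10.0, p0=0.5, dt=0.05, n_months=12, steps=20):
    # toy one-box "ecosystem" model (logistic growth);
    # returns monthly snapshot values of the single state variable
    p, snaps = p0, []
    for _ in range(n_months):
        for _ in range(steps):
            p += dt * r * p * (1.0 - p / k)
        snaps.append(p)
    return snaps

# "truth" run with a known growth rate, as in an identical twin experiment
truth = run_model(0.8)

# calibration: scan candidate rates, keep the best least-squares fit
candidates = [i / 100.0 for i in range(10, 201)]
r_hat = min(candidates,
            key=lambda r: sum((a - b) ** 2
                              for a, b in zip(run_model(r), truth)))
```

With noise-free snapshots the known parameter is recovered exactly; the paper's point is that monthly *averaged* (rather than snapshot) observations, or aggregated state variables, can break this exact recovery and yield a different but equally well-fitting parameter set.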
Statistical Methods for Generalized Linear Models with Covariates Subject to Detection Limits.
Bernhardt, Paul W; Wang, Huixia J; Zhang, Daowen
2015-05-01
Censored observations are a common occurrence in biomedical data sets. Although a large amount of research has been devoted to estimation and inference for data with censored responses, very little research has focused on proper statistical procedures when predictors are censored. In this paper, we consider statistical methods for dealing with multiple predictors subject to detection limits within the context of generalized linear models. We investigate and adapt several conventional methods and develop a new multiple imputation approach for analyzing data sets with predictors censored due to detection limits. We establish the consistency and asymptotic normality of the proposed multiple imputation estimator and suggest a computationally simple and consistent variance estimator. We also demonstrate that the conditional mean imputation method often leads to inconsistent estimates in generalized linear models, while several other methods are either computationally intensive or lead to parameter estimates that are biased or more variable compared to the proposed multiple imputation estimator. In an extensive simulation study, we assess the bias and variability of different approaches within the context of a logistic regression model and compare variance estimation methods for the proposed multiple imputation estimator. Lastly, we apply several methods to analyze the data set from a recently-conducted GenIMS study.
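The contrast between conditional-mean imputation and multiple imputation for a predictor censored at a detection limit can be sketched as follows. The latent standard-normal distribution and the detection limit are assumed purely for illustration, not taken from the paper:

```python
import random
import statistics

random.seed(42)
dl = -0.5                                   # assumed detection limit
latent = [random.gauss(0.0, 1.0) for _ in range(2000)]
observed = [x for x in latent if x >= dl]
n_cens = len(latent) - len(observed)        # values known only as < dl

def draw_tail():
    # rejection-sample the (assumed) standard-normal tail below dl
    while True:
        x = random.gauss(0.0, 1.0)
        if x < dl:
            return x

# conditional-mean imputation: one constant for every censored value
tail_mean = statistics.fmean([draw_tail() for _ in range(5000)])
single = observed + [tail_mean] * n_cens
var_single = statistics.pvariance(single)

# multiple imputation: m completed data sets with fresh random draws
m = 5
mi_vars = [statistics.pvariance(observed +
                                [draw_tail() for _ in range(n_cens)])
           for _ in range(m)]
var_mi = statistics.fmean(mi_vars)
```

Replacing every censored value by a single constant shrinks the predictor's variability, which is one route to the inconsistent downstream estimates the paper describes; multiple imputation preserves the spread in the censored region.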
NASA Astrophysics Data System (ADS)
Xiao, Guorui; Mayer, Michael; Heck, Bernhard; Sui, Lifen; Cong, Mingri
2017-04-01
Integer ambiguity resolution (AR) can significantly shorten the convergence time and improve the accuracy of Precise Point Positioning (PPP). Phase fractional cycle biases (FCB) originating from satellites destroy the integer nature of carrier phase ambiguities. To isolate the satellite FCB, observations from a global reference network are required. Firstly, float ambiguities containing FCBs are obtained by PPP processing. Secondly, the least squares method (LSM) is adopted to recover FCBs from all the float ambiguities. Finally, the estimated FCB products can be applied by the user to achieve PPP-AR. During the estimation of FCB, the LSM step can be very time-consuming, considering the large number of observations from hundreds of stations and thousands of epochs. In addition, iterations are required to deal with the one-cycle inconsistency among observations. Since the integer ambiguities are derived by directly rounding float ambiguities, the one-cycle inconsistency arises whenever the fractional parts of float ambiguities exceed the rounding boundary (e.g., 0.5 and -0.5). The iterations of LSM and the large number of observations require a long time to finish the estimation. Consequently, only a sparse global network containing a limited number of stations was processed in former research. In this paper, we propose to isolate the FCB based on a Kalman filter. The large number of observations is handled epoch-by-epoch, which significantly reduces the dimension of the involved matrix and accelerates the computation. In addition, it is also suitable for real-time applications. As for the one-cycle inconsistency, a pre-elimination method is developed to avoid the iteration of the whole process. According to the analysis of the derived satellite FCB products, we find that both wide-lane (WL) and narrow-lane (NL) FCB are very stable over time (e.g., WL FCB over several days and NL FCB over tens of minutes, respectively).
The stability implies that the satellite FCB can be removed by previous estimation. After subtraction of the satellite FCB, the receiver FCB can be determined. Theoretically, the receiver FCBs derived from different satellite observations should be the same for a single station. Thereby, the one-cycle inconsistency among satellites can be detected and eliminated by adjusting the corresponding receiver FCB. Here, stations can be handled individually to obtain "clean" FCB observations. In an experiment, 24 h observations from 200 stations are processed to estimate GPS FCB. The process finishes in one hour using a personal computer. The estimated WL FCB has a good consistency with existing WL FCB products (e.g., CNES, WHU-SGG). All differences are within ± 0.1 cycles, which indicates the correctness of the proposed approach. For NL FCB, all differences are within ± 0.2 cycles. Concerning the NL wavelength (10.7 cm), the slightly worse NL FCB may be ascribed to different PPP processing strategies. The state-based approach of the Kalman filter also allows for a more realistic modeling of stochastic parameters, which will be investigated in future research.
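The core of the FCB recovery step, isolating satellite fractional biases from float ambiguities after the receiver biases are cancelled by differencing against a reference satellite, can be sketched on noise-free simulated data. The network size and bias values below are arbitrary, and the wrap-to-half-cycle step stands in for the one-cycle pre-elimination:

```python
import random

random.seed(1)

def frac(x):
    # wrap a value in cycles into [-0.5, 0.5); this is where the
    # one-cycle rounding inconsistency would otherwise appear
    return (x + 0.5) % 1.0 - 0.5

n_sat, n_sta = 8, 5
sat_fcb = [random.uniform(-0.5, 0.5) for _ in range(n_sat)]
rec_fcb = [random.uniform(-0.5, 0.5) for _ in range(n_sta)]

# float ambiguity = integer + satellite FCB + receiver FCB (noise-free toy)
amb = [[random.randint(-50, 50) + sat_fcb[s] + rec_fcb[i]
        for s in range(n_sat)] for i in range(n_sta)]

# single-differencing against satellite 0 cancels each receiver FCB,
# so satellite FCBs are recoverable relative to the reference satellite
est = []
for s in range(n_sat):
    diffs = [frac(amb[i][s] - amb[i][0]) for i in range(n_sta)]
    est.append(frac(sum(diffs) / n_sta))

truth = [frac(sat_fcb[s] - sat_fcb[0]) for s in range(n_sat)]
```

In the noise-free toy all stations agree exactly; with real data the station-by-station cleaning described in the abstract is what keeps half-cycle jumps from contaminating the average.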
Song, Fujian; Xiong, Tengbin; Parekh-Bhurke, Sheetal; Loke, Yoon K; Sutton, Alex J; Eastwood, Alison J; Holland, Richard; Chen, Yen-Fu; Glenny, Anne-Marie; Deeks, Jonathan J; Altman, Doug G
2011-08-16
To investigate the agreement between direct and indirect comparisons of competing healthcare interventions. Meta-epidemiological study based on a sample of meta-analyses of randomised controlled trials. Data sources were the Cochrane Database of Systematic Reviews and PubMed. Inclusion criteria were systematic reviews that provided sufficient data for both direct comparison and independent indirect comparisons of two interventions on the basis of a common comparator and in which the odds ratio could be used as the outcome statistic. Inconsistency was measured by the difference in the log odds ratio between the direct and indirect methods. The study included 112 independent trial networks (including 1552 trials with 478,775 patients in total) that allowed both direct and indirect comparison of two interventions. Indirect comparison had already been explicitly done in only 13 of the 85 Cochrane reviews included. The inconsistency between the direct and indirect comparison was statistically significant in 16 cases (14%, 95% confidence interval 9% to 22%). The statistically significant inconsistency was associated with fewer trials, subjectively assessed outcomes, and statistically significant effects of treatment in either direct or indirect comparisons. Owing to considerable inconsistency, many (14/39) of the statistically significant effects by direct comparison became non-significant when the direct and indirect estimates were combined. Significant inconsistency between direct and indirect comparisons may be more prevalent than previously observed. Direct and indirect estimates should be combined in mixed treatment comparisons only after adequate assessment of the consistency of the evidence.
Xiong, Tengbin; Parekh-Bhurke, Sheetal; Loke, Yoon K; Sutton, Alex J; Eastwood, Alison J; Holland, Richard; Chen, Yen-Fu; Glenny, Anne-Marie; Deeks, Jonathan J; Altman, Doug G
2011-01-01
Objective To investigate the agreement between direct and indirect comparisons of competing healthcare interventions. Design Meta-epidemiological study based on sample of meta-analyses of randomised controlled trials. Data sources Cochrane Database of Systematic Reviews and PubMed. Inclusion criteria Systematic reviews that provided sufficient data for both direct comparison and independent indirect comparisons of two interventions on the basis of a common comparator and in which the odds ratio could be used as the outcome statistic. Main outcome measure Inconsistency measured by the difference in the log odds ratio between the direct and indirect methods. Results The study included 112 independent trial networks (including 1552 trials with 478 775 patients in total) that allowed both direct and indirect comparison of two interventions. Indirect comparison had already been explicitly done in only 13 of the 85 Cochrane reviews included. The inconsistency between the direct and indirect comparison was statistically significant in 16 cases (14%, 95% confidence interval 9% to 22%). The statistically significant inconsistency was associated with fewer trials, subjectively assessed outcomes, and statistically significant effects of treatment in either direct or indirect comparisons. Owing to considerable inconsistency, many (14/39) of the statistically significant effects by direct comparison became non-significant when the direct and indirect estimates were combined. Conclusions Significant inconsistency between direct and indirect comparisons may be more prevalent than previously observed. Direct and indirect estimates should be combined in mixed treatment comparisons only after adequate assessment of the consistency of the evidence. PMID:21846695
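The adjusted indirect comparison and the inconsistency measure used in this study can be reproduced on a toy network: interventions A and B compared both head-to-head and through a common comparator C. The 2x2 trial counts below are hypothetical, for illustration only:

```python
import math

def log_odds_ratio(a, b, c, d):
    # 2x2 table: (events a, non-events b) vs (events c, non-events d);
    # returns the log odds ratio and its large-sample variance
    return math.log((a * d) / (b * c)), 1.0/a + 1.0/b + 1.0/c + 1.0/d

# hypothetical trial counts
direct, v_dir = log_odds_ratio(30, 70, 20, 80)   # A vs B head-to-head
ac, v_ac = log_odds_ratio(25, 75, 15, 85)        # A vs common comparator C
bc, v_bc = log_odds_ratio(18, 82, 16, 84)        # B vs C

# adjusted indirect comparison: variances add, so precision is lower
indirect, v_ind = ac - bc, v_ac + v_bc

diff = direct - indirect                         # inconsistency estimate
z = diff / math.sqrt(v_dir + v_ind)              # z-test for inconsistency
```

In this made-up network |z| is well below 1.96, so direct and indirect evidence would be judged consistent; the paper's finding is that in about 14% of real networks this test is significant.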
Assessing the quality of life history information in publicly available databases.
Thorson, James T; Cope, Jason M; Patrick, Wesley S
2014-01-01
Single-species life history parameters are central to ecological research and management, including the fields of macro-ecology, fisheries science, and ecosystem modeling. However, there has been little independent evaluation of the precision and accuracy of the life history values in global and publicly available databases. We therefore develop a novel method based on a Bayesian errors-in-variables model that compares database entries with estimates from local experts, and we illustrate this process by assessing the accuracy and precision of entries in FishBase, one of the largest and oldest life history databases. This model distinguishes biases among seven life history parameters, two types of information available in FishBase (i.e., published values and those estimated from other parameters), and two taxa (i.e., bony and cartilaginous fishes) relative to values from regional experts in the United States, while accounting for additional variance caused by sex- and region-specific life history traits. For published values in FishBase, the model identifies a small positive bias in natural mortality and negative bias in maximum age, perhaps caused by unacknowledged mortality caused by fishing. For life history values calculated by FishBase, the model identified large and inconsistent biases. The model also demonstrates greatest precision for body size parameters, decreased precision for values derived from geographically distant populations, and greatest between-sex differences in age at maturity. We recommend that our bias and precision estimates be used in future errors-in-variables models as a prior on measurement errors. This approach is broadly applicable to global databases of life history traits and, if used, will encourage further development and improvements in these databases.
Nonlinear response of the immune system to power-frequency magnetic fields.
Marino, A A; Wolcott, R M; Chervenak, R; Jourd'Heuil, F; Nilsen, E; Frilot, C
2000-09-01
Studies of the effects of power-frequency electromagnetic fields (EMFs) on the immune and other body systems produced positive and negative results, and this pattern was usually interpreted to indicate the absence of real effects. However, if the biological effects of EMFs were governed by nonlinear laws, deterministic responses to fields could occur that were both real and inconsistent, thereby leading to both types of results. The hypothesis of real inconsistent effects due to EMFs was tested by exposing mice to 1 G, 60 Hz for 1-105 days and observing the effect on 20 immune parameters, using flow cytometry and functional assays. The data were evaluated by means of a novel statistical procedure that avoided averaging away oppositely directed changes in different animals, which we perceived to be the problem in some of the earlier EMF studies. The reliability of the procedure was shown using appropriate controls. In three independent experiments involving exposure for 21 or more days, the field altered lymphoid phenotype even though the changes in individual immune parameters were inconsistent. When the data were evaluated using traditional linear statistical methods, no significant difference in any immune parameter was found. We were able to mimic the results by sampling from known chaotic systems, suggesting that deterministic chaos could explain the effect of fields on the immune system. We conclude that exposure to power-frequency fields produced changes in the immune system that were both real and inconsistent.
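The "real but inconsistent" behaviour attributed to deterministic chaos can be illustrated with the logistic map, a standard chaotic system (the parameter r = 3.9 and the initial states below are assumed for the demonstration): two exposures that differ imperceptibly at the start produce responses that soon differ substantially, so replicate experiments need not agree even though the dynamics are fully deterministic.

```python
def logistic_map(x, r=3.9):
    # logistic map in its chaotic regime (r = 3.9 assumed)
    return r * x * (1.0 - x)

a, b = 0.2, 0.2 + 1e-9      # two nearly identical "initial conditions"
gap = 0.0
for step in range(120):
    a, b = logistic_map(a), logistic_map(b)
    if step >= 40:           # after the tiny difference has been amplified
        gap = max(gap, abs(a - b))
```

The initial separation of 1e-9 grows exponentially, so by the sampling window the two trajectories are effectively unrelated, mimicking oppositely directed changes in otherwise identical animals.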
OBJECTIVES: Estimating gestational age is usually based on date of last menstrual period (LMP) or clinical estimation (CE); both approaches introduce potential bias. Differences in methods of estimation may lead to misclassification and inconsistencies in risk estimates, particu...
National scale biomass estimators for United States tree species
Jennifer C. Jenkins; David C. Chojnacky; Linda S. Heath; Richard A. Birdsey
2003-01-01
Estimates of national-scale forest carbon (C) stocks and fluxes are typically based on allometric regression equations developed using dimensional analysis techniques. However, the literature is inconsistent and incomplete with respect to large-scale forest C estimation. We compiled all available diameter-based allometric regression equations for estimating total...
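Diameter-based allometric equations of the kind compiled here are typically fit by ordinary least squares on log-transformed data, ln(biomass) = b0 + b1 ln(dbh). A minimal sketch with invented coefficients (not values from the compilation):

```python
import math

# hypothetical "true" coefficients of the ln-ln allometric form
b0_true, b1_true = -2.0, 2.4
dbh = [5.0, 10.0, 20.0, 40.0, 60.0]              # tree diameters [cm]
ln_d = [math.log(x) for x in dbh]
ln_m = [b0_true + b1_true * v for v in ln_d]     # noise-free log biomass

# closed-form simple linear regression on the log-transformed data
n = len(dbh)
xbar = sum(ln_d) / n
ybar = sum(ln_m) / n
b1 = (sum((x - xbar) * (y - ybar) for x, y in zip(ln_d, ln_m))
      / sum((x - xbar) ** 2 for x in ln_d))
b0 = ybar - b1 * xbar
```

Back-transforming predictions from the log scale requires a bias correction in practice; the point here is only the dimensional-analysis fitting form the abstract refers to.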
Distributed Multisensor Data Fusion under Unknown Correlation and Data Inconsistency
Abu Bakr, Muhammad; Lee, Sukhan
2017-01-01
The paradigm of multisensor data fusion has evolved from a centralized architecture to a decentralized or distributed architecture along with the advancement in sensor and communication technologies. These days, distributed state estimation and data fusion have been widely explored in diverse fields of engineering and control due to their superior performance over the centralized approach in terms of flexibility, robustness to failure and cost effectiveness in infrastructure and communication. However, distributed multisensor data fusion is not without technical challenges to overcome: namely, dealing with cross-correlation and inconsistency among state estimates and sensor data. In this paper, we review the key theories and methodologies of distributed multisensor data fusion available to date with a specific focus on handling unknown correlation and data inconsistency. We aim at providing readers with a unifying view out of individual theories and methodologies by presenting a formal analysis of their implications. Finally, several directions of future research are highlighted. PMID:29077035
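One widely used tool for the unknown-correlation problem surveyed here is covariance intersection, which fuses two estimates without assuming they are independent. A scalar sketch with hypothetical estimates (in the review's setting these would be matrix-valued):

```python
def covariance_intersection(xa, pa, xb, pb, w=0.5):
    # scalar covariance intersection: fuse estimates (xa, pa) and (xb, pb)
    # whose cross-correlation is unknown; w in [0, 1] weights the inverses
    p_inv = w / pa + (1.0 - w) / pb
    p = 1.0 / p_inv
    x = p * (w * xa / pa + (1.0 - w) * xb / pb)
    return x, p

# two hypothetical estimates of the same scalar state
x_fused, p_fused = covariance_intersection(0.5, 2.0, 1.4, 1.0)
```

Note that for w = 0.5 the fused variance stays between the two input variances instead of dropping below both (as naive independent fusion would claim); this conservatism is what keeps the result consistent when the estimates share unknown common information. In practice w is chosen to minimize the fused variance, or the trace or determinant in the matrix case.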
Kotasidis, F A; Mehranian, A; Zaidi, H
2016-05-07
Kinetic parameter estimation in dynamic PET suffers from reduced accuracy and precision when parametric maps are estimated using kinetic modelling following image reconstruction of the dynamic data. Direct approaches to parameter estimation attempt to directly estimate the kinetic parameters from the measured dynamic data within a unified framework. Such image reconstruction methods have been shown to generate parametric maps of improved precision and accuracy in dynamic PET. However, due to the interleaving between the tomographic and kinetic modelling steps, any tomographic or kinetic modelling errors in certain regions or frames, tend to spatially or temporally propagate. This results in biased kinetic parameters and thus limits the benefits of such direct methods. Kinetic modelling errors originate from the inability to construct a common single kinetic model for the entire field-of-view, and such errors in erroneously modelled regions could spatially propagate. Adaptive models have been used within 4D image reconstruction to mitigate the problem, though they are complex and difficult to optimize. Tomographic errors in dynamic imaging on the other hand, can originate from involuntary patient motion between dynamic frames, as well as from emission/transmission mismatch. Motion correction schemes can be used, however, if residual errors exist or motion correction is not included in the study protocol, errors in the affected dynamic frames could potentially propagate either temporally, to other frames during the kinetic modelling step or spatially, during the tomographic step. In this work, we demonstrate a new strategy to minimize such error propagation in direct 4D image reconstruction, focusing on the tomographic step rather than the kinetic modelling step, by incorporating time-of-flight (TOF) within a direct 4D reconstruction framework. 
Using ever improving TOF resolutions (580 ps, 440 ps, 300 ps and 160 ps), we demonstrate that direct 4D TOF image reconstruction can substantially prevent kinetic parameter error propagation either from erroneous kinetic modelling, inter-frame motion or emission/transmission mismatch. Furthermore, we demonstrate the benefits of TOF in parameter estimation when conventional post-reconstruction (3D) methods are used and compare the potential improvements to direct 4D methods. Further improvements could possibly be achieved in the future by combining TOF direct 4D image reconstruction with adaptive kinetic models and inter-frame motion correction schemes.
NASA Astrophysics Data System (ADS)
Kotasidis, F. A.; Mehranian, A.; Zaidi, H.
2016-05-01
Kinetic parameter estimation in dynamic PET suffers from reduced accuracy and precision when parametric maps are estimated using kinetic modelling following image reconstruction of the dynamic data. Direct approaches to parameter estimation attempt to directly estimate the kinetic parameters from the measured dynamic data within a unified framework. Such image reconstruction methods have been shown to generate parametric maps of improved precision and accuracy in dynamic PET. However, due to the interleaving between the tomographic and kinetic modelling steps, any tomographic or kinetic modelling errors in certain regions or frames, tend to spatially or temporally propagate. This results in biased kinetic parameters and thus limits the benefits of such direct methods. Kinetic modelling errors originate from the inability to construct a common single kinetic model for the entire field-of-view, and such errors in erroneously modelled regions could spatially propagate. Adaptive models have been used within 4D image reconstruction to mitigate the problem, though they are complex and difficult to optimize. Tomographic errors in dynamic imaging on the other hand, can originate from involuntary patient motion between dynamic frames, as well as from emission/transmission mismatch. Motion correction schemes can be used, however, if residual errors exist or motion correction is not included in the study protocol, errors in the affected dynamic frames could potentially propagate either temporally, to other frames during the kinetic modelling step or spatially, during the tomographic step. In this work, we demonstrate a new strategy to minimize such error propagation in direct 4D image reconstruction, focusing on the tomographic step rather than the kinetic modelling step, by incorporating time-of-flight (TOF) within a direct 4D reconstruction framework. 
Using ever improving TOF resolutions (580 ps, 440 ps, 300 ps and 160 ps), we demonstrate that direct 4D TOF image reconstruction can substantially prevent kinetic parameter error propagation either from erroneous kinetic modelling, inter-frame motion or emission/transmission mismatch. Furthermore, we demonstrate the benefits of TOF in parameter estimation when conventional post-reconstruction (3D) methods are used and compare the potential improvements to direct 4D methods. Further improvements could possibly be achieved in the future by combining TOF direct 4D image reconstruction with adaptive kinetic models and inter-frame motion correction schemes.
A Test of General Relativity with MESSENGER Mission Data
NASA Astrophysics Data System (ADS)
Genova, A.; Mazarico, E.; Goossens, S. J.; Lemoine, F. G.; Neumann, G. A.; Nicholas, J. B.; Rowlands, D. D.; Smith, D. E.; Zuber, M. T.; Solomon, S. C.
2016-12-01
The MErcury Surface, Space ENvironment, GEochemistry, and Ranging (MESSENGER) spacecraft initiated collection of scientific data from the innermost planet during its first flyby of Mercury in January 2008. After two additional Mercury flybys, MESSENGER was inserted into orbit around Mercury on 18 March 2011 and operated for more than four Earth years through 30 April 2015. Data acquired during the flyby and orbital phases have provided crucial information on the formation and evolution of Mercury. The Mercury Laser Altimeter (MLA) and the radio science system, for example, obtained geodetic observations of the topography, gravity field, orientation, and tides of Mercury, which helped constrain its surface and deep interior structure. X-band radio tracking data collected by the NASA Deep Space Network (DSN) allowed the determination of Mercury's gravity field to spherical harmonic degree and order 100, as well as refinement of the planet's obliquity and estimation of the tidal Love number k2. These geophysical parameters are derived from the range-rate observables that measure precisely the motion of the spacecraft in orbit around the planet. However, the DSN stations acquired two other kinds of radio tracking data, range and delta-differential one-way ranging, which also provided precise measurements of Mercury's ephemeris. The proximity of Mercury's orbit to the Sun leads to a significant perihelion precession, which was used by Einstein as confirmation of general relativity (GR) because of its inconsistency with the effects predicted from classical Newtonian theory. MESSENGER data allow the estimation of the GR parameterized post-Newtonian (PPN) coefficients γ and β. Furthermore, determination of Mercury's orbit also allows estimation of the gravitational parameter (GM) and the flattening (J2) of the Sun. We modified our orbit determination software, NASA GSFC's GEODYN II, to enable simultaneous orbit integration of both MESSENGER and the planet Mercury. 
The combined estimation of both orbits leads to a more accurate estimation of Mercury's gravity field, orientation, and tides. Results for these geophysical parameters, GM and J2 for the Sun, and the PPN parameters constitute updates for all of these quantities.
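The relativistic perihelion advance underlying the PPN test can be checked against the standard closed-form rate, (2 + 2γ − β)/3 × 6πGM/(c²a(1−e²)) per orbit; for γ = β = 1, Mercury's value comes out near the familiar 43 arcseconds per century. The constants below are standard published values:

```python
import math

GM_SUN = 1.32712440018e20   # heliocentric gravitational constant [m^3/s^2]
C = 299792458.0             # speed of light [m/s]
A = 5.7909e10               # Mercury semi-major axis [m]
E = 0.2056                  # Mercury orbital eccentricity
PERIOD_DAYS = 87.969        # Mercury orbital period [days]

def precession_arcsec_per_century(gamma=1.0, beta=1.0):
    # PPN perihelion advance; gamma = beta = 1 recovers general relativity
    per_orbit = ((2.0 + 2.0 * gamma - beta) / 3.0
                 * 6.0 * math.pi * GM_SUN / (C**2 * A * (1.0 - E**2)))
    orbits_per_century = 36525.0 / PERIOD_DAYS
    return per_orbit * orbits_per_century * (180.0 / math.pi) * 3600.0

rate_gr = precession_arcsec_per_century()   # ~43 arcsec per century
```

Deviations of γ and β from unity scale the rate through the (2 + 2γ − β)/3 factor, which is what makes precise ephemeris fits such as MESSENGER's sensitive to the PPN parameters.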
Sou, Julie; Shannon, Kate; Li, Jane; Nguyen, Paul; Strathdee, Steffanie A; Shoveller, Jean; Goldenberg, Shira M
2015-06-01
Migrant women in sex work experience unique risks and protective factors related to their sexual health. Given the dearth of knowledge in high-income countries, we explored factors associated with inconsistent condom use by clients among migrant female sex workers over time in Vancouver, BC. Questionnaire and HIV/sexually transmitted infection testing data from a longitudinal cohort, An Evaluation of Sex Workers Health Access, were collected from 2010 to 2013. Logistic regression using generalized estimating equations was used to model correlates of inconsistent condom use by clients among international migrant sex workers over a 3-year study period. Of 685 participants, analyses were restricted to 182 (27%) international migrants who primarily originated from China. In multivariate generalized estimating equations analyses, difficulty accessing condoms (adjusted odds ratio [AOR], 3.76; 95% confidence interval [CI], 1.13-12.47) independently correlated with increased odds of inconsistent condom use by clients. Servicing clients in indoor sex work establishments (e.g., massage parlors) (AOR, 0.34; 95% CI, 0.15-0.77), and high school attainment (AOR, 0.22; 95% CI, 0.09-0.50) had independent protective effects on the odds of inconsistent condom use by clients. Findings of this longitudinal study highlight the persistent challenges faced by migrant sex workers in terms of accessing and using condoms. Migrant sex workers who experienced difficulty in accessing condoms were more than 3 times as likely to report inconsistent condom use by clients. Laws, policies, and programs promoting access to safer, decriminalized indoor work environments remain urgently needed to promote health, safety, and human rights for migrant workers in the sex industry.
to do so, and (5) three distinct versions of the problem of estimating component reliability from system failure-time data are treated, each resulting in inconsistent estimators with asymptotically normal distributions.
Extension of D-H parameter method to hybrid manipulators used in robot-assisted surgery.
Singh, Amanpreet; Singla, Ashish; Soni, Sanjeev
2015-10-01
The main focus of this work is to extend the applicability of D-H parameter method to develop a kinematic model of a hybrid manipulator. A hybrid manipulator is a combination of open- and closed-loop chains and contains planar and spatial links. It has been found in the literature that D-H parameter method leads to ambiguities, when dealing with closed-loop chains. In this work, it has been observed that the D-H parameter method, when applied to a hybrid manipulator, results in an orientational inconsistency, because of which the method cannot be used to develop the kinematic model. In this article, the concept of dummy frames is proposed to resolve the orientational inconsistency and to develop the kinematic model of a hybrid manipulator. Moreover, the prototype of 7-degree-of-freedom hybrid manipulator, known as a surgeon-side manipulator to assist the surgeon during a medical surgery, is also developed to validate the kinematic model derived in this work. © IMechE 2015.
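The standard D-H link transform, the building block this article extends to hybrid chains, is constructed directly from the four parameters (a, α, d, θ). A minimal sketch with a hypothetical two-link planar arm (link lengths and joint angles are assumed):

```python
import math

def dh_transform(a, alpha, d, theta):
    # classic D-H homogeneous transform for one link (4x4, row-major):
    # Rot_z(theta) * Trans_z(d) * Trans_x(a) * Rot_x(alpha)
    ct, st = math.cos(theta), math.sin(theta)
    ca, sa = math.cos(alpha), math.sin(alpha)
    return [[ct, -st * ca,  st * sa, a * ct],
            [st,  ct * ca, -ct * sa, a * st],
            [0.0,      sa,       ca,      d],
            [0.0,     0.0,      0.0,    1.0]]

def matmul(A, B):
    # 4x4 matrix product for chaining link transforms
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# two-link planar arm: lengths 1 and 1, joint angles 0 and pi/2
T = matmul(dh_transform(1.0, 0.0, 0.0, 0.0),
           dh_transform(1.0, 0.0, 0.0, math.pi / 2))
x, y, z = T[0][3], T[1][3], T[2][3]   # end-effector position: (1, 1, 0)
```

For open chains this composition is unambiguous; the orientational inconsistency the authors address arises when the same bookkeeping is applied across the closed-loop portion of a hybrid manipulator, which their dummy frames resolve.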
NASA Astrophysics Data System (ADS)
Mow, M.; Zbijewski, W.; Sisniega, A.; Xu, J.; Dang, H.; Stayman, J. W.; Wang, X.; Foos, D. H.; Koliatsos, V.; Aygun, N.; Siewerdsen, J. H.
2017-03-01
Purpose: To improve the timely detection and treatment of intracranial hemorrhage or ischemic stroke, recent efforts include the development of cone-beam CT (CBCT) systems for perfusion imaging and new approaches to estimate perfusion parameters despite slow rotation speeds compared to multi-detector CT (MDCT) systems. This work describes development of a brain perfusion CBCT method using a reconstruction of difference (RoD) approach to enable perfusion imaging on a newly developed CBCT head scanner prototype. Methods: A new reconstruction approach using RoD with a penalized-likelihood framework was developed to image the temporal dynamics of vascular enhancement. A digital perfusion simulation was developed to give a realistic representation of brain anatomy, artifacts, noise, scanner characteristics, and hemodynamic properties. This simulation includes a digital brain phantom, time-attenuation curves and noise parameters, a novel forward projection method for improved computational efficiency, and perfusion parameter calculation. Results: Our results show the feasibility of estimating perfusion parameters from a set of images reconstructed from slow scans, sparse data sets, and arc length scans as short as 60 degrees. The RoD framework significantly reduces noise and time-varying artifacts from inconsistent projections. Proper regularization and the use of overlapping reconstructed arcs can potentially further decrease bias and increase temporal resolution, respectively. Conclusions: A digital brain perfusion simulation with RoD imaging approach has been developed and supports the feasibility of using a CBCT head scanner for perfusion imaging. Future work will include testing with data acquired using a 3D-printed perfusion phantom, and translation to preclinical and clinical studies.
Empirical evidence about inconsistency among studies in a pair‐wise meta‐analysis
Turner, Rebecca M.; Higgins, Julian P. T.
2015-01-01
This paper investigates how inconsistency (as measured by the I2 statistic) among studies in a meta‐analysis may differ, according to the type of outcome data and effect measure. We used hierarchical models to analyse data from 3873 binary, 5132 continuous and 880 mixed outcome meta‐analyses within the Cochrane Database of Systematic Reviews. Predictive distributions for inconsistency expected in future meta‐analyses were obtained, which can inform priors for between‐study variance. Inconsistency estimates were highest on average for binary outcome meta‐analyses of risk differences and continuous outcome meta‐analyses. For a planned binary outcome meta‐analysis in a general research setting, the predictive distribution for inconsistency among log odds ratios had median 22% and 95% CI: 12% to 39%. For a continuous outcome meta‐analysis, the predictive distribution for inconsistency among standardized mean differences had median 40% and 95% CI: 15% to 73%. Levels of inconsistency were similar for binary data measured by log odds ratios and log relative risks. Fitted distributions for inconsistency expected in continuous outcome meta‐analyses using mean differences were almost identical to those using standardized mean differences. The empirical evidence on inconsistency gives guidance on which outcome measures are most likely to be consistent in particular circumstances and facilitates Bayesian meta‐analysis with an informative prior for heterogeneity. © 2015 The Authors. Research Synthesis Methods published by John Wiley & Sons, Ltd. PMID:26679486
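The I2 statistic analysed in this paper is computed from Cochran's Q. A minimal sketch with made-up study estimates (log odds ratios and within-study variances are invented for the demonstration):

```python
def i_squared(estimates, variances):
    # Cochran's Q and Higgins' I^2 (in percent) for k study estimates
    w = [1.0 / v for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w, estimates)) / sum(w)
    q = sum(wi * (yi - pooled) ** 2 for wi, yi in zip(w, estimates))
    k = len(estimates)
    if q <= 0.0:
        return 0.0
    return max(0.0, (q - (k - 1)) / q) * 100.0

# hypothetical meta-analyses: one heterogeneous, one homogeneous
high = i_squared([0.1, 0.5, 0.9, 1.3], [0.01] * 4)
low = i_squared([0.50, 0.52, 0.48, 0.51], [0.01] * 4)
```

I2 is the fraction of total variability attributable to between-study heterogeneity rather than chance; the truncation at zero is why many small meta-analyses report I2 = 0%.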
ERIC Educational Resources Information Center
Zu, Jiyun; Yuan, Ke-Hai
2012-01-01
In the nonequivalent groups with anchor test (NEAT) design, the standard error of linear observed-score equating is commonly estimated by an estimator derived assuming multivariate normality. However, real data are seldom normally distributed, causing this normal estimator to be inconsistent. A general estimator, which does not rely on the…
NASA Astrophysics Data System (ADS)
Su, Yanzhao; Hu, Minghui; Su, Ling; Qin, Datong; Zhang, Tong; Fu, Chunyun
2018-07-01
The fuel economy of hybrid electric vehicles (HEVs) can be effectively improved by mode transition (MT). However, for a power-split powertrain whose power-split transmission is directly connected to the engine, engine ripple torque (ERT), inconsistent dynamic characteristics (IDC) of the engine and motors, model estimation inaccuracies (MEI), and system parameter uncertainties (SPU) can cause jerk and vibration of the transmission system during the MT process, which reduce driving comfort and the service life of the drive parts. To tackle these problems, a dynamic coordinated control strategy (DCCS) is proposed, comprising a staged engine torque feedforward and feedback estimation (ETFBC) and an active damping feedback compensation (ADBC) based on drive shaft torque estimation (DSTE); its effectiveness is verified using a plant model. Firstly, the powertrain plant model is established, and the MT process and its problems are analyzed. Secondly, considering the characteristics of the engine torque estimation (ETE) model before and after engine ignition, a motor torque compensation control based on staged ERT estimation is developed. Then, considering the MEI, SPU and load changes, an ADBC based on a real-time nonlinear reduced-order robust observer for the DSTE is designed. Finally, simulation results show that the proposed DCCS can effectively improve driving comfort.
NASA Astrophysics Data System (ADS)
Zhang, Xu; Wang, Yujie; Liu, Chang; Chen, Zonghai
2018-02-01
An accurate battery pack state of health (SOH) estimation is important to characterize the dynamic responses of the battery pack and to ensure that the battery works with safety and reliability. However, differences in cell discharge/charge characteristics and working conditions within a battery pack make pack-level SOH estimation difficult. In this paper, the battery pack SOH is defined as the change of the battery pack's maximum energy storage. It contains all the cells' information, including battery capacity, the relationship between state of charge (SOC) and open circuit voltage (OCV), and battery inconsistency. To predict the battery pack SOH, a particle swarm optimization-genetic algorithm is applied to identify the battery pack model parameters. Based on the results, a particle filter is employed for battery SOC and OCV estimation to mitigate the influence of terminal voltage measurement noise and current drift. Moreover, a recursive least squares method is used to update the cells' capacity. Finally, the proposed method is verified with the profiles of the New European Driving Cycle and dynamic test profiles. The experimental results indicate that the proposed method can estimate the battery states with high accuracy for actual operation. In addition, the factors affecting the change of SOH are analyzed.
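The recursive least squares capacity update mentioned above can be sketched as follows. This is an illustrative stand-in, not the paper's formulation: the regression model delta_SOC = charge/C (so the tracked parameter is 1/C), the forgetting factor, and all numbers are simplifying assumptions.

```python
class CapacityRLS:
    """Recursive least squares with a forgetting factor for tracking a
    cell capacity C (Ah), assuming the model delta_SOC = (1/C) * charge."""

    def __init__(self, c0=2.5, lam=0.99):
        self.theta = 1.0 / c0   # initial guess for 1/C
        self.p = 1.0            # scalar parameter covariance
        self.lam = lam          # forgetting factor (discounts old data)

    def update(self, charge_ah, delta_soc):
        x = charge_ah
        k = self.p * x / (self.lam + x * self.p * x)    # gain
        self.theta += k * (delta_soc - x * self.theta)  # innovation step
        self.p = (self.p - k * x * self.p) / self.lam
        return 1.0 / self.theta                         # capacity estimate
```

Feeding consistent (charge, delta_SOC) pairs from a cell of true capacity 2.0 Ah drives the estimate toward 2.0, while the forgetting factor lets the estimate follow slow capacity fade.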
Mariel, Petr; Hoyos, David; Artabe, Alaitz; Guevara, C Angelo
2018-08-15
Endogeneity is an often neglected issue in empirical applications of discrete choice modelling despite its severe consequences in terms of inconsistent parameter estimation and biased welfare measures. This article analyses the performance of the multiple indicator solution method to deal with endogeneity arising from omitted explanatory variables in discrete choice models for environmental valuation. We also propose and illustrate a factor analysis procedure for the selection of the indicators in practice. Additionally, the performance of this method is compared with the recently proposed hybrid choice modelling framework. In an empirical application we find that the multiple indicator solution method and the hybrid model approach provide similar results in terms of welfare estimates, although the multiple indicator solution method is more parsimonious and notably easier to implement. The empirical results open a path to explore the performance of this method when endogeneity is thought to have a different cause or under a different set of indicators. Copyright © 2018 Elsevier B.V. All rights reserved.
Macher, Jan-Niklas; Rozenberg, Andrey; Pauls, Steffen U; Tollrian, Ralph; Wagner, Rüdiger; Leese, Florian
2015-01-01
Repeated Quaternary glaciations have significantly shaped the present distribution and diversity of several European species in aquatic and terrestrial habitats. To study the phylogeography of freshwater invertebrates, patterns of intraspecific variation have been examined primarily using mitochondrial DNA markers that may yield results unrepresentative of the true species history. Here, population genetic parameters were inferred for a montane aquatic caddisfly, Thremma gallicum, by sequencing a 658-bp fragment of the mitochondrial CO1 gene, and 12,514 nuclear RAD loci. T. gallicum has a highly disjunct distribution in southern and central Europe, with known populations in the Cantabrian Mountains, Pyrenees, Massif Central, and Black Forest. Both datasets represented rangewide sampling of T. gallicum. For the CO1 dataset, this included 352 specimens from 26 populations, and for the RAD dataset, 17 specimens from eight populations. We tested 20 competing phylogeographic scenarios using approximate Bayesian computation (ABC) and estimated genetic diversity patterns. Support for phylogeographic scenarios and diversity estimates differed between datasets with the RAD data favouring a southern origin of extant populations and indicating the Cantabrian Mountains and Massif Central populations to represent highly diverse populations as compared with the Pyrenees and Black Forest populations. The CO1 data supported a vicariance scenario (north–south) and yielded inconsistent diversity estimates. Permutation tests suggest that a few hundred polymorphic RAD SNPs are necessary for reliable parameter estimates. Our results highlight the potential of RAD and ABC-based hypothesis testing to complement phylogeographic studies on non-model species. PMID:25691988
Zeng, T; Zhang, H; Liu, J; Chen, L; Tian, Y; Shen, J; Lu, L
2018-03-01
The objective of this study was to estimate genetic parameters for feed efficiency and relevant traits in 2 laying duck breeds, and to determine the relationship of residual feed intake (RFI) with feed efficiency and egg quality traits. Phenotypic records on 3,000 female laying ducks (1,500 Shaoxing ducks and 1,500 Jinyun ducks) from a random mating population were used to estimate genetic parameters for RFI, feed conversion ratio (FCR), feed intake (FI), BW, BW gain (BWG), and egg mass laid (EML) at 42 to 46 wk of age. The heritability estimates for EML, FCR, FI, and RFI were 0.22, 0.19, 0.22, and 0.27 in Shaoxing ducks and 0.14, 0.19, 0.24, and 0.24 in Jinyun ducks, respectively. RFI showed high and positive genetic correlations with FCR (0.47 in Shaoxing ducks and 0.63 in Jinyun ducks) and FI (0.79 in Shaoxing ducks and 0.86 in Jinyun ducks). No correlations were found between RFI and BW, BWG, or EML at either the genetic or the phenotypic level. FCR was strongly and negatively correlated with EML (-0.81 and -0.68) but inconsistently correlated with FI (0.02 and 0.17), suggesting that EML was the main influence on FCR. In addition, no significant differences were found between low RFI (LRFI) and high RFI (HRFI) ducks in egg shape index, shell thickness, shell strength, yolk color, albumen height, or Haugh unit (HU). The results indicate that selection for LRFI could improve feed efficiency and reduce FI without significant changes in EML or egg quality.
Estimating mountain basin-mean precipitation from streamflow using Bayesian inference
NASA Astrophysics Data System (ADS)
Henn, Brian; Clark, Martyn P.; Kavetski, Dmitri; Lundquist, Jessica D.
2015-10-01
Estimating basin-mean precipitation in complex terrain is difficult due to uncertainty in the topographical representativeness of precipitation gauges relative to the basin. To address this issue, we use Bayesian methodology coupled with a multimodel framework to infer basin-mean precipitation from streamflow observations, and we apply this approach to snow-dominated basins in the Sierra Nevada of California. Using streamflow observations, forcing data from lower-elevation stations, the Bayesian Total Error Analysis (BATEA) methodology and the Framework for Understanding Structural Errors (FUSE), we infer basin-mean precipitation, and compare it to basin-mean precipitation estimated using topographically informed interpolation from gauges (PRISM, the Parameter-elevation Regression on Independent Slopes Model). The BATEA-inferred spatial patterns of precipitation show agreement with PRISM in terms of the rank of basins from wet to dry but differ in absolute values. In some of the basins, these differences may reflect biases in PRISM, because some implied PRISM runoff ratios may be inconsistent with the regional climate. We also infer annual time series of basin precipitation using a two-step calibration approach. Assessment of the precision and robustness of the BATEA approach suggests that uncertainty in the BATEA-inferred precipitation is primarily related to uncertainties in hydrologic model structure. Despite these limitations, time series of inferred annual precipitation under different model and parameter assumptions are strongly correlated with one another, suggesting that this approach is capable of resolving year-to-year variability in basin-mean precipitation.
Empirical evidence about inconsistency among studies in a pair-wise meta-analysis.
Rhodes, Kirsty M; Turner, Rebecca M; Higgins, Julian P T
2016-12-01
This paper investigates how inconsistency (as measured by the I² statistic) among studies in a meta-analysis may differ, according to the type of outcome data and effect measure. We used hierarchical models to analyse data from 3873 binary, 5132 continuous and 880 mixed outcome meta-analyses within the Cochrane Database of Systematic Reviews. Predictive distributions for inconsistency expected in future meta-analyses were obtained, which can inform priors for between-study variance. Inconsistency estimates were highest on average for binary outcome meta-analyses of risk differences and continuous outcome meta-analyses. For a planned binary outcome meta-analysis in a general research setting, the predictive distribution for inconsistency among log odds ratios had median 22% and 95% CI: 12% to 39%. For a continuous outcome meta-analysis, the predictive distribution for inconsistency among standardized mean differences had median 40% and 95% CI: 15% to 73%. Levels of inconsistency were similar for binary data measured by log odds ratios and log relative risks. Fitted distributions for inconsistency expected in continuous outcome meta-analyses using mean differences were almost identical to those using standardized mean differences. The empirical evidence on inconsistency gives guidance on which outcome measures are most likely to be consistent in particular circumstances and facilitates Bayesian meta-analysis with an informative prior for heterogeneity. © 2015 The Authors. Research Synthesis Methods published by John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Ramanjooloo, Yudish; Tholen, David J.; Fohring, Dora; Claytor, Zach; Hung, Denise
2017-10-01
The asteroid community is moving towards the implementation of a new astrometric reporting format. This new format will finally include complementary astrometric uncertainties in the reported observations. The availability of uncertainties will allow ephemeris predictions and orbit solutions to be constrained with greater reliability, thereby improving the efficiency of the community's follow-up and recovery efforts. Our current uncertainty model accounts for uncertainties in centroiding on the trailed stars and the asteroid, and for the uncertainty due to the astrometric solution. The accuracy of our astrometric measurements is reliant on how well we can minimise the offset between the spatial and temporal centroids of the stars and the asteroid. This offset is currently unmodelled and can be caused by variations in cloud transparency, seeing, and tracking inconsistencies. The magnitude zero point of the image, which is affected by fluctuating weather conditions and the catalog bias in the photometric magnitudes, can serve as an indicator of the presence and thickness of clouds. Through comparison of the astrometric uncertainties to the orbit solution residuals, it was apparent that a component of the error analysis remained unaccounted for, as a result of cloud coverage and thickness, telescope tracking inconsistencies and variable seeing. This work will attempt to quantify the tracking inconsistency component. We have acquired a rich dataset with the University of Hawaii 2.24 metre telescope (UH-88 inch) that is well positioned to construct an empirical estimate of the tracking inconsistency component. This work is funded by NASA grant NXX13AI64G.
SIBIS: a Bayesian model for inconsistent protein sequence estimation.
Khenoussi, Walyd; Vanhoutrève, Renaud; Poch, Olivier; Thompson, Julie D
2014-09-01
The prediction of protein coding genes is a major challenge that depends on the quality of genome sequencing, the accuracy of the model used to elucidate the exonic structure of the genes and the complexity of the gene splicing process leading to different protein variants. As a consequence, today's protein databases contain a huge amount of inconsistency, due to both natural variants and sequence prediction errors. We have developed a new method, called SIBIS, to detect such inconsistencies based on the evolutionary information in multiple sequence alignments. A Bayesian framework, combined with Dirichlet mixture models, is used to estimate the probability of observing specific amino acids and to detect inconsistent or erroneous sequence segments. We evaluated the performance of SIBIS on a reference set of protein sequences with experimentally validated errors and showed that the sensitivity is significantly higher than previous methods, with only a small loss of specificity. We also assessed a large set of human sequences from the UniProt database and found evidence of inconsistency in 48% of the previously uncharacterized sequences. We conclude that the integration of quality control methods like SIBIS in automatic analysis pipelines will be critical for the robust inference of structural, functional and phylogenetic information from these sequences. Source code, implemented in C on a linux system, and the datasets of protein sequences are freely available for download at http://www.lbgi.fr/∼julie/SIBIS. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
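The core idea of flagging low-probability residues can be sketched as follows. This toy is not SIBIS: it uses a single symmetric Dirichlet prior as a stand-in for the paper's Dirichlet mixture models, and the pseudocount and threshold values are arbitrary illustrative choices.

```python
def flag_inconsistent(column, alpha=0.5, threshold=0.1):
    """Flag residues in one alignment column whose posterior-mean
    probability (Dirichlet pseudocount smoothing) is below a threshold."""
    amino_acids = "ACDEFGHIKLMNPQRSTVWY"
    counts = {aa: column.count(aa) for aa in amino_acids}
    total = len(column) + alpha * len(amino_acids)
    flagged = []
    for i, aa in enumerate(column):
        p = (counts[aa] + alpha) / total   # posterior mean probability
        if p < threshold:
            flagged.append((i, aa, p))     # candidate inconsistency
    return flagged
```

In a column of nine alanines and one tryptophan, only the tryptophan falls below the threshold and is reported as a candidate error or variant; a real system would, as the abstract describes, combine such evidence across the whole alignment.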
Source encoding in multi-parameter full waveform inversion
NASA Astrophysics Data System (ADS)
Matharu, Gian; Sacchi, Mauricio D.
2018-04-01
Source encoding techniques alleviate the computational burden of sequential-source full waveform inversion (FWI) by considering multiple sources simultaneously rather than independently. The reduced data volume requires fewer forward/adjoint simulations per non-linear iteration. Applications of source-encoded full waveform inversion (SEFWI) have thus far focused on monoparameter acoustic inversion. We extend SEFWI to the multi-parameter case with applications presented for elastic isotropic inversion. Estimating multiple parameters can be challenging as perturbations in different parameters can prompt similar responses in the data. We investigate the relationship between source encoding and parameter trade-off by examining the multi-parameter source-encoded Hessian. Probing of the Hessian demonstrates the convergence of the expected source-encoded Hessian to that of conventional FWI. The convergence implies that the parameter trade-off in SEFWI is comparable to that observed in FWI. A series of synthetic inversions are conducted to establish the feasibility of source-encoded multi-parameter FWI. We demonstrate that SEFWI requires fewer overall simulations than FWI to achieve a target model error for a range of first-order optimization methods. An inversion for spatially inconsistent P-wave (α) and S-wave (β) velocity models corroborates the expectation of comparable parameter trade-off in SEFWI and FWI. The final example demonstrates a shortcoming of SEFWI when confronted with time-windowing in data-driven inversion schemes. The limitation is a consequence of the implicit fixed-spread acquisition assumption in SEFWI. Alternative objective functions, namely the normalized cross-correlation and L1 waveform misfit, do not enable SEFWI to overcome this limitation.
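Why random source encoding preserves the expected misfit can be seen in a toy calculation: with Rademacher (±1) encoding weights, cross-terms between sources cancel in expectation, so the expected energy of the encoded residual equals the sum of the per-source energies. The sketch below is illustrative only (the per-source residual vectors are made up); it averages over all sign patterns of a small example so the cancellation is exact rather than approximate:

```python
import itertools

def encoded_energy(residuals, weights):
    """Energy of the weighted superposition of per-source residuals."""
    n = len(residuals[0])
    s = [sum(w * r[i] for w, r in zip(weights, residuals)) for i in range(n)]
    return sum(v * v for v in s)

# Hypothetical per-source residual vectors (one per "shot").
residuals = [[1.0, 2.0], [0.5, -1.0], [2.0, 0.0], [-1.0, 1.0]]
seq_energy = sum(sum(v * v for v in r) for r in residuals)

# Average the encoded energy over ALL Rademacher sign patterns:
# cross-terms cancel exactly, recovering the sequential-source sum.
patterns = list(itertools.product([1.0, -1.0], repeat=len(residuals)))
avg = sum(encoded_energy(residuals, w) for w in patterns) / len(patterns)
```

In practice one draws a fresh random encoding per iteration instead of enumerating patterns; the same cancellation then holds only in expectation, which is the source of the crosstalk noise that SEFWI must average down.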
Butnariu, Dan; Censor, Yair; Gurfil, Pini; Hadar, Ethan
2010-01-01
We study some methods of subgradient projections for solving a convex feasibility problem with general (not necessarily hyperplanes or half-spaces) convex sets in the inconsistent case and propose a strategy that controls the relaxation parameters in a specific self-adapting manner. This strategy leaves enough user-flexibility but gives a mathematical guarantee for the algorithm’s behavior in the inconsistent case. We present numerical results of computational experiments that illustrate the computational advantage of the new method. PMID:20182556
Butnariu, Dan; Censor, Yair; Gurfil, Pini; Hadar, Ethan
2008-07-03
We study some methods of subgradient projections for solving a convex feasibility problem with general (not necessarily hyperplanes or half-spaces) convex sets in the inconsistent case and propose a strategy that controls the relaxation parameters in a specific self-adapting manner. This strategy leaves enough user-flexibility but gives a mathematical guarantee for the algorithm's behavior in the inconsistent case. We present numerical results of computational experiments that illustrate the computational advantage of the new method.
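A minimal sketch of the relaxed subgradient projection iteration for a convex feasibility problem follows. It is not the authors' self-adapting strategy: the two constraint sets, the cyclic control, and the fixed relaxation parameter are illustrative choices.

```python
def subgradient_projection_step(x, f, grad, lam=1.0):
    """One relaxed subgradient projection of x toward {y : f(y) <= 0}.
    For an affine f this is the exact (relaxed) projection."""
    fx = f(x)
    if fx <= 0:
        return x                       # already inside the set
    g = grad(x)
    gg = sum(gi * gi for gi in g)
    step = lam * fx / gg               # relaxed step length
    return [xi - step * gi for xi, gi in zip(x, g)]

# Feasibility problem: halfspace x+y <= 1 intersected with disk x^2+y^2 <= 4.
sets = [
    (lambda p: p[0] + p[1] - 1.0, lambda p: [1.0, 1.0]),
    (lambda p: p[0] ** 2 + p[1] ** 2 - 4.0, lambda p: [2 * p[0], 2 * p[1]]),
]
x = [3.0, 3.0]
for _ in range(100):
    for f, grad in sets:               # cyclic control of the constraints
        x = subgradient_projection_step(x, f, grad)
```

The papers above concern the harder inconsistent case (empty intersection), where the choice of relaxation parameters governs the algorithm's behavior; this sketch only shows the basic consistent-case iteration.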
Hydraulic Tomography and the Curse of Storativity
NASA Astrophysics Data System (ADS)
Cirpka, O. A.; Li, W.; Englert, A.
2006-12-01
Pumping tests are among the most common techniques for hydrogeological site investigation. Their traditional analysis is based on fitting analytical expressions to measured time series of drawdown. These expressions were derived for homogeneous conditions, whereas all natural aquifers are heterogeneous. This conceptual inconsistency complicates the hydrogeological interpretation of the obtained coefficients. In particular, it has been shown that the heterogeneity of transmissivity is aliased to variability in the estimated storativity. In hydraulic tomography, multiple pumping tests are jointly analyzed. The hydraulic parameters to be estimated are allowed to fluctuate in space. For regularization, a geostatistical smoothness criterion may be introduced. Thus, the inversion results in the most likely spatial distribution of parameters that is consistent with the drawdown measurements and follows a predefined geostatistical model. Applying the restricted maximum likelihood approach, the parameters of the prior covariance function (i.e., the prior variance and correlation length) can be inferred from the data as well. We have applied the quasi-linear geostatistical approach of inverse modeling to drawdown measurements of multiple, overlapping pumping tests performed at the test site Krauthausen near Jülich, Germany. To reduce the computational costs, we have characterized the drawdown curves by their temporal moments. In the estimation of the geostatistical parameters, the measurement error of heads turned out to be of vital importance. The less we trust the data, the larger is the estimated correlation length, resulting in a more uniform distribution of transmissivity. Similar to conventional pumping test analysis, the data analysis points to a high variability of storativity, although the properties making up storativity are known to be only mildly heterogeneous.
We conjecture that the unresolved small-scale spatial variability of conductivity is mapped to variability of storativity. This is rather unfortunate since reliable field data on the variability of storativity are missing. The study underscores that structural information is difficult to extract from hydraulic data alone. Information on length scales and major deterministic features may be gained by geophysical surveying, even if rock-laws directly relating geophysical to hydraulic properties are considered unreliable.
Improvements in clathrate modelling: I. The H2O-CO2 system with various salts
NASA Astrophysics Data System (ADS)
Bakker, Ronald J.; Dubessy, Jean; Cathelineau, Michel
1996-05-01
The formation of clathrates in fluid inclusions during microthermometric measurements is typical for most natural fluid systems, which include a mixture of H2O, gases, and electrolytes. A general model is proposed which gives a complete description of the CO2 clathrate stability field between 253-293 K and 0-200 MPa, and which can be applied to NaCl, KCl, and CaCl2 bearing systems. The basic concept of the model is the equality of the chemical potential of H2O in coexisting phases, after classical clathrate modelling. None of the original clathrate models had used a complete set of the most accurate values for the many parameters involved. The lack of well-defined standard conditions and of a thorough error analysis resulted in inaccurate estimation of clathrate stability conditions. According to our modifications, which include the use of the most accurate parameters available, the semi-empirical model for the binary H2O-CO2 system is improved by the estimation of numerically optimised Kihara parameters σ = 365.9 pm and ε/k = 174.44 K at low pressures, and σ = 363.92 pm and ε/k = 174.46 K at high pressures. Including the error indications of individual parameters involved in clathrate modelling, a range of 365.08-366.52 pm and 171.3-177.8 K allows a 2% accuracy in the modelled CO2 clathrate formation pressure at selected temperatures below Q2 conditions. A combination of the osmotic coefficient for binary salt-H2O systems and Henry's constant for gas-H2O systems is sufficiently accurate to estimate the activity of H2O in aqueous solutions and the stability conditions of clathrate in electrolyte-bearing systems. The available data on salt-bearing systems are inconsistent, but our improved clathrate stability model is able to reproduce average values. The proposed modifications in clathrate modelling can be used to perform more accurate estimations of bulk density and composition of individual fluid inclusions from clathrate melting temperatures.
Our model is included in several computer programs which can be applied to fluid inclusion studies.
NASA Astrophysics Data System (ADS)
Lane, John; Kasparis, Takis; Michaelides, Silas
2016-04-01
The well-known Z-R power law Z = A R^b uses two parameters, A and b, to relate rainfall rate R to measured weather radar reflectivity Z. A common method used by researchers is to compute Z and R from disdrometer data and then extract the A-b parameter pair from a log-linear line fit to a scatter plot of Z-R pairs. Even though it may seem far more truthful to extract the parameter pair from a fit of radar Z versus gauge rainfall rate R_G, the extreme difference in spatial and temporal sampling volumes between radar and rain gauge creates a slew of problems that can generally only be solved by using rain gauge arrays and long sampling averages. Disdrometer-derived A-b parameters are easily obtained and can provide information for the study of stratiform versus convective rainfall. However, an inconsistency appears when comparing averaged A-b pairs from various researchers. Values of b range from 1.26 to 1.51 for both stratiform and convective events. Paradoxically, the values of A fall into three groups: 150 to 200 for convective; 200 to 400 for stratiform; and 400 to 500 again for convective. This apparent inconsistency can be explained by computing the A-b pair using the gamma DSD coupled with a modified drop terminal velocity model, v(D) = α D^β - w, where w is a somewhat artificial constant vertical velocity of the air above the disdrometer. This model predicts three regions of A, corresponding to w < 0, w = 0, and w > 0, which approximately matches observed data.
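The log-linear fitting procedure described above can be sketched as follows; the synthetic disdrometer data are hypothetical and serve only to check that the fit recovers a known A-b pair:

```python
import math

def fit_zr(z_values, r_values):
    """Least-squares fit of log10(Z) = log10(A) + b*log10(R), i.e. the
    Z = A*R^b power law, from disdrometer-derived (Z, R) pairs."""
    xs = [math.log10(r) for r in r_values]
    ys = [math.log10(z) for z in z_values]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))     # slope of the log-log fit
    a = 10.0 ** (my - b * mx)                  # intercept, back-transformed
    return a, b

# Synthetic check: noiseless data generated with A = 300, b = 1.4.
rates = [0.5, 1.0, 2.0, 5.0, 10.0, 25.0, 50.0]       # mm/h
reflectivities = [300.0 * r ** 1.4 for r in rates]   # mm^6/m^3
a, b = fit_zr(reflectivities, rates)
```

With real disdrometer data the scatter about the log-log line is substantial, and, as the abstract notes, the averaged A-b pairs depend strongly on the rainfall regime.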
THE NANOGRAV NINE-YEAR DATA SET: EXCESS NOISE IN MILLISECOND PULSAR ARRIVAL TIMES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lam, M. T.; Jones, M. L.; McLaughlin, M. A.
Gravitational wave (GW) astronomy using a pulsar timing array requires high-quality millisecond pulsars (MSPs), correctable interstellar propagation delays, and high-precision measurements of pulse times of arrival. Here we identify noise in timing residuals that exceeds that predicted for arrival time estimation for MSPs observed by the North American Nanohertz Observatory for Gravitational Waves. We characterize the excess noise using variance and structure function analyses. We find that 26 out of 37 pulsars show inconsistencies with a white-noise-only model based on the short timescale analysis of each pulsar, and we demonstrate that the excess noise has a red power spectrum for 15 pulsars. We also decompose the excess noise into chromatic (radio-frequency-dependent) and achromatic components. Associating the achromatic red-noise component with spin noise and including additional power-spectrum-based estimates from the literature, we estimate a scaling law in terms of spin parameters (frequency and frequency derivative) and data-span length and compare it to the scaling law of Shannon and Cordes. We briefly discuss our results in terms of detection of GWs at nanohertz frequencies.
Contrasting lexical similarity and formal definitions in SNOMED CT: consistency and implications.
Agrawal, Ankur; Elhanan, Gai
2014-02-01
To quantify the presence of and evaluate an approach for detection of inconsistencies in the formal definitions of SNOMED CT (SCT) concepts utilizing a lexical method. Utilizing SCT's Procedure hierarchy, we algorithmically formulated similarity sets: groups of concepts with similar lexical structure of their fully specified name. We formulated five random samples, each with 50 similarity sets, based on the same parameter: number of parents, attributes, groups, all of the former, as well as a randomly selected control sample. All samples' sets were reviewed for types of formal definition inconsistencies: hierarchical, attribute assignment, attribute target values, groups, and definitional. For the Procedure hierarchy, 2111 similarity sets were formulated, covering 18.1% of eligible concepts. The evaluation revealed that 38% (Control) to 70% (different relationships) of similarity sets within the samples exhibited significant inconsistencies. The rate of inconsistencies for the sample with different relationships was highly significant compared to Control, as was the number of attribute assignment and hierarchical inconsistencies within their respective samples. While, at this time of the HITECH initiative, the formal definitions of SCT are only a minor consideration, in the grand scheme of sophisticated, meaningful use of captured clinical data they are essential. However, a significant portion of the concepts in the most semantically complex hierarchy of SCT, the Procedure hierarchy, are modeled inconsistently in a manner that affects their computability. Lexical methods can efficiently identify such inconsistencies and possibly allow for their algorithmic resolution. Copyright © 2013 Elsevier Inc. All rights reserved.
Bancej, Christina M; Maxwell, Colleen J; Snider, Judy
2004-01-01
Background Self-reported information has commonly been used to monitor mammography utilization across populations and time periods. However, longitudinal investigations regarding the prevalence and determinants of inconsistent responses over time and the impact of such responses on population screening estimates are lacking. Methods Based on longitudinal panel data for a representative cohort of Canadian women aged 40+ years (n = 3,537) assessed in the 1994–95 (baseline) and 1996–97 (follow-up) National Population Health Survey (NPHS), we examined the prevalence of inconsistent self-reports of mammography utilization. Logistic regression models were used to estimate the associations between women's baseline sociodemographic and health characteristics and 2 types of inconsistent responses: (i) baseline reports of ever use which were subsequently contradicted by follow-up reports of never use; and (ii) baseline reports of never use which were contradicted by follow-up reports of use prior to 1994–95. Results Among women who reported having a mammogram at baseline, 5.9% (95% confidence interval (CI): 4.6–7.3%) reported at follow-up that they had never had one. Multivariate logistic regression analyses showed that women with such inconsistent responses were more often outside target age groups, from low income households and less likely to report hormone replacement therapy and Pap smear use. Among women reporting never use at baseline and ever use at follow-up, 17.4% (95%CI: 11.7–23.1%) reported their most recent mammogram as occurring prior to 1994–95 (baseline) and such responses were more common among women aged 70+ years and those in poorer health. Conclusions Women with inconsistent responses of type (i), i.e., ever users at baseline but never users at follow-up, appeared to exhibit characteristics typical of never users of mammography screening. 
Although limited by sample size, our preliminary analyses suggest that type (ii) responses are more likely to be the result of recall bias due to competing morbidity and age. Inconsistent responses, if removed from the analyses, may be a greater source of loss to follow-up than deaths/institutionalization or item non-response. PMID:15541176
McTavish, Emily Jane; Steel, Mike; Holder, Mark T
2015-12-01
Statistically consistent estimation of phylogenetic trees or gene trees is possible if pairwise sequence dissimilarities can be converted to a set of distances that are proportional to the true evolutionary distances. Susko et al. (2004) reported some strikingly broad results about the forms of inconsistency in tree estimation that can arise if corrected distances are not proportional to the true distances. They showed that if the corrected distance is a concave function of the true distance, then inconsistency due to long branch attraction will occur. If these functions are convex, then two "long branch repulsion" trees will be preferred over the true tree, though these two incorrect trees are expected to be tied as the preferred tree. Here we extend their results, and demonstrate the existence of a tree shape (which we refer to as a "twisted Farris-zone" tree) for which a single incorrect tree topology will be guaranteed to be preferred if the corrected distance function is convex. We also report that the standard practice of treating gaps in sequence alignments as missing data is sufficient to produce non-linear corrected distance functions if the substitution process is not independent of the insertion/deletion process. Taken together, these results imply inconsistent tree inference under mild conditions. For example, if some positions in a sequence are constrained to be free of substitutions and insertion/deletion events while the remaining sites evolve with independent substitutions and insertion/deletion events, then the distances obtained by treating gaps as missing data can support an incorrect tree topology even given an unlimited amount of data. Copyright © 2015 Elsevier Inc. All rights reserved.
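The long-branch-attraction effect of a concave corrected-distance function can be illustrated with the four-point condition on a small Felsenstein-type tree. This is an illustrative toy (not the paper's twisted Farris-zone construction, which concerns the convex case): applying a concave transform, here the square root, to additive distances flips the preferred split so the two long branches group together.

```python
import math

# Additive distances on the true tree ((A,B),(C,D)): long terminal
# branches (1.0) lead to A and C, short ones (0.1) to B and D, and
# the internal branch has length 0.1.
ba, bb, bc, bd, m = 1.0, 0.1, 1.0, 0.1, 0.1
dist = {
    ("A", "B"): ba + bb, ("C", "D"): bc + bd,
    ("A", "C"): ba + m + bc, ("B", "D"): bb + m + bd,
    ("A", "D"): ba + m + bd, ("B", "C"): bb + m + bc,
}

def best_split(dd):
    """Four-point condition: the split with the smallest pair-sum wins."""
    sums = {
        "AB|CD": dd[("A", "B")] + dd[("C", "D")],
        "AC|BD": dd[("A", "C")] + dd[("B", "D")],
        "AD|BC": dd[("A", "D")] + dd[("B", "C")],
    }
    return min(sums, key=sums.get)

true_pick = best_split(dist)                        # correct: "AB|CD"
concave = {k: math.sqrt(v) for k, v in dist.items()}
biased_pick = best_split(concave)                   # long branches attract
```

On the exact additive distances the true split AB|CD wins (2.2 versus 2.4), but after the concave transform the split AC|BD joining the two long branches is preferred, which is the long branch attraction described above.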
Measuring the economic value of wildlife: a caution
T. H. Stevens
1992-01-01
Wildlife values appear to be very sensitive to whether species are evaluated separately or together, and value estimates often seem inconsistent with neoclassical economic theory. Wildlife value estimates must therefore be used with caution. Additional research about the nature of individual value structures for wildlife is needed.
Investigation of flow and transport processes at the MADE site using ensemble Kalman filter
Liu, Gaisheng; Chen, Y.; Zhang, Dongxiao
2008-01-01
In this work the ensemble Kalman filter (EnKF) is applied to investigate the flow and transport processes at the macro-dispersion experiment (MADE) site in Columbus, MS. The EnKF is a sequential data assimilation approach that adjusts the unknown model parameter values based on the observed data with time. The classic advection-dispersion (AD) and the dual-domain mass transfer (DDMT) models are employed to analyze the tritium plume during the second MADE tracer experiment. The hydraulic conductivity (K), longitudinal dispersivity in the AD model, and mass transfer rate coefficient and mobile porosity ratio in the DDMT model, are estimated in this investigation. Because of its sequential feature, the EnKF allows for the temporal scaling of transport parameters during the tritium concentration analysis. Inverse simulation results indicate that for the AD model to reproduce the extensive spatial spreading of the tritium observed in the field, the K in the downgradient area needs to be increased significantly. The estimated K in the AD model becomes an order of magnitude higher than the in situ flowmeter measurements over a large portion of media. On the other hand, the DDMT model gives an estimation of K that is much more comparable with the flowmeter values. In addition, the simulated concentrations by the DDMT model show a better agreement with the observed values. The root mean square (RMS) between the observed and simulated tritium plumes is 0.77 for the AD model and 0.45 for the DDMT model at 328 days. Unlike the AD model, which gives inconsistent K estimates at different times, the DDMT model is able to invert the K values that consistently reproduce the observed tritium concentrations through all times. © 2008 Elsevier Ltd. All rights reserved.
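The sequential update at the heart of the method above can be sketched as a generic stochastic EnKF analysis step for a parameter ensemble. All names, shapes, and the scalar observation-error standard deviation are illustrative assumptions, not the code used in the MADE study.

```python
import numpy as np

def enkf_update(ensemble, obs, obs_op, obs_err_std, rng):
    """One stochastic EnKF analysis step: nudge an ensemble of parameter
    vectors (n_ens x n_par) toward the observations via the sample
    Kalman gain computed from ensemble statistics."""
    n_ens = ensemble.shape[0]
    hx = np.array([obs_op(m) for m in ensemble])     # predicted observations
    A = ensemble - ensemble.mean(axis=0)             # parameter anomalies
    Hx = hx - hx.mean(axis=0)                        # predicted-obs anomalies
    P_mh = A.T @ Hx / (n_ens - 1)                    # cross-covariance
    P_hh = Hx.T @ Hx / (n_ens - 1) + obs_err_std ** 2 * np.eye(len(obs))
    K = P_mh @ np.linalg.inv(P_hh)                   # sample Kalman gain
    perturbed_obs = obs + rng.normal(0.0, obs_err_std, size=(n_ens, len(obs)))
    return ensemble + (perturbed_obs - hx) @ K.T
```

Calling this repeatedly as new concentration data arrive is what allows the temporal adjustment of transport parameters described in the abstract.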
Hu, Shiang; Yao, Dezhong; Valdes-Sosa, Pedro A
2018-01-01
The choice of reference for the electroencephalogram (EEG) is a long-lasting unsolved issue resulting in inconsistent usages and endless debates. Currently, both the average reference (AR) and the reference electrode standardization technique (REST) are the two primary, apparently irreconcilable contenders. We propose a theoretical framework to resolve this reference issue by formulating both (a) estimation of potentials at infinity, and (b) determination of the reference, as a unified Bayesian linear inverse problem, which can be solved by maximum a posteriori estimation. We find that AR and REST are very particular cases of this unified framework: AR results from a biophysically non-informative prior, while REST utilizes the prior based on the EEG generative model. To allow for simultaneous denoising and reference estimation, we develop the regularized versions of AR and REST, named rAR and rREST, respectively. Both depend on a regularization parameter that is the noise to signal variance ratio. Traditional and new estimators are evaluated with this framework, by both simulations and analysis of real resting EEGs. Toward this end, we leverage the MRI and EEG data from 89 subjects who participated in the Cuban Human Brain Mapping Project. Generated artificial EEGs, with a known ground truth, show that relative error in estimating the EEG potentials at infinity is lowest for rREST. It also reveals that realistic volume conductor models improve the performances of REST and rREST. Importantly, for practical applications, it is shown that an average lead field gives results comparable to the individual lead field. Finally, it is shown that the selection of the regularization parameter with Generalized Cross-Validation (GCV) is close to the "oracle" choice based on the ground truth. When evaluated with the real 89 resting state EEGs, rREST consistently yields the lowest GCV.
This study provides a novel perspective to the EEG reference problem by means of a unified inverse solution framework. It may allow additional principled theoretical formulations and numerical evaluation of performance.
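In the simplest Gaussian setting, the unified framework above reduces to Tikhonov-style MAP estimation for a linear model. The sketch below assumes a generic forward matrix L and a scalar regularization parameter lam (the noise-to-signal variance ratio the abstract attaches to rAR/rREST); it illustrates the form of the solution, not the rAR/rREST implementation.

```python
import numpy as np

def map_linear_inverse(L, v, lam):
    """MAP estimate for v = L @ x + noise with an isotropic Gaussian
    prior on x: x_hat = (L.T L + lam * I)^{-1} L.T v, where lam is
    the noise-to-signal variance ratio."""
    n = L.shape[1]
    return np.linalg.solve(L.T @ L + lam * np.eye(n), L.T @ v)
```

With lam = 0 this is ordinary least squares; increasing lam shrinks the estimate toward zero, which is the simultaneous-denoising behavior the regularized estimators are designed to provide.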
2.7 Why do I see inconsistencies in some NOAA Atlas 14 estimates at boundaries of different NOAA Atlas 14 volumes? Because volumes based on state boundaries were developed at different times, some differences in estimates between volumes exist at volume boundaries. Where estimates were adjusted to improve the transition, they will be different from the ones published on the PFDS around volumes' boundaries.
A human microdose study of the antimalarial drug GSK3191607 in healthy volunteers.
Okour, Malek; Derimanov, Geo; Barnett, Rodger; Fernandez, Esther; Ferrer, Santiago; Gresham, Stephanie; Hossain, Mohammad; Gamo, Francisco-Javier; Koh, Gavin; Pereira, Adrian; Rolfe, Katie; Wong, Deborah; Young, Graeme; Rami, Harshad; Haselden, John
2018-03-01
GSK3191607, a novel inhibitor of the Plasmodium falciparum ATP4 (PfATP4) pathway, is being considered for development in humans. However, a key problem encountered during the preclinical evaluation of the compound was its inconsistent pharmacokinetic (PK) profile across preclinical species (mouse, rat and dog), which prevented reliable prediction of PK parameters in humans and precluded a well-founded assessment of the potential for clinical development of the compound. Therefore, an open-label, first-time-in-human microdose study (100 μg, six subjects) was conducted to assess the human PK of GSK3191607 following intravenous administration of [14C]-GSK3191607, and to enable a Go/No Go decision on further progression of the compound. The PK disposition parameters estimated from the microdose study, combined with preclinical in vitro and in vivo pharmacodynamic parameters, were used to estimate the potential efficacy of various oral dosing regimens in humans. The PK profile, based on the microdose data, demonstrated a half-life (~17 h) similar to other antimalarial compounds currently in clinical development. However, combining the microdose data with the pharmacodynamic data provided results that do not support further clinical development of the compound for a single-dose cure. The information generated by this study provides a basis for predicting the expected oral PK profiles of GSK3191607 in man and supports decisions on the future clinical development of the compound. © 2017 The British Pharmacological Society.
MacDonald, Stuart W S; Hultsch, David F; Bunce, David
2006-07-01
Intraindividual performance variability, or inconsistency, has been shown to predict neurological status, physiological functioning, and age differences and declines in cognition. However, potential moderating factors of inconsistency are not well understood. The present investigation examined whether inconsistency in vigilance response latencies varied as a function of time-on-task and task demands by degrading visual stimuli in three separate conditions (10%, 20%, and 30%). Participants were 24 younger women aged 21 to 30 years (M = 24.04, SD = 2.51) and 23 older women aged 61 to 83 years (M = 68.70, SD = 6.38). A measure of within-person inconsistency, the intraindividual standard deviation (ISD), was computed for each individual across reaction time (RT) trials (3 blocks of 45 event trials) for each condition of the vigilance task. Greater inconsistency was observed with increasing stimulus degradation and age, even after controlling for group differences in mean RTs and physical condition. Further, older adults were more inconsistent than younger adults for similar degradation conditions, with ISD scores for younger adults in the 30% condition approximating estimates observed for older adults in the 10% condition. Finally, a measure of perceptual sensitivity shared increasing negative associations with ISDs, with this association further modulated as a function of age but to a lesser degree by degradation condition. Results support current hypotheses suggesting that inconsistency serves as a marker of neurological integrity and are discussed in terms of potential underlying mechanisms.
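The intraindividual standard deviation (ISD) used above is simply the within-person spread of reaction times across trials, usually computed after removing shared trial effects so that practice or fatigue trends do not inflate the estimate. A minimal sketch of that two-step computation (the function name and the simple trial-mean detrending are illustrative assumptions, not the study's exact procedure):

```python
import statistics

def isd_scores(rt_matrix):
    """Intraindividual standard deviations from a persons-by-trials
    matrix of reaction times: remove the average trial effect, then
    take each person's SD of the residuals."""
    n_trials = len(rt_matrix[0])
    trial_means = [statistics.mean(p[t] for p in rt_matrix)
                   for t in range(n_trials)]
    residuals = [[p[t] - trial_means[t] for t in range(n_trials)]
                 for p in rt_matrix]
    return [statistics.stdev(r) for r in residuals]
```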
Random sampling and validation of covariance matrices of resonance parameters
NASA Astrophysics Data System (ADS)
Plevnik, Lucijan; Zerovnik, Gašper
2017-09-01
Analytically exact methods for random sampling of arbitrarily correlated parameters are presented. Emphasis is given, on the one hand, to possible inconsistencies in the covariance data, concentrating on positive semi-definiteness and on consistent sampling of correlated, inherently positive parameters, and, on the other hand, to optimization of the implementation of the methods themselves. The methods have been applied in the program ENDSAM, written in Fortran, which, from a nuclear data library file for a chosen isotope in ENDF-6 format, produces an arbitrary number of new ENDF-6 files containing random samples of the resonance parameters (in accordance with the corresponding covariance matrices) in place of the original values. The source code for ENDSAM is available from the OECD/NEA Data Bank. The program works in the following steps: it reads resonance parameters and their covariance data from the nuclear data library, checks whether the covariance data are consistent, and produces random samples of the resonance parameters. The code has been validated with both realistic and artificial data to show that the produced samples are statistically consistent. Additionally, the code was used to validate covariance data in existing nuclear data libraries. A list of inconsistencies observed in the covariance data of resonance parameters in ENDF-VII.1, JEFF-3.2 and JENDL-4.0 is presented. For now, the work has been limited to resonance parameters; however, the methods presented are general and can in principle be extended to sampling and validation of any nuclear data.
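The two steps the program couples — checking a covariance matrix for positive semi-definiteness and then drawing correlated samples consistent with it — can be sketched as follows. This is an illustrative eigendecomposition-based sampler, not the Fortran ENDSAM implementation; the tolerance and function name are assumptions.

```python
import numpy as np

def sample_correlated(mean, cov, n, rng):
    """Draw n correlated-parameter samples with the given mean and
    covariance, after verifying the covariance is positive
    semi-definite. Eigendecomposition handles exactly singular
    (rank-deficient) covariance matrices."""
    w, V = np.linalg.eigh(cov)
    if w.min() < -1e-10 * max(w.max(), 1.0):
        raise ValueError("covariance matrix is not positive semi-definite")
    w = np.clip(w, 0.0, None)          # zero out numerical noise
    z = rng.normal(size=(n, len(mean)))
    return mean + z @ (V * np.sqrt(w)).T
```

For inherently positive parameters, the same machinery is typically applied to log-transformed parameters so that every sample remains positive.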
Relative-Error-Covariance Algorithms
NASA Technical Reports Server (NTRS)
Bierman, Gerald J.; Wolff, Peter J.
1991-01-01
Two algorithms compute error covariance of difference between optimal estimates, based on data acquired during overlapping or disjoint intervals, of state of discrete linear system. Provides quantitative measure of mutual consistency or inconsistency of estimates of states. Relative-error-covariance concept applied, to determine degree of correlation between trajectories calculated from two overlapping sets of measurements and construct real-time test of consistency of state estimates based upon recently acquired data.
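In the simplest Gaussian setting, the quantity described above is the covariance of the difference of two state estimates. A hedged sketch of that formula (the function name and the zero-cross-covariance default for disjoint data intervals are assumptions, not the NASA algorithm itself):

```python
import numpy as np

def relative_error_covariance(P1, P2, C12=None):
    """Covariance of the difference between two estimates with error
    covariances P1 and P2 and cross-covariance C12 (taken as zero
    when the estimates are based on disjoint data)."""
    if C12 is None:
        C12 = np.zeros_like(P1)
    return P1 + P2 - C12 - C12.T
```

A difference between the two estimates that is large relative to this covariance (e.g., by a Mahalanobis-distance test) flags the estimates as mutually inconsistent.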
48 CFR 1552.215-72 - Instructions for the Preparation of Proposals.
Code of Federal Regulations, 2010 CFR
2010-10-01
..., “Payments Under Time and Materials and Labor-Hour Contracts,” include in the cost proposal the estimated... to reflect the Government's estimate of the offeror's probable costs. Any inconsistency, whether real... hours are the workable hours required by the Government and do not include release time (i.e., holidays...
Simultaneous inversion of multiple land surface parameters from MODIS optical-thermal observations
NASA Astrophysics Data System (ADS)
Ma, Han; Liang, Shunlin; Xiao, Zhiqiang; Shi, Hanyu
2017-06-01
Land surface parameters from remote sensing observations are critical in monitoring and modeling of global climate change and biogeochemical cycles. Current methods for estimating land surface variables usually focus on individual parameters separately even from the same satellite observations, resulting in inconsistent products. Moreover, no efforts have been made to generate global products from integrated observations from the optical to Thermal InfraRed (TIR) spectrum. Particularly, Middle InfraRed (MIR) observations have received little attention due to the complexity of the radiometric signal, which contains both reflected and emitted radiation. In this paper, we propose a unified algorithm for simultaneously retrieving six land surface parameters - Leaf Area Index (LAI), Fraction of Absorbed Photosynthetically Active Radiation (FAPAR), land surface albedo, Land Surface Emissivity (LSE), Land Surface Temperature (LST), and Upwelling Longwave radiation (LWUP) by exploiting MODIS visible-to-TIR observations. We incorporate a unified physical radiative transfer model into a data assimilation framework. The MODIS visible-to-TIR time series datasets include the daily surface reflectance product and MIR-to-TIR surface radiance, which are atmospherically corrected from the MODIS data using the Moderate Resolution Transmittance program (MODTRAN, ver. 5.0). LAI was first estimated using a data assimilation method that combines MODIS daily reflectance data and a LAI phenology model, and then the LAI was input to the unified radiative transfer model to simulate spectral surface reflectance and surface emissivity for calculating surface broadband albedo and emissivity, and FAPAR. LST was estimated from the MIR-TIR surface radiance data and the simulated emissivity, using an iterative optimization procedure. Lastly, LWUP was estimated using the LST and surface emissivity. 
The retrieved six parameters were extensively validated across six representative sites with different biome types, and compared with MODIS, GLASS, and GlobAlbedo land surface products. The results demonstrate that the unified inversion algorithm can retrieve temporally complete and physically consistent land surface parameters, and provides more accurate estimates of surface albedo, LST, and LWUP than existing products, with R2 values of 0.93 and 0.62, RMSE of 0.029 and 0.037, and BIAS values of 0.016 and 0.012 for the retrieved and MODIS albedo products, respectively, compared with field albedo measurements; R2 values of 0.95 and 0.93, RMSE of 2.7 and 4.2 K, and BIAS values of -0.6 and -2.7 K for the retrieved and MODIS LST products, respectively, compared with field LST measurements; and R2 values of 0.93 and 0.94, RMSE of 18.2 and 22.8 W/m2, and BIAS values of -2.7 and -14.6 W/m2 for the retrieved and MODIS LWUP products, respectively, compared with field LWUP measurements.
Nelson, Nathaniel W; Anderson, Carolyn R; Thuras, Paul; Kehle-Forbes, Shannon M; Arbisi, Paul A; Erbes, Christopher R; Polusny, Melissa A
2015-03-01
Estimates of the prevalence of mild traumatic brain injury (mTBI) among military personnel and combat veterans rely almost exclusively on retrospective self-reports; however, reliability of these reports has received little attention. To examine the consistency of reporting of mTBI over time and identify factors associated with inconsistent reporting. A longitudinal cohort of 948 US National Guard Soldiers deployed to Iraq completed self-report questionnaire screening for mTBI and psychological symptoms while in-theatre 1 month before returning home (time 1, T1) and 1 year later (time 2, T2). Most respondents (n = 811, 85.5%) were consistent in their reporting of mTBI across time. Among those who were inconsistent in their reports (n = 137, 14.5%), the majority denied mTBI at T1 and affirmed mTBI at T2 (n = 123, 89.8%). Respondents rarely endorsed mTBI in-theatre and later denied mTBI (n = 14, 10.2% of those with inconsistent reports). Post-deployment post-traumatic stress symptoms and non-specific physical complaints were significantly associated with inconsistent report of mTBI. Military service members' self-reports of mTBI are generally consistent over time; however, inconsistency in retrospective self-reporting of mTBI status is associated with current post-traumatic stress symptoms and non-specific physical health complaints. Royal College of Psychiatrists.
Halliday, Drew W R; Stawski, Robert S; MacDonald, Stuart W S
2017-02-01
Response time inconsistency (RTI) in cognitive performance predicts deleterious health outcomes in late-life; however, RTI estimates are often confounded by additional influences (e.g., individual differences in learning). Finger tapping is a basic sensorimotor measure largely independent of higher-order cognition that may circumvent such confounds of RTI estimates. We examined the within-person coupling of finger-tapping mean and RTI on working memory, and the moderation of these associations by cognitive status. A total of 262 older adults were recruited and classified as controls, cognitively-impaired-not-demented (CIND) unstable or CIND stable. Participants completed finger-tapping and working-memory tasks during multiple weekly assessments, repeated annually for 4 years. Within-person coupling estimates from multilevel models indicated that on occasions when RTI was greater, working-memory response latency was slower for the CIND-stable, but not for the CIND-unstable or control individuals. The finger-tapping task shows potential for minimizing confounds on RTI estimates, and for yielding RTI estimates sensitive to central nervous system function and cognitive status. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Buchin, Kevin; Sijben, Stef; van Loon, E Emiel; Sapir, Nir; Mercier, Stéphanie; Marie Arseneau, T Jean; Willems, Erik P
2015-01-01
The Brownian bridge movement model (BBMM) provides a biologically sound approximation of the movement path of an animal based on discrete location data, and is a powerful method to quantify utilization distributions. Computing the utilization distribution based on the BBMM while calculating movement parameters directly from the location data, may result in inconsistent and misleading results. We show how the BBMM can be extended to also calculate derived movement parameters. Furthermore we demonstrate how to integrate environmental context into a BBMM-based analysis. We develop a computational framework to analyze animal movement based on the BBMM. In particular, we demonstrate how a derived movement parameter (relative speed) and its spatial distribution can be calculated in the BBMM. We show how to integrate our framework with the conceptual framework of the movement ecology paradigm in two related but acutely different ways, focusing on the influence that the environment has on animal movement. First, we demonstrate an a posteriori approach, in which the spatial distribution of average relative movement speed as obtained from a "contextually naïve" model is related to the local vegetation structure within the monthly ranging area of a group of wild vervet monkeys. Without a model like the BBMM it would not be possible to estimate such a spatial distribution of a parameter in a sound way. Second, we introduce an a priori approach in which atmospheric information is used to calculate a crucial parameter of the BBMM to investigate flight properties of migrating bee-eaters. This analysis shows significant differences in the characteristics of flight modes, which would have not been detected without using the BBMM. 
Our algorithm is the first of its kind to allow BBMM-based computation of movement parameters beyond the utilization distribution, and we present two case studies that demonstrate two fundamentally different ways in which our algorithm can be applied to estimate the spatial distribution of average relative movement speed, while interpreting it in a biologically meaningful manner, across a wide range of environmental scenarios and ecological contexts. Therefore movement parameters derived from the BBMM can provide a powerful method for movement ecology research.
Response Error in Reporting Dental Coverage by Older Americans in the Health and Retirement Study
Manski, Richard J.; Mathiowetz, Nancy A.; Campbell, Nancy; Pepper, John V.
2014-01-01
The aim of this research was to analyze the inconsistency in responses to survey questions within the Health and Retirement Study (HRS) regarding insurance coverage of dental services. Self-reports of dental coverage in the dental services section were compared with those in the insurance section of the 2002 HRS to identify inconsistent responses. Logistic regression identified characteristics of persons reporting discrepancies and assessed the effect of measurement error on dental coverage coefficient estimates in dental utilization models. In 18% of cases, data reported in the insurance section contradicted data reported in the dental use section of the HRS by those who said insurance at least partially covered (or would have covered) their (hypothetical) dental use. Additional findings included distinct characteristics of persons with potential reporting errors and a downward bias to the regression coefficient for coverage in a dental use model without controls for inconsistent self-reports of coverage. This study offers evidence for the need to validate self-reports of dental insurance coverage among a survey population of older Americans to obtain more accurate estimates of coverage and its impact on dental utilization. PMID:25428430
Numerical simulations of high-energy flows in accreting magnetic white dwarfs
NASA Astrophysics Data System (ADS)
Van Box Som, Lucile; Falize, É.; Bonnet-Bidaud, J.-M.; Mouchet, M.; Busschaert, C.; Ciardi, A.
2018-01-01
Some polars show quasi-periodic oscillations (QPOs) in their optical light curves that have been interpreted as the result of shock oscillations driven by the cooling instability. Although numerical simulations can recover this physics, they wrongly predict QPOs in the X-ray luminosity and have also failed to reproduce the observed frequencies, at least for the limited range of parameters explored so far. Given the uncertainties on the observed polar parameters, it is still unclear whether simulations can reproduce the observations. The aim of this work is to study shock oscillations across the full range of relevant polars showing QPOs. We perform numerical simulations including gravity, cyclotron and bremsstrahlung radiative losses, for a wide range of polar parameters, and compare our results with the astronomical data using synthetic X-ray and optical luminosities. We show that shock oscillations are the result of complex shock dynamics triggered by the interplay of two radiative instabilities. The secondary shock forms at the acoustic horizon in the post-shock region in agreement with our estimates from steady-state solutions. We also demonstrate that the secondary shock is essential to sustain the accretion shock oscillations at the average height predicted by our steady-state accretion model. Finally, in spite of the large explored parameter space, matching the observed QPO parameters requires a combination of parameters inconsistent with the observed ones. This difficulty highlights the limits of one-dimensional simulations, suggesting that multi-dimensional effects are needed to understand the non-linear dynamics of accretion columns in polars and the origins of QPOs.
Calculating wave-generated bottom orbital velocities from surface-wave parameters
Wiberg, P.L.; Sherwood, C.R.
2008-01-01
Near-bed wave orbital velocities and shear stresses are important parameters in many sediment-transport and hydrodynamic models of the coastal ocean, estuaries, and lakes. Simple methods for estimating bottom orbital velocities from surface-wave statistics such as significant wave height and peak period often are inaccurate except in very shallow water. This paper briefly reviews approaches for estimating wave-generated bottom orbital velocities from near-bed velocity data, surface-wave spectra, and surface-wave parameters; MATLAB code for each approach is provided. Aspects of this problem have been discussed elsewhere. We add to this work by providing a method for using a general form of the parametric surface-wave spectrum to estimate bottom orbital velocity from significant wave height and peak period, investigating effects of spectral shape on bottom orbital velocity, comparing methods for calculating bottom orbital velocity against values determined from near-bed velocity measurements at two sites on the US east and west coasts, and considering the optimal representation of bottom orbital velocity for calculations of near-bed processes. Bottom orbital velocities calculated using near-bed velocity data, measured wave spectra, and parametric spectra for a site on the northern California shelf and one in the mid-Atlantic Bight compare quite well and are relatively insensitive to spectral shape except when bimodal waves are present with maximum energy at the higher-frequency peak. These conditions, which are most likely to occur at times when bottom orbital velocities are small, can be identified with our method as cases where the measured wave statistics are inconsistent with Donelan's modified form of the Joint North Sea Wave Project (JONSWAP) spectrum. 
We define the 'effective' forcing for wave-driven, near-bed processes as the product of the magnitude of forcing times its probability of occurrence, and conclude that different bottom orbital velocity statistics may be appropriate for different problems. © 2008 Elsevier Ltd.
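The spectral approach reviewed above rests on linear wave theory: each frequency component of the surface spectrum is mapped to the bed through the dispersion relation, and the component variances are summed. The paper provides MATLAB code; the Python sketch below is an illustrative reimplementation of the general idea, with a Newton solver for the dispersion relation and a uniform-frequency-spacing assumption.

```python
import math

GRAV = 9.81  # gravitational acceleration, m/s^2

def wavenumber(f, h):
    """Solve the linear dispersion relation
    (2*pi*f)**2 = g * k * tanh(k * h) for k by Newton iteration."""
    omega2 = (2.0 * math.pi * f) ** 2
    k = omega2 / GRAV                     # deep-water first guess
    for _ in range(50):
        t = math.tanh(k * h)
        fval = GRAV * k * t - omega2
        dfdk = GRAV * t + GRAV * k * h * (1.0 - t * t)
        k -= fval / dfdk
    return k

def bottom_orbital_velocity(freqs, spec, h):
    """Representative bottom orbital velocity u_br = sqrt(2 * variance
    of near-bed velocity), from a surface elevation spectrum spec(f)
    [m^2/Hz] at uniformly spaced freqs [Hz] in water depth h [m]."""
    df = freqs[1] - freqs[0]
    var_ub = 0.0
    for f, s in zip(freqs, spec):
        k = wavenumber(f, h)
        transfer = 2.0 * math.pi * f / math.sinh(k * h)  # surface-to-bed
        var_ub += transfer ** 2 * s * df
    return math.sqrt(2.0 * var_ub)
```

Because sinh(kh) grows rapidly with frequency, high-frequency energy contributes little at the bed, which is why bimodal spectra with a dominant high-frequency peak are the problematic case noted in the abstract.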
Parameter estimation for lithium ion batteries
NASA Astrophysics Data System (ADS)
Santhanagopalan, Shriram
With an increase in the demand for lithium based batteries at the rate of about 7% per year, the amount of effort put into improving the performance of these batteries from both experimental and theoretical perspectives is increasing. There exist a number of mathematical models ranging from simple empirical models to complicated physics-based models to describe the processes leading to failure of these cells. The literature is also rife with experimental studies that characterize the various properties of the system in an attempt to improve the performance of lithium ion cells. However, very little has been done to quantify the experimental observations and relate these results to the existing mathematical models. In fact, the best of the physics based models in the literature show as much as 20% discrepancy when compared to experimental data. The reasons for such a big difference include, but are not limited to, numerical complexities involved in extracting parameters from experimental data and inconsistencies in interpreting directly measured values for the parameters. In this work, an attempt has been made to implement simplified models to extract parameter values that accurately characterize the performance of lithium ion cells. The validity of these models under a variety of experimental conditions is verified using a model discrimination procedure. Transport and kinetic properties are estimated using a non-linear estimation procedure. The initial state of charge inside each electrode is also maintained as an unknown parameter, since this value plays a significant role in accurately matching experimental charge/discharge curves with model predictions and is not readily known from experimental data. The second part of the dissertation focuses on parameters that change rapidly with time. 
For example, in the case of lithium ion batteries used in Hybrid Electric Vehicle (HEV) applications, the prediction of the State of Charge (SOC) of the cell under a variety of road conditions is important. An algorithm to predict the SOC in time intervals as small as 5 ms is of critical demand. In such cases, the conventional non-linear estimation procedure is not time-effective. There exist methodologies in the literature, such as those based on fuzzy logic; however, these techniques require a lot of computational storage space. Consequently, it is not possible to implement such techniques on a micro-chip for integration as a part of a real-time device. The Extended Kalman Filter (EKF) based approach presented in this work is a first step towards developing an efficient method to predict online, the State of Charge of a lithium ion cell based on an electrochemical model. The final part of the dissertation focuses on incorporating uncertainty in parameter values into electrochemical models using the polynomial chaos theory (PCT).
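The EKF recursion referred to above can be sketched for the simplest case: a coulomb-counting state equation with a voltage measurement. The equivalent-circuit model, noise values, and the ocv/docv_dsoc callables below are illustrative assumptions for a scalar-state filter, not the dissertation's electrochemical model.

```python
def ekf_soc_step(soc, P, current, v_meas, dt, q_cap, r0, ocv, docv_dsoc,
                 q_proc=1e-7, r_meas=1e-4):
    """One Extended Kalman Filter step for state-of-charge estimation.
    State equation: soc' = soc - current*dt/q_cap (coulomb counting).
    Measurement: v = ocv(soc) - current*r0, linearized via docv_dsoc."""
    # Predict (the state transition is linear in soc)
    soc_pred = soc - current * dt / q_cap
    P_pred = P + q_proc
    # Update: linearize the measurement around the predicted state
    H = docv_dsoc(soc_pred)
    v_pred = ocv(soc_pred) - current * r0
    K = P_pred * H / (H * P_pred * H + r_meas)
    soc_new = soc_pred + K * (v_meas - v_pred)
    P_new = (1.0 - K * H) * P_pred
    return soc_new, P_new
```

Each step involves only a handful of arithmetic operations, which is what makes millisecond-scale updates on embedded hardware feasible, in contrast to a full non-linear estimation run.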
Post-stratified estimation: with-in strata and total sample size recommendations
James A. Westfall; Paul L. Patterson; John W. Coulston
2011-01-01
Post-stratification is used to reduce the variance of estimates of the mean. Because the stratification is not fixed in advance, within-strata sample sizes can be quite small. The survey statistics literature provides some guidance on minimum within-strata sample sizes; however, the recommendations and justifications are inconsistent and apply broadly for many...
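The estimator under discussion weights each stratum's sample mean by its known population proportion, so a tiny within-stratum sample makes that stratum's contribution (and the variance estimate) unstable. A minimal sketch of the point estimator (names are illustrative; collapsing rules for small strata vary by agency):

```python
def post_stratified_mean(samples_by_stratum, weights):
    """Post-stratified estimate of the population mean: the sum over
    strata of (known population weight) * (stratum sample mean)."""
    est = 0.0
    for h, ys in samples_by_stratum.items():
        if not ys:
            raise ValueError(f"empty stratum {h!r}: collapse strata or refit")
        est += weights[h] * sum(ys) / len(ys)
    return est
```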
NASA Astrophysics Data System (ADS)
Ilyas, Usman; Rawat, R. S.; Tan, T. L.
2013-10-01
This paper reports the tailoring of acceptor defects in oxygen rich ZnO thin films at different post-deposition annealing temperatures (500-800°C) and Mn doping concentrations. The XRD spectra exhibited the nanocrystalline nature of ZnO thin films along with inconsistent variation in lattice parameters suggesting the temperature-dependent activation of structural defects. Photoluminescence emission spectra revealed the temperature dependent variation in deep level emissions (DLE) with the presence of acceptors as dominating defects. The concentration of native defects was estimated to be increased with temperature while a reverse trend was observed for those with increasing doping concentration. A consistent decrease in DLE spectra, with increasing Mn content, revealed the quenching of structural defects in the optical band gap of ZnO favorable for good quality thin films with enhanced optical transparency.
NASA Astrophysics Data System (ADS)
Jarvis, M. J.; Jenkins, B.; Rodgers, G. A.
1998-09-01
F region peak heights, derived from scaled ionospheric parameters in 38-year data series from both Argentine Islands (65°S, 64°W) and Port Stanley (52°S, 58°W), have been analyzed for signatures of secular change. Long-term changes in altitude, which vary with month and time of day, were found at both sites. The results can be interpreted either as a constant decrease in altitude combined with a decreasing thermospheric wind effect, or as a decrease in altitude that is itself altitude-dependent. Both interpretations leave inconsistencies when the results from the two sites are compared. The estimated long-term decrease in altitude is of a similar order of magnitude to that which has been predicted to result in the thermosphere from anthropogenic change related to greenhouse gases. Other possibilities should not, however, be ruled out.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malo, Lison; Doyon, René; Albert, Loïc
2014-09-01
Based on high-resolution optical spectra obtained with ESPaDOnS at the Canada-France-Hawaii Telescope, we determine fundamental parameters (T_eff, R, L_bol, log g, and metallicity) for 59 candidate members of nearby young kinematic groups. The candidates were identified through the BANYAN Bayesian inference method of Malo et al., which takes into account the position, proper motion, magnitude, color, radial velocity, and parallax (when available) to establish a membership probability. The derived parameters are compared to Dartmouth magnetic evolutionary models and field stars with the goal of constraining the age of our candidates. We find that, in general, low-mass stars in our sample are more luminous and have inflated radii compared to older stars, a trend expected for pre-main-sequence stars. The Dartmouth magnetic evolutionary models show a good fit to observations of field K and M stars, assuming a magnetic field strength of a few kG, as typically observed for cool stars. Using the low-mass members of the β Pictoris moving group, we have re-examined the age inconsistency problem between lithium depletion age and isochronal age (Hertzsprung-Russell diagram). We find that the inclusion of the magnetic field in evolutionary models increases the isochronal age estimates for the K5V-M5V stars. Using these models and field strengths, we derive an average isochronal age between 15 and 28 Myr and we confirm a clear lithium depletion boundary from which an age of 26 ± 3 Myr is derived, consistent with previous age estimates based on this method.
NASA Astrophysics Data System (ADS)
Schoups, G.; Vrugt, J. A.; Fenicia, F.; van de Giesen, N. C.
2010-10-01
Conceptual rainfall-runoff models have traditionally been applied without paying much attention to numerical errors induced by temporal integration of water balance dynamics. Reliance on first-order, explicit, fixed-step integration methods leads to computationally cheap simulation models that are easy to implement. Computational speed is especially desirable for estimating parameter and predictive uncertainty using Markov chain Monte Carlo (MCMC) methods. Confirming earlier work of Kavetski et al. (2003), we show here that the computational speed of first-order, explicit, fixed-step integration methods comes at a cost: for a case study with a spatially lumped conceptual rainfall-runoff model, it introduces artificial bimodality in the marginal posterior parameter distributions, which is not present in numerically accurate implementations of the same model. The resulting effects on MCMC simulation include (1) inconsistent estimates of posterior parameter and predictive distributions, (2) poor performance and slow convergence of the MCMC algorithm, and (3) unreliable convergence diagnosis using the Gelman-Rubin statistic. We studied several alternative numerical implementations to remedy these problems, including various adaptive-step finite difference schemes and an operator splitting method. Our results show that adaptive-step, second-order methods, based on either explicit finite differencing or operator splitting with analytical integration, provide the best alternative for accurate and efficient MCMC simulation. Fixed-step or adaptive-step implicit methods may also be used for increased accuracy, but they cannot match the efficiency of adaptive-step explicit finite differencing or operator splitting. Of the latter two, explicit finite differencing is more generally applicable and is preferred if the individual hydrologic flux laws cannot be integrated analytically, as the splitting method then loses its advantage.
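The numerical point above can be illustrated with a toy linear-reservoir model, dS/dt = P - k*S, a stand-in for a lumped conceptual store with hypothetical parameter values: a first-order explicit fixed-step Euler scheme versus a second-order adaptive Heun scheme with simple step-doubling error control. This is a minimal sketch of the class of schemes discussed, not the paper's hydrologic model.

```python
import math

def euler_fixed(k, P, S0, T, dt):
    # First-order explicit Euler with a fixed step: cheap, but the step size
    # is never checked against the error it introduces.
    S, t = S0, 0.0
    while t < T - 1e-12:
        S += dt * (P - k * S)
        t += dt
    return S

def heun_adaptive(k, P, S0, T, tol=1e-6):
    # Second-order explicit Heun scheme with error control: the difference
    # between the Euler predictor and Heun corrector estimates the step error.
    S, t, dt = S0, 0.0, T / 10.0
    while t < T - 1e-12:
        dt = min(dt, T - t)
        f0 = P - k * S
        S_pred = S + dt * f0                  # Euler predictor
        f1 = P - k * S_pred
        S_new = S + 0.5 * dt * (f0 + f1)      # Heun corrector
        err = abs(S_new - S_pred)
        if err > tol and dt > 1e-6:
            dt *= 0.5                         # reject step, retry smaller
            continue
        S, t = S_new, t + dt
        if err < tol / 4:
            dt *= 2.0                         # grow step when error is small
    return S

# dS/dt = P - k*S has an exact solution, used here as the benchmark.
k, P, S0, T = 0.5, 2.0, 10.0, 5.0
exact = P / k + (S0 - P / k) * math.exp(-k * T)
```

With a coarse fixed step the Euler result is visibly biased, while the adaptive second-order scheme matches the exact solution closely, mirroring the accuracy/cost trade-off the abstract describes.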
Development of an Inconsistent Responding Scale for the Triarchic Psychopathy Measure.
Mowle, Elyse N; Kelley, Shannon E; Edens, John F; Donnellan, M Brent; Smith, Shannon Toney; Wygant, Dustin B; Sellbom, Martin
2017-08-01
Inconsistent or careless responding to self-report measures is estimated to occur in approximately 10% of university research participants and may be even more common among offender populations. Inconsistent responding may be a result of a number of factors including inattentiveness, reading or comprehension difficulties, and cognitive impairment. Many stand-alone personality scales used in applied and research settings, however, do not include validity indicators to help identify inattentive response patterns. Using multiple archival samples, the current study describes the development of an inconsistent responding scale for the Triarchic Psychopathy Measure (TriPM; Patrick, 2010), a widely used self-report measure of psychopathy. We first identified pairs of correlated TriPM items in a derivation sample (N = 2,138) and then created a total score based on the sum of the absolute value of the differences for each item pair. The resulting scale, the Triarchic Assessment Procedure for Inconsistent Responding (TAPIR), strongly differentiated between genuine TriPM protocols and randomly generated TriPM data (N = 1,000), as well as between genuine protocols and those in which 50% of the original data were replaced with random item responses. TAPIR scores demonstrated fairly consistent patterns of association with some theoretically relevant correlates (e.g., inconsistency scales embedded in other personality inventories), although not others (e.g., measures of conscientiousness) across our cross-validation samples. Tentative TAPIR cut scores that may discriminate between attentively and carelessly completed protocols are presented. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
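The TAPIR total score described above is simple to compute: sum the absolute response differences over the pre-identified correlated item pairs. A minimal sketch, where the item pairs and responses are hypothetical rather than the published TriPM pairs:

```python
import random

def tapir_score(responses, item_pairs):
    # Inconsistency score: sum of |difference| over correlated item pairs.
    # Higher scores suggest inattentive or random responding.
    return sum(abs(responses[i] - responses[j]) for i, j in item_pairs)

random.seed(0)
# Hypothetical 10-item scale rated 0-3, with five correlated pairs.
pairs = [(0, 1), (2, 3), (4, 5), (6, 7), (8, 9)]

# An attentive respondent answers paired items almost identically...
consistent = [2, 2, 0, 0, 3, 3, 1, 1, 2, 2]
# ...while a random responder does not.
careless = [random.randint(0, 3) for _ in range(10)]

score_c = tapir_score(consistent, pairs)
score_r = tapir_score(careless, pairs)
```

A cut score on this statistic is then what discriminates attentively from carelessly completed protocols, as the study proposes for the TAPIR.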
Consistency between direct and indirect trial evidence: is direct evidence always more reliable?
Madan, Jason; Stevenson, Matt D; Cooper, Katy L; Ades, A E; Whyte, Sophie; Akehurst, Ron
2011-01-01
To present a case study involving the reduction in incidence of febrile neutropenia (FN) after chemotherapy with granulocyte colony-stimulating factors (G-CSFs), illustrating difficulties that may arise when following the common preference for direct evidence over indirect evidence. Evidence of the efficacy of treatments was identified from two previous systematic reviews. We used Bayesian evidence synthesis to estimate relative treatment effects based on direct evidence, indirect evidence, and both pooled together. We checked for inconsistency between direct and indirect evidence and explored the role of one specific trial using cross-validation. A subsequent review identified further studies not available at the time of the original analysis. We repeated the analyses on the enlarged evidence base. We found substantial inconsistency in the original evidence base. The median odds ratio of FN for primary pegfilgrastim versus no primary G-CSF was 0.06 (95% credible interval: 0.02-0.19) based on direct evidence, but 0.27 (95% credible interval: 0.13-0.53) based on indirect evidence (P value for consistency hypothesis 0.027). The additional trials were consistent with the earlier indirect, rather than the direct, evidence, and there was no inconsistency between direct and indirect estimates in the updated evidence. The earlier inconsistency was due to one trial comparing primary pegfilgrastim with no primary G-CSF. Predictive cross-validation showed that this study was inconsistent with the evidence as a whole and with other trials making this comparison. Both the Cochrane Handbook and the NICE Methods Guide express a preference for direct evidence. A more robust strategy, which is in line with the accepted principles of evidence synthesis, would be to combine all relevant and appropriate information, whether direct or indirect. Copyright © 2011 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
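The direct-versus-indirect check described here can be sketched with a Bucher-style comparison on the log odds ratio scale. The point estimates echo the abstract's ORs of 0.06 and 0.27, but the chain of comparisons and the standard errors below are assumed for illustration, not taken from the study:

```python
import math

def indirect_estimate(d_ab, se_ab, d_bc, se_bc):
    # Bucher-style indirect comparison on the log odds ratio scale:
    # log OR(A vs C) = log OR(A vs B) + log OR(B vs C); variances add.
    return d_ab + d_bc, math.sqrt(se_ab**2 + se_bc**2)

def inconsistency_z(d_dir, se_dir, d_ind, se_ind):
    # z-statistic for the difference between direct and indirect estimates.
    return (d_dir - d_ind) / math.sqrt(se_dir**2 + se_ind**2)

# Hypothetical chain A vs B and B vs C whose ORs multiply to the abstract's
# indirect OR of 0.27 for pegfilgrastim vs no primary G-CSF.
d_ind, se_ind = indirect_estimate(math.log(0.45), 0.25, math.log(0.60), 0.245)
# Direct evidence: OR 0.06 (standard error assumed for illustration).
d_dir, se_dir = math.log(0.06), 0.55
z = inconsistency_z(d_dir, se_dir, d_ind, se_ind)  # |z| > 1.96 flags inconsistency
```

With these assumed standard errors the z-statistic exceeds the 5% threshold, the same qualitative conclusion as the reported consistency test (P = 0.027).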
Prediction of the area affected by earthquake-induced landsliding based on seismological parameters
NASA Astrophysics Data System (ADS)
Marc, Odin; Meunier, Patrick; Hovius, Niels
2017-07-01
We present an analytical, seismologically consistent expression for the surface area of the region within which most landslides triggered by an earthquake are located (landslide distribution area). This expression is based on scaling laws relating seismic moment, source depth, and focal mechanism with ground shaking and fault rupture length and assumes a globally constant threshold of acceleration for onset of systematic mass wasting. The seismological assumptions are identical to those recently used to propose a seismologically consistent expression for the total volume and area of landslides triggered by an earthquake. To test the accuracy of the model we gathered geophysical information and estimates of the landslide distribution area for 83 earthquakes. To reduce uncertainties and inconsistencies in the estimation of the landslide distribution area, we propose an objective definition based on the shortest distance from the seismic wave emission line containing 95% of the total landslide area. Without any empirical calibration the model explains 56% of the variance in our dataset, and predicts 35 to 49 out of 83 cases within a factor of 2, depending on how we account for uncertainties on the seismic source depth. For most cases with comprehensive landslide inventories we show that our prediction compares well with the smallest region around the fault containing 95% of the total landslide area. Aspects ignored by the model that could explain the residuals include local variations of the threshold of acceleration and processes modulating the surface ground shaking, such as the distribution of seismic energy release on the fault plane, the dynamic stress drop, and rupture directivity. Nevertheless, its simplicity and first-order accuracy suggest that the model can yield plausible and useful estimates of the landslide distribution area in near-real time, with earthquake parameters issued by standard detection routines.
NASA Astrophysics Data System (ADS)
Saito, M.; Iwabuchi, H.; Yang, P.; Tang, G.; King, M. D.; Sekiguchi, M.
2016-12-01
Cirrus clouds cover about 25% of the globe. Knowledge about the optical and microphysical properties of these clouds [particularly, optical thickness (COT) and effective radius (CER)] is essential to radiative forcing assessment. Previous studies of those properties using satellite remote sensing techniques based on observations by passive and active sensors gave inconsistent retrievals. In particular, COTs from the Cloud Aerosol Lidar with Orthogonal Polarization (CALIOP) using the unconstrained method are affected by variable particle morphology, especially the fraction of horizontally oriented plate particles (HPLT), because the method assumes the lidar ratio to be constant, which should have different values for different ice particle shapes. More realistic ice particle morphology improves estimates of the optical and microphysical properties. In this study, we develop an optimal estimation-based algorithm to infer cirrus COT and CER in addition to morphological parameters (e.g., Fraction of HPLT) using the observations made by CALIOP and the Infrared Imaging Radiometer (IIR) on the CALIPSO platform. The assumed ice particle model is a mixture of a few habits with variable HPLT. Ice particle single-scattering properties are computed using state-of-the-art light-scattering computational capabilities. Rigorous estimation of uncertainties associated with surface properties, atmospheric gases and cloud heterogeneity is performed. The results based on the present method show that COTs are quite consistent with the MODIS and CALIOP counterparts, and CERs essentially agree with the IIR operational retrievals. The lidar ratio is calculated from the bulk optical properties based on the inferred parameters. The presentation will focus on latitudinal variations of particle morphology and the lidar ratio on a global scale.
Environmental cost of using poor decision metrics to prioritize environmental projects.
Pannell, David J; Gibson, Fiona L
2016-04-01
Conservation decision makers commonly use project-scoring metrics that are inconsistent with theory on optimal ranking of projects. As a result, there may often be a loss of environmental benefits. We estimated the magnitudes of these losses for various metrics that deviate from theory in ways that are common in practice. These metrics included cases where relevant variables were omitted from the benefits metric, project costs were omitted, and benefits were calculated using a faulty functional form. We estimated distributions of parameters from 129 environmental projects from Australia, New Zealand, and Italy for which detailed analyses had been completed previously. The cost of using poor prioritization metrics (in terms of lost environmental values) was often high--up to 80% in the scenarios we examined. The cost in percentage terms was greater when the budget was smaller. The most costly errors were omitting information about environmental values (up to 31% loss of environmental values), omitting project costs (up to 35% loss), omitting the effectiveness of management actions (up to 9% loss), and using a weighted-additive decision metric for variables that should be multiplied (up to 23% loss). The latter 3 are errors that occur commonly in real-world decision metrics, in combination often reducing potential benefits from conservation investments by 30-50%. Uncertainty about parameter values also reduced the benefits from investments in conservation projects but often not by as much as faulty prioritization metrics. © 2016 Society for Conservation Biology.
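The cost of a faulty metric arises because it re-ranks projects. A minimal sketch with three hypothetical projects, comparing a theory-consistent benefit-per-cost metric against a weighted-additive score that omits cost:

```python
# Three hypothetical projects: (environmental value, effectiveness, cost).
projects = {
    "A": (90, 0.9, 100),
    "B": (60, 0.8, 20),
    "C": (40, 0.5, 10),
}

def correct_metric(v, e, c):
    # Theory-consistent ranking: expected benefit (value x effectiveness)
    # per unit cost.
    return v * e / c

def additive_metric(v, e, c):
    # A common faulty metric: weighted-additive in value and effectiveness,
    # with project cost omitted entirely.
    return 0.5 * v + 0.5 * e

rank_correct = sorted(projects, key=lambda p: -correct_metric(*projects[p]))
rank_faulty = sorted(projects, key=lambda p: -additive_metric(*projects[p]))
# The faulty metric promotes the expensive project A to the top of the list,
# so a limited budget buys far less environmental benefit.
```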
Baseline estimation from simultaneous satellite laser tracking
NASA Technical Reports Server (NTRS)
Dedes, George C.
1987-01-01
Simultaneous Range Differences (SRDs) to Lageos are obtained by dividing the observing stations into pairs with quasi-simultaneous observations. For each of those pairs the station with the least number of observations is identified, and at its observing epochs interpolated ranges for the alternate station are generated. The SRD observables are obtained by subtracting the actually observed laser range of the station having the least number of observations from the interpolated ranges of the alternate station. On the basis of these observables, semidynamic single-baseline solutions were performed. The aim of these solutions is to further develop and implement the SRD method in the real data environment, and to assess its accuracy and its advantages and disadvantages relative to the range dynamic mode methods when the baselines are the only parameters of interest. Baselines, using simultaneous laser range observations to Lageos, were also estimated through the purely geometric method. These baselines formed the standards of comparison in the accuracy assessment of the SRD method against the range dynamic mode methods. On the basis of this comparison it was concluded that, for baselines of regional extent, the SRD method is very effective, efficient, and at least as accurate as the range dynamic mode methods, even with simple orbital modeling and a limited orbit adjustment. The SRD method is insensitive to the inconsistencies affecting the terrestrial reference frame, and simultaneous adjustment of the Earth Rotation Parameters (ERPs) is not necessary.
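The SRD observable construction (interpolate the denser station's ranges to the sparser station's epochs, then difference) can be sketched as follows; linear interpolation and the toy epochs and ranges are simplifying assumptions, not the actual interpolation scheme or Lageos data:

```python
def srd_observables(obs_sparse, obs_dense):
    """Form simultaneous range differences (SRDs) for a station pair.

    obs_sparse: [(epoch, range)] for the station with fewer observations.
    obs_dense:  [(epoch, range)] for the alternate station; its range is
                linearly interpolated to each sparse epoch (a simplification
                of the interpolation actually used).
    """
    srds = []
    for t, r_sparse in obs_sparse:
        # Find the bracketing dense observations and interpolate between them.
        for (t0, r0), (t1, r1) in zip(obs_dense, obs_dense[1:]):
            if t0 <= t <= t1:
                w = (t - t0) / (t1 - t0)
                r_interp = (1 - w) * r0 + w * r1
                srds.append((t, r_interp - r_sparse))
                break
    return srds

# Toy epochs (s) and ranges (km) for the dense and sparse stations.
dense = [(0.0, 6000.0), (10.0, 6020.0), (20.0, 6050.0)]
sparse = [(5.0, 5980.0), (15.0, 6010.0)]
srds = srd_observables(sparse, dense)
```

Because both ranges refer to (nearly) the same satellite position, common orbit errors largely cancel in the difference, which is why the method tolerates simple orbital modeling.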
ERIC Educational Resources Information Center
Environmental Science and Technology, 1976
1976-01-01
Recent national surveys conducted by the Council on Environmental Quality and others uncovered inconsistencies and confusion in the manner in which environmental quality parameters were used and reported. A standard air pollution index, a comparative guide to water quality indicators, and biological monitoring information are being developed. (BT)
Integrated cosmological probes: concordance quantified
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nicola, Andrina; Amara, Adam; Refregier, Alexandre, E-mail: andrina.nicola@phys.ethz.ch, E-mail: adam.amara@phys.ethz.ch, E-mail: alexandre.refregier@phys.ethz.ch
2017-10-01
Assessing the consistency of parameter constraints derived from different cosmological probes is an important way to test the validity of the underlying cosmological model. In an earlier work [1], we computed constraints on cosmological parameters for ΛCDM from an integrated analysis of CMB temperature anisotropies and CMB lensing from Planck, galaxy clustering and weak lensing from SDSS, weak lensing from DES SV, as well as Type Ia supernovae and Hubble parameter measurements. In this work, we extend this analysis and quantify the concordance between the derived constraints and those derived by the Planck Collaboration as well as WMAP9, SPT and ACT. As a measure for consistency, we use the Surprise statistic [2], which is based on the relative entropy. In the framework of a flat ΛCDM cosmological model, we find all data sets to be consistent with one another at a level of less than 1σ. We highlight that the relative entropy is sensitive to inconsistencies in the models that are used in different parts of the analysis. In particular, inconsistent assumptions for the neutrino mass break its invariance on the parameter choice. When consistent model assumptions are used, the data sets considered in this work all agree with each other and ΛCDM, without evidence for tensions.
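The relative entropy underlying the Surprise statistic reduces, for one-dimensional Gaussian posteriors, to a closed form. A minimal sketch of that building block only (the full Surprise statistic, which compares the observed relative entropy to its expected value over many-parameter posteriors, is not implemented here):

```python
import math

def kl_gaussian(mu1, s1, mu2, s2):
    # Relative entropy D( N(mu1, s1^2) || N(mu2, s2^2) ) in nats:
    # how much information updating from the second constraint to the
    # first would carry for a single parameter.
    return math.log(s2 / s1) + (s1**2 + (mu1 - mu2)**2) / (2 * s2**2) - 0.5

# Identical constraints carry zero relative entropy...
d_same = kl_gaussian(0.0, 1.0, 0.0, 1.0)
# ...while a one-sigma shift in the mean carries 0.5 nat.
d_shift = kl_gaussian(1.0, 1.0, 0.0, 1.0)
```

The mean-shift term in the formula is what makes the statistic sensitive to tensions between data sets, and, as the abstract notes, to inconsistent model assumptions entering different parts of the analysis.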
Rosenbaum, Janet E
2009-06-01
Surveys are the primary information source about adolescents' health risk behaviors, but adolescents may not report their behaviors accurately. Survey data are used for formulating adolescent health policy, and inaccurate data can cause mistakes in policy creation and evaluation. The author used test-retest data from the Youth Risk Behavior Survey (United States, 2000) to compare adolescents' responses to 72 questions about their risk behaviors at a 2-week interval. Each question was evaluated for prevalence change and 3 measures of unreliability: inconsistency (retraction and apparent initiation), agreement measured as tetrachoric correlation, and estimated error due to inconsistency assessed with a Bayesian method. Results showed that adolescents report their sex, drug, alcohol, and tobacco histories more consistently than other risk behaviors in a 2-week period, the opposite of their tendency over longer intervals. Compared with other Youth Risk Behavior Survey topics, most sex, drug, alcohol, and tobacco items had stable prevalence estimates, higher average agreement, and lower estimated measurement error. Adolescents reported their weight control behaviors more unreliably than other behaviors, which is particularly problematic because of the increased investment in adolescent obesity research and reliance on annual surveys for surveillance and policy evaluation. Most weight control items had unstable prevalence estimates, lower average agreement, and greater estimated measurement error than other topics.
Accelerating deep neural network training with inconsistent stochastic gradient descent.
Wang, Linnan; Yang, Yi; Min, Renqiang; Chakradhar, Srimat
2017-09-01
Stochastic Gradient Descent (SGD) updates a Convolutional Neural Network (CNN) with a noisy gradient computed from a random batch, and each batch evenly updates the network once in an epoch. This model applies the same training effort to each batch, but it overlooks the fact that the gradient variance, induced by Sampling Bias and Intrinsic Image Difference, renders different training dynamics on batches. In this paper, we develop a new training strategy for SGD, referred to as Inconsistent Stochastic Gradient Descent (ISGD), to address this problem. The core concept of ISGD is inconsistent training, which dynamically adjusts the training effort w.r.t. the loss. ISGD models the training as a stochastic process that gradually reduces the mean of the batch loss, and it utilizes a dynamic upper control limit to identify a large-loss batch on the fly. ISGD stays on the identified batch to accelerate the training with additional gradient updates, and it also has a constraint to penalize drastic parameter changes. ISGD is straightforward, computationally efficient, and requires no auxiliary memory. A series of empirical evaluations on real-world datasets and networks demonstrate the promising performance of inconsistent training. Copyright © 2017 Elsevier Ltd. All rights reserved.
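The control-limit idea can be sketched in a few lines: track a running mean and standard deviation of recent batch losses, flag any batch whose loss exceeds mean plus n sigma, and schedule extra updates for it. This is a simplified illustration of the concept, with made-up losses, not the authors' implementation:

```python
import random

def isgd_schedule(batch_losses, n_sigma=3.0, window=20, extra_updates=1):
    # Sketch of ISGD's control chart: flag batches whose loss exceeds a
    # dynamic upper control limit (running mean + n_sigma * running std)
    # and give them extra gradient updates. Returns updates per batch.
    history, plan = [], []
    for loss in batch_losses:
        if len(history) >= 2:
            recent = history[-window:]
            m = sum(recent) / len(recent)
            var = sum((x - m) ** 2 for x in recent) / len(recent)
            ucl = m + n_sigma * var ** 0.5
            plan.append(1 + (extra_updates if loss > ucl else 0))
        else:
            plan.append(1)  # not enough history for a control limit yet
        history.append(loss)
    return plan

random.seed(1)
# Synthetic loss trace: stable around 1.0, with one under-trained batch.
losses = [1.0 + random.gauss(0, 0.01) for _ in range(50)]
losses[30] = 2.0
updates = isgd_schedule(losses)
```

In a real trainer the flagged batch would receive the extra gradient steps immediately, subject to the constraint on drastic parameter changes the abstract mentions.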
Analyzing crash frequency in freeway tunnels: A correlated random parameters approach.
Hou, Qinzhong; Tarko, Andrew P; Meng, Xianghai
2018-02-01
The majority of past road safety studies focused on open road segments while only a few focused on tunnels. Moreover, the past tunnel studies produced some inconsistent results about the safety effects of the traffic patterns, the tunnel design, and the pavement conditions. The effects of these conditions therefore remain unknown, especially for freeway tunnels in China. The study presented in this paper investigated the safety effects of these various factors utilizing a four-year period (2009-2012) of data as well as three models: 1) a random effects negative binomial model (RENB), 2) an uncorrelated random parameters negative binomial model (URPNB), and 3) a correlated random parameters negative binomial model (CRPNB). Of these three, the results showed that the CRPNB model provided better goodness-of-fit and offered more insights into the factors that contribute to tunnel safety. The CRPNB was not only able to allocate the part of the otherwise unobserved heterogeneity to the individual model parameters but also was able to estimate the cross-correlations between these parameters. Furthermore, the study results showed that traffic volume, tunnel length, proportion of heavy trucks, curvature, and pavement rutting were associated with higher frequencies of traffic crashes, while the distance to the tunnel wall, distance to the adjacent tunnel, distress ratio, International Roughness Index (IRI), and friction coefficient were associated with lower crash frequencies. In addition, the effects of the heterogeneity of the proportion of heavy trucks, the curvature, the rutting depth, and the friction coefficient were identified and their inter-correlations were analyzed. Copyright © 2017 Elsevier Ltd. All rights reserved.
Box-Cox Mixed Logit Model for Travel Behavior Analysis
NASA Astrophysics Data System (ADS)
Orro, Alfonso; Novales, Margarita; Benitez, Francisco G.
2010-09-01
To represent the behavior of travelers deciding how to get to their destination, discrete choice models, based on random utility theory, have become one of the most widely used tools. The field in which these models were developed was halfway between econometrics and transport engineering, although the latter now constitutes one of their principal areas of application. In the transport field, they have mainly been applied to mode choice, but also to the selection of destination, route, and other important decisions such as vehicle ownership. In usual practice, the most frequently employed discrete choice models implement a fixed-coefficient utility function that is linear in the parameters. The principal aim of this paper is to present the viability of specifying utility functions with random coefficients that are nonlinear in the parameters, in applications of discrete choice models to transport. Nonlinear specifications in the parameters were present in discrete choice theory at its outset, although they have seldom been used in practice until recently. The specification of random coefficients, however, began with the probit and the hedonic models in the 1970s, and, after a period of apparently little practical interest, has burgeoned into a field of intense activity in recent years with the new generation of mixed logit models. In this communication, we present a Box-Cox mixed logit model developed by the authors. It includes the estimation of the Box-Cox exponents in addition to the parameters of the random coefficients distribution. The probability of choosing an alternative is an integral that is calculated by simulation. The estimation of the model is carried out by maximizing the simulated log-likelihood of a sample of observed individual choices between alternatives. The differences between the predictions yielded by models that are inconsistent with real behavior have been studied with simulation experiments.
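A minimal sketch of the two ingredients, the Box-Cox transform of an attribute and simulation of the mixed logit choice probability by averaging over random coefficient draws (all attribute values and parameters below are hypothetical, and the real model would estimate them by simulated maximum likelihood):

```python
import math
import random

def box_cox(x, lam):
    # Box-Cox transform of a positive attribute; log in the limit lam -> 0.
    return (x**lam - 1) / lam if abs(lam) > 1e-10 else math.log(x)

def simulated_prob(x_alt, lam, beta_mean, beta_sd, n_draws=2000, seed=0):
    # Mixed logit probability of alternative 0, approximated by averaging
    # standard logit probabilities over draws of the random coefficient.
    rng = random.Random(seed)
    p = 0.0
    for _ in range(n_draws):
        beta = rng.gauss(beta_mean, beta_sd)            # random taste draw
        utils = [beta * box_cox(x, lam) for x in x_alt]
        m = max(utils)                                   # stabilized softmax
        exps = [math.exp(u - m) for u in utils]
        p += exps[0] / sum(exps)
    return p / n_draws

# Two alternatives with hypothetical travel times 10 and 20 minutes and a
# negative (disutility) time coefficient: the faster alternative should win.
p = simulated_prob([10.0, 20.0], lam=0.5, beta_mean=-0.5, beta_sd=0.2)
```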
The Educational Consequences of Teen Childbearing
Kane, Jennifer B.; Morgan, S. Philip; Harris, Kathleen Mullan; Guilkey, David K.
2013-01-01
A huge literature shows that teen mothers face a variety of detriments across the life course, including truncated educational attainment. To what extent is this association causal? The estimated effects of teen motherhood on schooling vary widely, ranging from no discernible difference to 2.6 fewer years among teen mothers. The magnitude of educational consequences is therefore uncertain, despite voluminous policy and prevention efforts that rest on the assumption of a negative and presumably causal effect. This study adjudicates between two potential sources of inconsistency in the literature—methodological differences or cohort differences—by using a single, high-quality data source: namely, The National Longitudinal Study of Adolescent Health. We replicate analyses across four different statistical strategies: ordinary least squares regression; propensity score matching; and parametric and semiparametric maximum likelihood estimation. Results demonstrate educational consequences of teen childbearing, with estimated effects between 0.7 and 1.9 fewer years of schooling among teen mothers. We select our preferred estimate (0.7), derived from semiparametric maximum likelihood estimation, on the basis of weighing the strengths and limitations of each approach. Based on the range of estimated effects observed in our study, we speculate that variable statistical methods are the likely source of inconsistency in the past. We conclude by discussing implications for future research and policy, and recommend that future studies employ a similar multimethod approach to evaluate findings. PMID:24078155
Rainfall recharge estimation on a nation-wide scale using satellite information in New Zealand
NASA Astrophysics Data System (ADS)
Westerhoff, Rogier; White, Paul; Moore, Catherine
2015-04-01
Models of rainfall recharge to groundwater are challenged by the need to combine uncertain estimates of rainfall, evapotranspiration, terrain slope, and unsaturated zone parameters (e.g., soil drainage and hydraulic conductivity of the subsurface). Therefore, rainfall recharge is easiest to estimate on a local scale in well-drained plains, where it is known that rainfall directly recharges groundwater. In New Zealand, this simplified approach works in the policy framework of regional councils, who manage water allocation at the aquifer and sub-catchment scales. However, a consistent overview of rainfall recharge is difficult to obtain at catchment and national scale: in addition to data uncertainties, data formats are inconsistent between catchments; the density of ground observations, where these exist, differs across regions; each region typically uses different local models for estimating recharge components; and different methods and ground observations are used for calibration and validation of these models. The research described in this paper therefore presents a nation-wide approach to estimate rainfall recharge in New Zealand. The method used is a soil water balance approach, with input data from national rainfall and soil and geology databases. Satellite data (i.e., evapotranspiration, soil moisture, and terrain) aid in the improved calculation of rainfall recharge, especially in data-sparse areas. A first version of the model has been implemented on a 1 km x 1 km and monthly scale between 2000 and 2013. A further version will include a quantification of recharge estimate uncertainty: with both "top down" input error propagation methods and catchment-wide "bottom up" assessments of integrated uncertainty being adopted. Using one nation-wide methodology opens up new possibilities: it can, for example, help in more consistent estimation of water budgets, groundwater fluxes, or other hydrological parameters. 
Since recharge is estimated for the entire land surface, and not only the known aquifers, the model also identifies other zones that could potentially recharge aquifers, including large areas (e.g., mountains) that are currently regarded as impervious. The resulting rainfall recharge data have also been downscaled in a 200 m x 200 m calculation of a national monthly water table. This will lead to better estimation of hydraulic conductivity, which holds considerable potential for further research in unconfined aquifers in New Zealand.
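A generic single-store soil water balance step of the kind used for recharge estimation can be sketched as follows; this is an illustrative simplification with made-up numbers, not the specific New Zealand model:

```python
def monthly_recharge(rain, pet, storage, capacity):
    """One step of a simple soil water balance: recharge occurs only when
    soil storage would exceed its capacity.

    rain, pet : monthly rainfall and potential evapotranspiration (mm)
    storage   : soil moisture at the start of the month (mm)
    capacity  : plant-available water capacity of the soil (mm)
    Returns (recharge, end-of-month storage).
    """
    et = min(pet, storage + rain)          # actual ET limited by available water
    storage = storage + rain - et
    recharge = max(0.0, storage - capacity)
    return recharge, storage - recharge

# A wet month on a nearly full soil drains excess water to groundwater...
r_wet, s_wet = monthly_recharge(rain=150.0, pet=40.0, storage=90.0, capacity=100.0)
# ...while a dry month produces no recharge at all.
r_dry, s_dry = monthly_recharge(rain=20.0, pet=60.0, storage=50.0, capacity=100.0)
```

The nation-wide model replaces the made-up inputs with gridded rainfall, satellite evapotranspiration and soil moisture, and mapped soil capacities, run per 1 km cell and month.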
ABALUCK, JASON
2017-01-01
We explore the in- and out-of-sample robustness of tests for choice inconsistencies based on parameter restrictions in parametric models, focusing on tests proposed by Ketcham, Kuminoff and Powers (KKP). We argue that their nonparametric alternatives are inherently conservative with respect to detecting mistakes. We then show that our parametric model is robust to KKP's suggested specification checks, and that comprehensive goodness-of-fit measures perform better with our model than with the expected utility model. Finally, we explore the robustness of our 2011 results to alternative normative assumptions, highlighting the role of brand fixed effects and unobservable characteristics. PMID:29170561
Hu, Shiang; Yao, Dezhong; Valdes-Sosa, Pedro A.
2018-01-01
The choice of reference for the electroencephalogram (EEG) is a long-standing unsolved issue resulting in inconsistent usage and endless debate. Currently, the average reference (AR) and the reference electrode standardization technique (REST) are the two primary, apparently irreconcilable contenders. We propose a theoretical framework to resolve this reference issue by formulating both (a) estimation of potentials at infinity, and (b) determination of the reference, as a unified Bayesian linear inverse problem, which can be solved by maximum a posteriori estimation. We find that AR and REST are very particular cases of this unified framework: AR results from a biophysically non-informative prior, while REST utilizes a prior based on the EEG generative model. To allow for simultaneous denoising and reference estimation, we develop regularized versions of AR and REST, named rAR and rREST, respectively. Both depend on a regularization parameter that is the noise-to-signal variance ratio. Traditional and new estimators are evaluated within this framework, by both simulations and analysis of real resting EEGs. Toward this end, we leverage the MRI and EEG data from 89 subjects who participated in the Cuban Human Brain Mapping Project. Artificial EEGs generated with a known ground truth show that the relative error in estimating the EEG potentials at infinity is lowest for rREST. The simulations also reveal that realistic volume conductor models improve the performance of REST and rREST. Importantly, for practical applications, it is shown that an average lead field gives results comparable to the individual lead field. Finally, it is shown that selecting the regularization parameter with Generalized Cross-Validation (GCV) is close to the "oracle" choice based on the ground truth. When evaluated with the real 89 resting-state EEGs, rREST consistently yields the lowest GCV. 
This study provides a novel perspective to the EEG reference problem by means of a unified inverse solution framework. It may allow additional principled theoretical formulations and numerical evaluation of performance. PMID:29780302
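The AR side of the comparison is easy to make concrete: re-referencing to the average reference simply removes the cross-channel mean at each time sample. REST, by contrast, maps the data through a head-model-based operator and is not sketched here. A minimal sketch with a toy three-channel recording:

```python
def average_reference(eeg):
    # Re-reference EEG to the average reference (AR): subtract the mean
    # across channels at each time point, so the channels sum to zero.
    n_ch = len(eeg)
    out = []
    for ch in eeg:
        out.append([v - sum(col) / n_ch
                    for v, col in zip(ch, zip(*eeg))])
    return out

# Three channels, two time samples, in an arbitrary common reference.
eeg = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
rereferenced = average_reference(eeg)
```

In the paper's framework this operation corresponds to the biophysically non-informative prior; the regularized variants additionally shrink the estimate according to the noise-to-signal variance ratio.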
McKinney, Christy M; Harris, T Robert; Caetano, Raul
2009-01-01
Little is known about the reliability of self-reported child physical abuse (CPA) or CPA reporting practices. We estimated reliability and prevalence of self-reported CPA and identified factors predictive of inconsistent CPA reporting among 2,256 participants using surveys administered in 1995 and 2000. Reliability of CPA was fair to moderate (kappa = 0.41). Using a positive report from either survey, the prevalence of moderate (61.8%) and severe (12.0%) CPA was higher than at either survey alone. Compared to consistent reporters of having experienced CPA, inconsistent reporters were less likely to be > or = 30 years old (vs. 18-29) or Black (vs. White) and more likely to have < 12 years of education (vs. 12), have no alcohol-related problems (vs. having problems), or report one type (vs. > or = 2) of CPA. These findings may assist researchers conducting and interpreting studies of CPA.
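Cohen's kappa, the reliability measure quoted above, corrects observed test-retest agreement for the agreement expected by chance. A sketch with hypothetical counts chosen so kappa comes out near the reported 0.41 (these are not the study's actual data):

```python
def cohens_kappa(table):
    # Cohen's kappa for a 2x2 test-retest table [[a, b], [c, d]]:
    # (observed agreement - chance agreement) / (1 - chance agreement).
    a, b = table[0]
    c, d = table[1]
    n = a + b + c + d
    po = (a + d) / n
    pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2
    return (po - pe) / (1 - pe)

# Hypothetical counts of reporting CPA at both surveys (rows: survey 1
# yes/no; columns: survey 2 yes/no), chosen to yield kappa near 0.41.
kappa = cohens_kappa([[260, 240], [190, 1310]])
```

Values in the 0.41 range are conventionally read as fair-to-moderate agreement, consistent with the abstract's characterization.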
NASA Technical Reports Server (NTRS)
Fields, J. M.
1984-01-01
Even though there are surveys in which annoyance decreases as the number of events increases above about 150 a day, the available evidence is not considered strong enough to reject the conventional assumption that reactions are related to the logarithm of the number of events. The data do not make it possible to reject the conventional assumption that the effects of the number of events and the peak noise level are additive. It is found that even when equivalent questionnaire items and definitions of noise events could be used, differences between the surveys' estimates of the effect of the number of events remained large. Three explanations are suggested for inconsistent estimates. The first has to do with errors in specifying the values of noise parameters, the second with the effects of unmeasured acoustical and area characteristics that are correlated with noise level or number, and the third with large sampling errors deriving from community differences in response to noise. It is concluded that significant advances in the knowledge about the effects of the number of noise events can be made only if surveys include large numbers of study areas.
Sampling in ecology and evolution - bridging the gap between theory and practice
Albert, C.H.; Yoccoz, N.G.; Edwards, T.C.; Graham, C.H.; Zimmermann, N.E.; Thuiller, W.
2010-01-01
Sampling is a key issue for answering most ecological and evolutionary questions. The importance of developing a rigorous sampling design tailored to specific questions has already been discussed in the ecological and sampling literature and has provided useful tools and recommendations to sample and analyse ecological data. However, sampling issues are often difficult to overcome in ecological studies due to apparent inconsistencies between theory and practice, often leading to the implementation of simplified sampling designs that suffer from unknown biases. Moreover, we believe that classical sampling principles which are based on estimation of means and variances are insufficient to fully address many ecological questions that rely on estimating relationships between a response and a set of predictor variables over time and space. Our objective is thus to highlight the importance of selecting an appropriate sampling space and an appropriate sampling design. We also emphasize the importance of using prior knowledge of the study system to estimate models or complex parameters and thus better understand ecological patterns and processes generating these patterns. Using a semi-virtual simulation study as an illustration we reveal how the selection of the space (e.g. geographic, climatic), in which the sampling is designed, influences the patterns that can be ultimately detected. We also demonstrate the inefficiency of common sampling designs to reveal response curves between ecological variables and climatic gradients. Further, we show that response-surface methodology, which has rarely been used in ecology, is much more efficient than more traditional methods. Finally, we discuss the use of prior knowledge, simulation studies and model-based designs in defining appropriate sampling designs. 
We conclude with a call for the development of methods to estimate nonlinear, ecologically relevant parameters without bias, in order to make inferences while fulfilling the requirements of both sampling theory and field-work logistics. © 2010 The Authors.
Jung, Youngkyoo; Samsonov, Alexey A; Bydder, Mark; Block, Walter F
2011-04-01
To remove phase inconsistencies between multiple echoes, an algorithm using a radial acquisition to provide inherent phase and magnitude information for self-correction was developed. The information also allows simultaneous support for parallel imaging in multiple-coil acquisitions. Without a separate field map acquisition, a phase estimate was generated from each echo in a multiple-echo train. When using a multiple-channel coil, magnitude and phase estimates from each echo provide in vivo coil sensitivities. An algorithm based on the conjugate gradient method uses these estimates to remove phase inconsistencies between echoes and, in the case of multiple-coil acquisition, simultaneously provides parallel imaging benefits. The algorithm is demonstrated on single-channel, multiple-channel, and undersampled data. Substantial image quality improvements were demonstrated. Signal dropouts were completely removed and undersampling artifacts were well suppressed. The suggested algorithm is able to remove phase cancellation and undersampling artifacts simultaneously and to improve the image quality of multiecho radial imaging, an important technique for fast three-dimensional MRI data acquisition. Copyright © 2011 Wiley-Liss, Inc.
Observed and Projected Changes to the Precipitation Annual Cycle
Marvel, Kate; Biasutti, Michela; Bonfils, Celine; ...
2017-06-08
Anthropogenic climate change is predicted to cause spatial and temporal shifts in precipitation patterns. These may be apparent in changes to the annual cycle of zonal mean precipitation P. Trends in the amplitude and phase of the P annual cycle in two long-term, global satellite datasets are broadly similar. Model-derived fingerprints of externally forced changes to the amplitude and phase of the P seasonal cycle, combined with these observations, enable a formal detection and attribution analysis. Observed amplitude changes are inconsistent with model estimates of internal variability but not attributable to the model-predicted response to external forcing. This mismatch between observed and predicted amplitude changes is consistent with the sustained La Niña–like conditions that characterize the recent slowdown in the rise of the global mean temperature. However, observed changes to the annual cycle phase do not seem to be driven by this recent hiatus. Furthermore these changes are consistent with model estimates of forced changes, are inconsistent (in one observational dataset) with estimates of internal variability, and may suggest the emergence of an externally forced signal.
Kydd, Robyn M.; Connor, Jennie
2015-01-01
Aims: To describe inconsistencies in reporting past-year drinking status and heavy drinking occasions (HDOs) on single questions from two different instruments, and to identify associated characteristics and impacts. Methods: We compared computer-presented Alcohol Use Disorder Identification Test-Consumption (AUDIT-C) with categorical response options, and mental health interview (MHI) with open-ended consumption questions, completed on the same day. Participants were 464 men and 459 women aged 38 (91.7% of surviving birth cohort members). Differences in dichotomous single-item measures of abstention and HDO frequency, associations of inconsistent reporting with sex, socioeconomic status (SES) and survey order, and impacts of instrument choice on associations of alcohol with sex and SES were examined. Results: The AUDIT-C drinking frequency question estimated higher past-year abstention prevalence (AUDIT = 7.6%, MHI = 5.4%), with one-third of AUDIT-C abstainers being MHI drinkers. Only AUDIT-C produced significant sex differences in abstainer prevalence. Inconsistencies in HDO classifications were bidirectional, but with fewer HDOs reported on the MHI than AUDIT-C question. Lower SES was associated with inconsistency in abstention and weekly+ HDOs. Abstention and higher HDO frequency were associated with lower SES overall, but sex-specific associations differed by instrument. Conclusions: In this context, data collection method affected findings, with inconsistencies in abstention reports having most impact. Future studies should: (a) confirm self-reported abstention; (b) consider piloting data collection methods in target populations; (c) expect impacts of sex and SES on measurements and analyses. PMID:25648932
Kosmulski, Marek
2012-01-01
The numerical values of points of zero charge (PZC, obtained by potentiometric titration) and of isoelectric points (IEP) of various materials reported in the literature have been analyzed. In sets of results reported for the same chemical compound (corresponding to a certain chemical formula and crystallographic structure), the IEP are relatively consistent. In contrast, in materials other than metal oxides, the sets of PZC are inconsistent. In view of the inconsistency in the sets of PZC and of the discrepancies between PZC and IEP reported for the same material, it seems that the IEP is more suitable than the PZC as the single number characterizing the pH-dependent surface charging of materials other than metal oxides. The present approach is opposite to the usual one, in which the PZC and IEP are considered two equally important parameters characterizing the pH-dependent surface charging of materials other than metal oxides. Copyright © 2012 Elsevier B.V. All rights reserved.
The etiology of social aggression: a nuclear twin family study.
Slawinski, Brooke L; Klump, Kelly L; Burt, S Alexandra
2018-04-02
Social aggression is a form of antisocial behavior in which social relationships and social status are used to damage reputations and inflict emotional harm on others. Despite extensive research examining the prevalence and consequences of social aggression, only a few studies have examined its genetic-environmental etiology, with markedly inconsistent results. We estimated the etiology of social aggression using the nuclear twin family (NTF) model. Maternal-report, paternal-report, and teacher-report data were collected for twin social aggression (N = 1030 pairs). We also examined the data using the classical twin (CT) model to evaluate whether its strict assumptions may have biased previous heritability estimates. The best-fitting NTF model for all informants was the ASFE model, indicating that additive genetic, sibling environmental, familial environmental, and non-shared environmental influences significantly contribute to the etiology of social aggression in middle childhood. However, the best-fitting CT model varied across informants, ranging from AE and ACE to CE. Specific heritability estimates for both NTF and CT models also varied across informants such that teacher reports indicated greater genetic influences and father reports indicated greater shared environmental influences. Although the specific NTF parameter estimates varied across informants, social aggression generally emerged as largely additive genetic (A = 0.15-0.77) and sibling environmental (S = 0.42-0.72) in origin. Such findings not only highlight an important role for individual genetic risk in the etiology of social aggression, but also raise important questions regarding the role of the environment.
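For intuition about the classical twin (CT) model mentioned above, the textbook Falconer decomposition derives additive genetic (A), shared environmental (C) and non-shared environmental (E) components from MZ and DZ twin correlations; the nuclear twin family model relaxes the strict assumptions behind this arithmetic. A minimal sketch with illustrative correlations (not values from the study):

```python
def falconer_estimates(r_mz, r_dz):
    """Classical-twin decomposition from MZ and DZ twin correlations:
    A (additive genetic), C (shared environment), E (non-shared environment).
    Assumes equal environments and no dominance, the assumptions the
    nuclear twin family model is designed to relax."""
    a2 = 2 * (r_mz - r_dz)   # MZ twins share all genes, DZ share half
    c2 = 2 * r_dz - r_mz     # remainder of the MZ correlation
    e2 = 1 - r_mz            # whatever makes MZ twins differ
    return a2, c2, e2

# Illustrative twin correlations (not from the study):
a2, c2, e2 = falconer_estimates(r_mz=0.70, r_dz=0.45)
```

The three components sum to 1 by construction, which is why biased input correlations (e.g. from a single informant) shift all three estimates at once.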
Assessment of Students with Emotional and Behavioral Disorders
ERIC Educational Resources Information Center
Plotts, Cynthia A.
2012-01-01
Assessment and identification of children with emotional and behavioral disorders (EBD) is complex and involves multiple techniques, levels, and participants. While federal law sets the general parameters for identification in school settings, these criteria are vague and may lead to inconsistencies in selection and interpretation of assessment…
Evolving Choice Inconsistencies in Choice of Prescription Drug Insurance
ABALUCK, JASON
2017-01-01
We study choice over prescription insurance plans by the elderly using government administrative data to evaluate how these choices evolve over time. We find large “foregone savings” from not choosing the lowest cost plan that has grown over time. We develop a structural framework to decompose the changes in “foregone welfare” from inconsistent choices into choice set changes and choice function changes from a fixed choice set. We find that foregone welfare increases over time due primarily to changes in plan characteristics such as premiums and out-of-pocket costs; we estimate little learning at either the individual or cohort level. PMID:29104294
Multi-model ensemble estimation of volume transport through the straits of the East/Japan Sea
NASA Astrophysics Data System (ADS)
Han, Sooyeon; Hirose, Naoki; Usui, Norihisa; Miyazawa, Yasumasa
2016-01-01
The volume transports measured at the Korea/Tsushima, Tsugaru, and Soya/La Perouse Straits remain quantitatively inconsistent. However, data assimilation models at least provide a self-consistent budget, despite subtle differences among the models. This study examined the seasonal variation of the volume transport using multiple linear regression and ridge regression multi-model ensemble (MME) methods, applied to four different data assimilation models, to estimate transport at these straits more accurately. The MME outperformed all of the single models by reducing uncertainties, with the ridge regression in particular mitigating the multicollinearity problem. However, the regression constants turned out to be inconsistent with each other when the MME was applied separately for each strait. An MME for the connected system was thus performed to find common constants for these straits. The resulting estimate was similar to the MME result for sea level difference (SLD). The estimated mean transport (2.43 Sv) was smaller than the measured value at the Korea/Tsushima Strait, but the calibrated transport through the Tsugaru Strait (1.63 Sv) was larger than observed. The MME results for transport and SLD also suggested that the standard deviation (STD) at the Korea/Tsushima Strait is larger than the STD of the observations, whereas the estimates were almost identical to observations for the Tsugaru and Soya/La Perouse Straits. The similarity between the MME results enhances the reliability of the present MME estimation.
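A minimal sketch of the ridge-regression flavour of multi-model ensembling: nearly collinear model estimates are combined with weights shrunk by a penalty alpha, which stabilizes the solution where ordinary least squares would be ill-conditioned. All numbers are toy values, not the study's transports:

```python
def ridge_weights(X, y, alpha):
    """Closed-form ridge regression (X^T X + alpha*I)^-1 X^T y for a
    two-predictor design, solved with the explicit 2x2 matrix inverse."""
    a = sum(x[0] * x[0] for x in X) + alpha   # X^T X diagonal + penalty
    b = sum(x[0] * x[1] for x in X)           # off-diagonal term
    d = sum(x[1] * x[1] for x in X) + alpha
    v0 = sum(x[0] * yi for x, yi in zip(X, y))  # X^T y
    v1 = sum(x[1] * yi for x, yi in zip(X, y))
    det = a * d - b * b
    return ((d * v0 - b * v1) / det, (a * v1 - b * v0) / det)

# Two nearly collinear model estimates of monthly transport (toy numbers):
model1 = [2.1, 2.4, 2.8, 2.6]
model2 = [2.0, 2.5, 2.9, 2.5]
obs    = [2.2, 2.5, 2.7, 2.6]
X = list(zip(model1, model2))
w = ridge_weights(X, obs, alpha=0.1)
ensemble = [w[0] * m1 + w[1] * m2 for m1, m2 in zip(model1, model2)]
```

Without the alpha term the near-collinearity of the two models would make the weights large and unstable; the penalty trades a little bias for a usable ensemble.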
NASA Astrophysics Data System (ADS)
Hashim, S.; Karim, M. K. A.; Bakar, K. A.; Sabarudin, A.; Chin, A. W.; Saripan, M. I.; Bradley, D. A.
2016-09-01
The magnitude of radiation dose in computed tomography (CT) depends on the scan acquisition parameters, investigated herein using an anthropomorphic phantom (RANDO®) and thermoluminescence dosimeters (TLD). Specific interest was in the organ doses resulting from CT thorax examinations; the specific k coefficient for effective dose estimation was also determined for particular protocols. For measurement of doses to five main organs (thyroid, lung, liver, esophagus and skin), TLD-100 (LiF:Mg,Ti) chips were inserted into selected holes in a phantom slab. Five CT thorax protocols were investigated: one routine (R1) and four modified protocols (R2 to R5). Organ doses, ranked from greatest to least, were found to lie in the order thyroid>skin>lung>liver>breast. The greatest dose, 25 mGy to the thyroid, occurred with R1, while the lowest, 8.8 mGy, was in breast tissue using R3. Effective dose (E) was estimated using three standard methods: the International Commission on Radiological Protection (ICRP)-103 recommendation (E103), the computational phantom CT-EXPO method (E(CTEXPO)), and the dose-length product (DLP) based approach. E103 k factors were constant for all protocols, 8% less than the universal k factor. Due to inconsistency in tube potential and pitch factor, the k factors from CT-EXPO were found to vary between 0.015 and 0.010 for protocols R3 and R5. With considerable variation between scan acquisition parameters and organ doses, optimization of practice is necessary in order to reduce patient organ dose.
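The DLP-based approach mentioned above reduces to a single multiplication, E = k × DLP. The sketch below uses the widely quoted adult-chest coefficient k = 0.014 mSv/(mGy·cm) and an assumed DLP; the abstract itself reports protocol-specific k values in the range 0.010-0.015:

```python
def effective_dose_from_dlp(dlp_mgy_cm, k=0.014):
    """DLP-based effective dose estimate: E = k * DLP.
    k = 0.014 mSv/(mGy*cm) is the widely quoted conversion coefficient
    for an adult chest examination; protocol-specific values differ."""
    return k * dlp_mgy_cm

# An assumed chest CT with DLP = 400 mGy*cm:
e = effective_dose_from_dlp(400)               # 5.6 mSv with the generic k
e_low = effective_dose_from_dlp(400, k=0.010)  # lower protocol-specific k
```

The spread between `e` and `e_low` illustrates why the abstract treats the choice of k factor, and hence of tube potential and pitch, as an optimization problem rather than a fixed conversion.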
Sou, Julie; Shannon, Kate; Li, Jane; Nguyen, Paul; Strathdee, Steffanie; Shoveller, Jean; Goldenberg, Shira M.
2015-01-01
Background Migrant women in sex work experience unique risks and protective factors related to their sexual health. Given the dearth of knowledge in high-income countries, we explored factors associated with inconsistent condom use by clients among migrant female sex workers over time in Vancouver, BC. Methods Questionnaire and HIV/STI testing data from a longitudinal cohort, AESHA, were collected from 2010–2013. Logistic regression using generalized estimating equations (GEE) was used to model correlates of inconsistent condom use by clients among international migrant sex workers over a 3-year study period. Results Of 685 participants, analyses were restricted to 182 (27%) international migrants who primarily originated from China. In multivariate GEE analyses, difficulty accessing condoms (Adjusted Odds Ratio (AOR) 3.76, 95% Confidence Interval (CI) 1.13–12.47) independently correlated with increased odds of inconsistent condom use by clients. Servicing clients in indoor sex work establishments (e.g., massage parlours) (AOR 0.34, 95% CI 0.15–0.77), and high school attainment (AOR 0.22, 95% CI 0.09–0.50) had independent protective effects on the odds of inconsistent condom use by clients. Conclusions Findings of this longitudinal study highlight the persistent challenges faced by migrant sex workers in terms of accessing and using condoms. Migrant sex workers who experienced difficulty in accessing condoms were more than three times as likely to report inconsistent condom use by clients. Laws, policies and programs promoting access to safer, decriminalized indoor work environments remain urgently needed to promote health, safety and human rights for migrant workers in the sex industry. PMID:25970307
NASA Astrophysics Data System (ADS)
Li, M.; Huang, X.; Li, J.; Song, Y.
2012-03-01
Because of their high emission rate and reactivity, biogenic volatile organic compounds (BVOCs) play a significant role in terrestrial ecosystems, human health, secondary pollution, global climate change and the global carbon cycle. Past estimations of BVOC emissions in China were based on outdated algorithms and coarsely resolved meteorological data, and there have been significant inconsistencies between the land surface parameters of dynamic models and those of BVOC estimation models, leading to large inaccuracies in the estimated results. To refine BVOC emission estimations for China and to further explore the role of BVOCs in the atmosphere, we used the latest algorithms of MEGAN (Model of Emissions of Gases and Aerosols from Nature), with MM5 (the Fifth-Generation Mesoscale Model) providing highly resolved meteorological data, to estimate the biogenic emissions of isoprene (C5H8) and seven monoterpene species (C10H16) in 2006. Real-time MODIS (Moderate Resolution Imaging Spectroradiometer) data were introduced to update the land surface parameters, to improve the simulation performance of MM5, and to determine the influence of leaf area index (LAI) and leaf age deviation from standard conditions. In this study, the annual BVOC emissions for the whole country totaled 12.97 Tg C, a value comparable with past studies. The most important individual contributor was isoprene (9.36 Tg C yr-1), followed by α-pinene (1.24 Tg C yr-1) and β-pinene (0.84 Tg C yr-1). Due to the considerable regional disparity in plant distributions and meteorological conditions across China, BVOC emissions presented significant spatial and temporal variations. Spatially, isoprene emission was concentrated in South China, which is covered by large areas of broadleaf forests and shrubs, while Southeast China was the top-ranking contributor of monoterpenes; there the dominant vegetation consists of evergreen coniferous forests.
Temporally, BVOC emissions primarily occurred in July and August, with daily emissions peaking at about 13:00∼14:00 h (Beijing Time, BJT). In this study, we present an improved estimation of BVOC emissions, which provides important information for further exploration of the role of BVOCs in atmospheric processes.
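MEGAN-type estimates scale a standard emission factor by activity factors for temperature, light, LAI and leaf age. As a hedged illustration of just the temperature term for light-independent monoterpene emissions (a Guenther-type exponential response; beta and the standard temperature below are the commonly quoted defaults, not values taken from this study):

```python
import math

def monoterpene_gamma_t(t_kelvin, beta=0.09, t_s=303.15):
    """Simplified temperature activity factor for light-independent
    monoterpene emissions (Guenther-type exponential response):
    gamma = exp(beta * (T - Ts)), beta ~ 0.09 K^-1, Ts = standard temp."""
    return math.exp(beta * (t_kelvin - t_s))

def emission_rate(ef_standard, t_kelvin):
    """Emission = standard emission factor * temperature activity factor."""
    return ef_standard * monoterpene_gamma_t(t_kelvin)

# Emission roughly doubles for ~8 K of warming above standard conditions,
# which is why emissions peak in July-August and in early afternoon:
ratio = emission_rate(1.0, 311.15) / emission_rate(1.0, 303.15)
```

The exponential temperature dependence, multiplied by light and leaf-phenology factors, is what produces the strong seasonal and diurnal cycles described in the abstract.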
Subashi, Ergys; Choudhury, Kingshuk R; Johnson, G Allan
2014-03-01
The pharmacokinetic parameters derived from dynamic contrast-enhanced (DCE) MRI have been used in more than 100 phase I trials and investigator-led studies. A comparison of the absolute values of these quantities requires an estimation of their respective probability distribution function (PDF). The statistical variation of the DCE-MRI measurement is analyzed by considering the fundamental sources of error in the MR signal intensity acquired with the spoiled gradient-echo (SPGR) pulse sequence. The variance in the SPGR signal intensity arises from quadrature detection and excitation flip angle inconsistency. The noise power was measured in 11 phantoms of contrast agent concentration in the range [0-1] mM (in steps of 0.1 mM) and in one in vivo acquisition of a tumor-bearing mouse. The distribution of the flip angle was determined in a uniform 10 mM CuSO4 phantom using the spin-echo double angle method. The PDF of a wide range of T1 values measured with the variable flip angle (VFA) technique was estimated through numerical simulations of the SPGR equation. The resultant uncertainty in contrast agent concentration was incorporated in the most common model of tracer exchange kinetics, and the PDF of the derived pharmacokinetic parameters was studied numerically. The VFA method is an unbiased technique for measuring T1 only in the absence of bias in the excitation flip angle. The time-dependent concentration of the contrast agent measured in vivo is within the theoretically predicted uncertainty. The uncertainty in measuring K(trans) with SPGR pulse sequences is of the same order as, but always higher than, the uncertainty in measuring the pre-injection longitudinal relaxation time (T10). The lowest achievable bias/uncertainty in estimating this parameter is approximately 20%-70% higher than the bias/uncertainty in the measurement of the pre-injection T1 map.
The fractional volume parameters derived from the extended Tofts model were found to be extremely sensitive to the variance in signal intensity. The SNR of the pre-injection T1 map indicates the limiting precision with which K(trans) can be calculated. Current small-animal imaging systems and pulse sequences robust to motion artifacts have the capacity for reproducible quantitative acquisitions with DCE-MRI. In these circumstances, it is feasible to achieve a level of precision limited only by physiologic variability.
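The VFA T1 measurement analyzed above rests on the SPGR signal equation, which becomes linear in E1 = exp(-TR/T1) after dividing the signal by sin and tan of the flip angle; any flip angle bias therefore propagates directly into T1. A minimal noiseless round-trip sketch (the two flip angles, T1 and TR are illustrative choices, not the study's protocol):

```python
import math

def spgr_signal(m0, t1_ms, tr_ms, alpha_deg):
    """Spoiled gradient-echo signal equation (ignoring T2* decay)."""
    e1 = math.exp(-tr_ms / t1_ms)
    a = math.radians(alpha_deg)
    return m0 * math.sin(a) * (1 - e1) / (1 - e1 * math.cos(a))

def t1_from_vfa(s1, s2, a1_deg, a2_deg, tr_ms):
    """Two-point variable flip angle T1 estimate. Uses the linearization
    S/sin(a) = E1 * (S/tan(a)) + M0*(1 - E1), so the slope between the
    two points gives E1 = exp(-TR/T1)."""
    a1, a2 = math.radians(a1_deg), math.radians(a2_deg)
    y1, y2 = s1 / math.sin(a1), s2 / math.sin(a2)
    x1, x2 = s1 / math.tan(a1), s2 / math.tan(a2)
    e1 = (y2 - y1) / (x2 - x1)
    return -tr_ms / math.log(e1)

# Noiseless round trip with T1 = 1000 ms, TR = 10 ms:
s_low = spgr_signal(1.0, 1000.0, 10.0, 3.0)
s_high = spgr_signal(1.0, 1000.0, 10.0, 15.0)
t1_est = t1_from_vfa(s_low, s_high, 3.0, 15.0, 10.0)
```

With noiseless signals and the true flip angles the round trip is exact; perturbing the angles used in `t1_from_vfa` relative to those used to simulate the signals reproduces the bias the abstract describes.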
Primer Stepper Motor Nomenclature, Definition, Performance and Recommended Test Methods
NASA Technical Reports Server (NTRS)
Starin, Scott; Shea, Cutter
2014-01-01
There has been an unfortunate lack of standardization of the terms and components of stepper motor performance, requirements definition, application of torque margin and implementation of test methods. This paper will address these inconsistencies and discuss in detail the implications of performance parameters, effects of load inertia, control electronics, operational resonances and recommended test methods. Additionally, this paper will recommend parameters for defining and specifying stepper motor actuators. A useful description of terms as well as consolidated equations and recommended requirements is included.
The effects of short- and long-term air pollutants on plant phenology and leaf characteristics.
Jochner, Susanne; Markevych, Iana; Beck, Isabelle; Traidl-Hoffmann, Claudia; Heinrich, Joachim; Menzel, Annette
2015-11-01
Pollution adversely affects vegetation; however, its impact on phenology and leaf morphology is not satisfactorily understood yet. We analyzed associations between pollutants and phenological data of birch, hazel and horse chestnut in Munich (2010) along with the suitability of leaf morphological parameters of birch for monitoring air pollution using two datasets: cumulated atmospheric concentrations of nitrogen dioxide and ozone derived from passive sampling (short-term exposure) and pollutant information derived from Land Use Regression models (long-term exposure). Partial correlations and stepwise regressions revealed that increased ozone (birch, horse chestnut), NO2, NOx and PM levels (hazel) were significantly related to delays in phenology. Correlations were especially high when rural sites were excluded suggesting a better estimation of long-term within-city pollution. In situ measurements of foliar characteristics of birch were not suitable for bio-monitoring pollution. Inconsistencies between long- and short-term exposure effects suggest some caution when interpreting short-term data collected within field studies. Copyright © 2015 Elsevier Ltd. All rights reserved.
Scaling in sensitivity analysis
Link, W.A.; Doherty, P.F.
2002-01-01
Population matrix models allow sets of demographic parameters to be summarized by a single value λ, the finite rate of population increase. The consequences of change in individual demographic parameters are naturally measured by the corresponding changes in λ; sensitivity analyses compare demographic parameters on the basis of these changes. These comparisons are complicated by issues of scale. Elasticity analysis attempts to deal with issues of scale by comparing the effects of proportional changes in demographic parameters, but leads to inconsistencies in evaluating demographic rates. We discuss this and other problems of scaling in sensitivity analysis, and suggest a simple criterion for choosing appropriate scales. We apply our suggestions to data for the killer whale, Orcinus orca.
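A minimal sketch of the quantities involved: λ is the dominant eigenvalue of the projection matrix, a sensitivity is the derivative of λ with respect to a matrix entry, and an elasticity rescales that derivative to proportional changes. The 2x2 matrix below is a toy example, not the killer whale data:

```python
def dominant_eigenvalue(m, iters=200):
    """Finite rate of increase (lambda): dominant eigenvalue of a
    non-negative projection matrix, via power iteration."""
    v = [1.0] * len(m)
    lam = 0.0
    for _ in range(iters):
        w = [sum(m[i][j] * v[j] for j in range(len(v))) for i in range(len(v))]
        lam = max(abs(x) for x in w)     # normalization factor -> lambda
        v = [x / lam for x in w]
    return lam

def elasticity(m, i, j, delta=1e-6):
    """Proportional change in lambda per proportional change in m[i][j],
    computed by numerical perturbation of the matrix entry."""
    lam = dominant_eigenvalue(m)
    pert = [row[:] for row in m]
    pert[i][j] += delta
    sens = (dominant_eigenvalue(pert) - lam) / delta   # sensitivity
    return (m[i][j] / lam) * sens

# Toy 2-stage matrix: row 0 = fecundity, row 1 = survival/transition rates
m = [[0.0, 1.5],
     [0.5, 0.8]]
lam = dominant_eigenvalue(m)
e_fec = elasticity(m, 0, 1)   # elasticity of lambda to adult fecundity
```

The scaling problem the abstract raises is visible here: the sensitivity and the elasticity of the same entry generally rank parameters differently, because one measures absolute and the other proportional perturbations.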
Using Data Linkage to Investigate Inconsistent Reporting of Self-Harm and Questionnaire Non-Response
Mars, Becky; Cornish, Rosie; Heron, Jon; Boyd, Andy; Crane, Catherine; Hawton, Keith; Lewis, Glyn; Tilling, Kate; Macleod, John; Gunnell, David
2016-01-01
The objective of this study was to examine agreement between self-reported and medically recorded self-harm, and to investigate whether the prevalence of self-harm differs between questionnaire responders and non-responders. A total of 4,810 participants from the Avon Longitudinal Study of Parents and Children (ALSPAC) completed a self-harm questionnaire at age 16 years. Data from consenting participants were linked to medical records (the number available for analyses ranges from 205 to 3,027). The prevalence of self-harm leading to hospital admission was somewhat higher in questionnaire non-responders than responders (2.0 vs. 1.2%). Hospital attendance with self-harm was under-reported on the questionnaire. One third reported self-harm inconsistently over time; inconsistent reporters were less likely to have depression, and fewer had self-harmed with suicidal intent. Self-harm prevalence estimates derived from self-report may be underestimates; more accurate figures may come from combining data from multiple sources. PMID:26789257
Human microdose evaluation of the novel EP1 receptor antagonist GSK269984A
Ostenfeld, Thor; Beaumont, Claire; Bullman, Jonathan; Beaumont, Maria; Jeffrey, Phillip
2012-01-01
AIM The primary objective was to evaluate the pharmacokinetics (PK) of the novel EP1 antagonist GSK269984A in human volunteers after a single oral and intravenous (i.v.) microdose (100 µg). METHOD GSK269984A was administered to two groups of healthy human volunteers as a single oral (n = 5) or i.v. (n = 5) microdose (100 µg). Blood samples were collected for up to 24 h and the parent drug concentrations were measured in separated plasma using a validated high-pressure liquid chromatography-tandem mass spectrometry method following solid-phase extraction. RESULTS Following the i.v. microdose, the geometric mean values for clearance (CL), steady-state volume of distribution (Vss) and terminal elimination half-life (t1/2) of GSK269984A were 9.8 l h-1, 62.8 l and 8.2 h, respectively. Cmax and AUC(0,∞) were 3.2 ng ml-1 and 10.2 ng ml-1 h, respectively; the corresponding oral parameters were 1.8 ng ml-1 and 9.8 ng ml-1 h, respectively. Absolute oral bioavailability was estimated to be 95%. These data were inconsistent with predictions of human PK based on allometric scaling of in vivo PK data from three pre-clinical species (rat, dog and monkey). CONCLUSION For drug development programmes characterized by inconsistencies between pre-clinical in vitro metabolic and in vivo PK data, and where uncertainty exists with respect to allometric predictions of the human PK profile, these data support the early application of a human microdose study to facilitate the selection of compounds for further clinical development. PMID:22497298
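The reported bioavailability and clearance follow from the standard non-compartmental relations F = (AUC_oral/dose_oral)/(AUC_iv/dose_iv) and CL = dose/AUC. A sketch using the geometric means quoted in the abstract (the ~96% computed from the rounded AUCs matches the reported 95% to within rounding):

```python
def absolute_bioavailability(auc_oral, dose_oral, auc_iv, dose_iv):
    """F = dose-normalized oral exposure relative to i.v. exposure."""
    return (auc_oral / dose_oral) / (auc_iv / dose_iv)

# AUC(0,inf) values from the abstract; equal 100 ug oral and i.v. doses:
f = absolute_bioavailability(auc_oral=9.8, dose_oral=100.0,
                             auc_iv=10.2, dose_iv=100.0)

# Consistency check: CL = dose / AUC_iv.
# 100 ug = 100,000 ng; AUC in ng ml^-1 h, so dose/AUC comes out in ml/h.
cl_l_per_h = (100.0 * 1000.0) / 10.2 / 1000.0   # ~9.8 l/h, as reported
```

With equal oral and i.v. doses F reduces to the simple AUC ratio, which is what makes a microdose crossover design so economical for an early bioavailability estimate.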
Comparing Measures of Estuarine Ecosystem Production in a ...
Anthropogenic nutrient enrichments and concerted efforts at nutrient reductions, compounded with the influences of climate change, are likely changing the net ecosystem production (NEP) of our coastal systems. To quantify these changes, scientists monitor a range of physical, chemical, and biological parameters sampled at various frequencies. Water column chlorophyll concentrations are arguably the most commonly used indicator of net phytoplankton production, as well as a coarse indicator of NEP. We compared parameters that estimate production, including chlorophyll, across an experimental nutrient gradient and in situ in both well-mixed and stratified estuarine environments. Data from an experiment conducted in the early 1980s in mesocosms designed to replicate a well-mixed mid-Narragansett Bay (Rhode Island) water column were used to correlate changes in chlorophyll concentrations, pH, dissolved oxygen (O2), dissolved inorganic nitrogen, phosphate, and silicate concentrations, cell counts, and 14C carbon uptake measurements across a range of nutrient enrichments. The pH, O2, nutrient, and cell count measurements reflected seasonal cycles of spring blooms followed by late summer/early fall respiration periods across nutrient enrichments. Chlorophyll concentrations were more variable and rates of 14C productivity were inconsistent with observed trends in nutrient concentrations, pH, and O2 concentrations. Similar comparisons were made using data from a well-mixe
NASA Astrophysics Data System (ADS)
Li, M.; Huang, X.; Li, J.; Song, Y.
2012-04-01
Because of their high emission intensity and reactivity, biogenic volatile organic compounds (BVOCs) play a significant role in terrestrial ecosystems, human health, secondary pollution, global climate change and the global carbon cycle. Past estimations of BVOC emissions in China were based on outdated algorithms and limited meteorological data, and there have been significant inconsistencies between the land surface parameters of dynamic models and those of BVOC estimation models, leading to large inaccuracies in the estimated results. To refine BVOC emission estimations for China and to further explore the role of BVOCs in atmospheric chemical processes, we used the latest algorithms of MEGAN (Model of Emissions of Gases and Aerosols from Nature), with MM5 (the Fifth-Generation Mesoscale Model) providing highly resolved meteorological data, to estimate the biogenic emissions of isoprene (C5H8) and seven monoterpene species (C10H16) in 2006. Real-time MODIS (Moderate Resolution Imaging Spectroradiometer) data were introduced to update the land surface parameters and improve the simulation performance of MM5, and to modify the influence of leaf area index (LAI) and leaf age deviation from standard conditions. In this study, the annual BVOC emissions for the whole country totaled 12.97 Tg C, much lower than global estimations but higher than past estimations for China. The most important individual contributor was isoprene (9.36 Tg C yr-1), followed by α-pinene (1.24 Tg C yr-1) and β-pinene (0.84 Tg C yr-1). Due to the considerable regional disparity in plant distributions and meteorological conditions across China, BVOC emissions presented significant spatial and temporal variations. Spatially, isoprene emission was concentrated in South China, which is covered by large areas of broadleaf forests and shrubs.
On the other hand, Southeast China was the top-ranking contributor of monoterpenes, in which the dominant vegetation genera consist of evergreen coniferous forests (mainly Pinus massoniana). Temporally, BVOC emissions primarily occurred in July and August during periods of high temperatures, high solar radiation and dense plant cover, with daily emissions peaking at about 13:00~14:00 hours (Beijing Time, BJT) and reaching their lowest values at night. Additionally, emissions of volatile organic compounds (VOCs) of biogenic origin (14.7 Tg yr-1) were approximately one-third less than anthropogenic emissions (23.2 Tg yr-1) and showed distinct spatial distributions. We present a reasonable estimation of BVOC emissions, which provides important information for further exploration of the role of BVOCs in atmospheric processes.
Wang, Yu-Jen; Wang, Yi-Zen; Yeh, Mei-Ling
2016-07-01
Numerous studies have demonstrated autonomic abnormalities in various pain conditions. However, few have investigated heart rate variability (HRV) in young women with primary dysmenorrhea, and the conclusions have been inconsistent. More evidence is required to confirm the reported trend for consistent fluctuation of HRV parameters in dysmenorrhea. The study's aim was to determine whether significant differences exist between young women with and without dysmenorrhea for heart rate (HR), blood pressure (BP), and HRV parameters during menses. A prospective comparison design with repeated measures was used. Sixty-six women aged 18-25 with dysmenorrhea and 54 eumenorrheic women were recruited from a university in northern Taiwan. High-frequency and low-frequency HRV parameters (HF and LF), LF/HF ratio, BP, and HR were measured daily between 8 p.m. and 10 p.m. from Day 1 to Day 6 during menses. The generalized estimating equation was used to analyze the effects of group, time, and Group × Time interaction on these variables. HF values were significantly lower in the dysmenorrhea than in the eumenorrhea group, but there were no differences in BP, HR, LF, or LF/HF ratio. Reduced HF values reflect reduced parasympathetic activity and autonomic instability in young women with dysmenorrhea. Future longitudinal studies are warranted to examine autonomic regulation in menstrual pain of varying intensities associated with dysmenorrhea-related symptoms and to clarify the causal relationship between dysmenorrhea and HRV fluctuations. © The Author(s) 2016.
Meta-analysis of the effect of overexpression of CBF/DREB family genes on drought stress response
USDA-ARS?s Scientific Manuscript database
Transcription factors C-repeat/dehydration-responsive element binding proteins (CBF/DREB) play an important role in plant response to abiotic stresses. Over-expression of various CBF/DREB genes in diverse plants have been reported, but inconsistency of gene donor, recipient genus, parameters used i...
ERIC Educational Resources Information Center
Gunther, Thomas; Konrad, Kerstin; De Brito, Stephane A.; Herpertz-Dahlmann, Beate; Vloet, Timo D.
2011-01-01
Background: Attention-deficit hyperactivity disorder (ADHD) and depressive disorders (DDs) often co-occur in children and adolescents, but evidence on the respective influence of these disorders on attention parameters is inconsistent. This study examines the influence of DDs on ADHD in a model-oriented approach that includes selectivity and…
The issue of cavitation number value in studies of water treatment by hydrodynamic cavitation.
Šarc, Andrej; Stepišnik-Perdih, Tadej; Petkovšek, Martin; Dular, Matevž
2017-01-01
Within the last few years there has been a substantial increase in reports of the utilization of hydrodynamic cavitation in various applications. It has come to our attention that the results are often poorly repeatable, the main reason being that researchers put too much emphasis on the value of the cavitation number when describing the conditions at which their device operates. In the present paper we first point out that the cavitation number cannot be used as a single parameter describing the cavitation condition, and that large inconsistencies exist among published reports. We then show experiments in which the influences of the geometry, the flow velocity, and the medium temperature and quality on the size, dynamics and aggressiveness of cavitation were assessed. Finally, we show that there are significant inconsistencies in the definition of the cavitation number itself. In conclusion, we propose a number of parameters that should accompany any report on the utilization of hydrodynamic cavitation, to make it repeatable and to enable faster progress of science and technology development. Copyright © 2016 Elsevier B.V. All rights reserved.
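As a concrete illustration of the parameter in question, here is a minimal sketch of the classical cavitation number. The choice of reference pressure and velocity location is itself one source of the inconsistency discussed above, and the numbers below are illustrative, not from the paper.

```python
def cavitation_number(p_ref, p_vapor, rho, v_ref):
    """Cavitation number: sigma = (p_ref - p_v) / (0.5 * rho * v_ref**2).

    The choice of reference pressure and velocity (upstream vs. throat)
    differs between studies, so sigma alone does not fix the cavitation
    condition."""
    return (p_ref - p_vapor) / (0.5 * rho * v_ref ** 2)

# Water at roughly 20 C through a constriction (illustrative values):
# atmospheric reference pressure, vapor pressure ~2339 Pa, 20 m/s velocity
sigma = cavitation_number(p_ref=101325.0, p_vapor=2339.0, rho=998.0, v_ref=20.0)
```

With these values sigma is about 0.5; reporting the geometry, velocity, temperature, and water quality alongside it is what the paper argues for.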
NASA Technical Reports Server (NTRS)
Skumanich, A.; Lites, B. W.
1985-01-01
The least-squares fitting of Stokes observations of sunspots using a Milne-Eddington-Unno model appears to lead, in many circumstances, to various inconsistencies, such as anomalously large Doppler widths and, hence, small magnetic fields that are significantly below those inferred solely from the Zeeman splitting in the intensity profile. It is found that the introduction of additional physics into the model, such as the inclusion of damping wings and magneto-optic birefringence, significantly improves the fit to Stokes parameters. Model fits excluding the intensity profile, i.e., of both the magnitude and the spectral shape of the polarization parameters alone, suggest that parasitic light in the intensity profile may also be a source of inconsistencies. The consequences of the physical changes on the vector properties of the field derived from the Fe I lambda 6173 line for the 17 November 1975 spot, as well as on the thermodynamic state, are discussed. A Doppler width delta lambda(D) of about 25 mA is found to be consistent with a low spot temperature and microturbulence, and a damping constant of a = 0.2.
The effects of strength training on some parameters of aerobic and anaerobic endurance.
Sentija, Davor; Marsić, Toso; Dizdar, Drazan
2009-03-01
The studies exploring the influence of resistance training on endurance in men have produced inconsistent results. The aim of this study was to examine the influence of an Olympic weight lifting training programme on parameters of aerobic and anaerobic endurance in moderately physically active men. Eleven physical education students (age: 24.1 +/- 1.8 yr, height: 1.77 +/- 0.04 m, body mass: 76.1 +/- 6.4 kg; X +/- SD) underwent a 12-week, 3 times/wk training programme of Olympic weight lifting. Specific exercises to master the lifting technique, and basic exercises for maximal strength and power development, were applied, with load intensity and volume defined in relation to individual maximal load (repetitio maximalis, RM). Parameters of both aerobic and anaerobic endurance were estimated from gas exchange data measured during a single incremental treadmill test to exhaustion, performed before and after completion of the 12-wk programme. After training, there was a small but significant increase in body mass (75.8 +/- 6.4 vs. 76.6 +/- 6.4 kg, p < 0.05) and peak VO2 (54.9 +/- 5.4 vs. 56.4 +/- 5.3 mL O2/min/kg, p < 0.05), with no significant change in the running speed at the anaerobic threshold (V(AT)) or at exhaustion (V(max)) (both p > 0.05). However, there was a significant increase in anaerobic endurance, estimated from the distance run above V(AT), from V(AT) to V(max) (285 +/- 98 m vs 212 +/- 104 m, p < 0.01). The results of this study indicate that changes in both anaerobic and aerobic endurance due to a 12-wk period of strength training in untrained persons can be determined from a single incremental treadmill test to exhaustion. The possible causes of these training effects include several mechanisms, linked primarily to peripheral adaptation.
Eckermann, Simon; Coory, Michael; Willan, Andrew R
2011-02-01
Economic analysis and assessment of net clinical benefit often require estimation of the absolute risk difference (ARD) for binary outcomes (e.g. survival, response, disease progression), given baseline epidemiological risk in a jurisdiction of interest and trial evidence of treatment effects. Typically, the assumption is made that relative treatment effects are constant across baseline risk, in which case relative risk (RR) or odds ratios (OR) could be applied to estimate ARD. The objective of this article is to establish whether such use of RR or OR allows consistent estimates of ARD. ARD is calculated under alternative framings of effects (e.g. mortality vs survival) applying standard methods for translating evidence with RR and OR. For RR, the RR is applied to baseline risk in the jurisdiction to estimate treatment risk; for OR, the baseline risk is converted to odds, the OR applied, and the resulting treatment odds converted back to risk. ARD is shown to be consistently estimated with OR but changes with the framing of effects using RR wherever there is a treatment effect and epidemiological risk differs from trial risk. Additionally, in indirect comparisons, ARD is shown to be consistently estimated with OR, while calculation with RR allows inconsistency under alternative framings of effects in the direction, let alone the extent, of ARD. OR ensures consistent calculation of ARD in translating evidence from trial settings and across trials in direct and indirect comparisons, avoiding the inconsistencies and associated biases that arise from RR under alternative outcome framings. These findings are critical for consistently translating evidence to inform economic analysis and assessment of net clinical benefit, as translation of evidence is proposed precisely where the advantages of OR over RR arise.
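The translation procedure described above can be sketched numerically. The trial risks below are invented for illustration, but they reproduce the paper's point: the OR-based calculation is invariant to outcome framing, while the RR-based one is not.

```python
def ard_from_rr(baseline_risk, rr):
    """ARD via relative risk: treatment risk = baseline risk * RR."""
    return baseline_risk * rr - baseline_risk

def ard_from_or(baseline_risk, odds_ratio):
    """ARD via odds ratio: risk -> odds, apply OR, convert back to risk."""
    odds = baseline_risk / (1.0 - baseline_risk)
    t_odds = odds * odds_ratio
    return t_odds / (1.0 + t_odds) - baseline_risk

# Hypothetical trial: control mortality 0.4, treatment mortality 0.3
rr_death, rr_surv = 0.3 / 0.4, 0.7 / 0.6
or_death = (0.3 / 0.7) / (0.4 / 0.6)
or_surv = 1.0 / or_death

baseline = 0.2  # jurisdiction mortality differs from trial mortality

# Mortality framing vs survival framing (converted back to a mortality ARD)
ard_rr = ard_from_rr(baseline, rr_death)
ard_rr_alt = -ard_from_rr(1.0 - baseline, rr_surv)
ard_or = ard_from_or(baseline, or_death)
ard_or_alt = -ard_from_or(1.0 - baseline, or_surv)
```

With OR the two framings agree (about -0.062); with RR they disagree (-0.050 vs -0.093), exactly the framing dependence the abstract describes.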
Satellite Based Soil Moisture Product Validation Using NOAA-CREST Ground and L-Band Observations
NASA Astrophysics Data System (ADS)
Norouzi, H.; Campo, C.; Temimi, M.; Lakhankar, T.; Khanbilvardi, R.
2015-12-01
Soil moisture content is among the most important physical parameters in hydrology, climate, and environmental studies. Many microwave-based satellite observations have been utilized to estimate this parameter. The Advanced Microwave Scanning Radiometer 2 (AMSR2) is one of many remote sensors that collect daily information on land surface soil moisture. However, many factors such as ancillary data and vegetation scattering can affect the signal and the estimation. Therefore, this information needs to be validated against "ground-truth" observations. The NOAA Cooperative Remote Sensing Science and Technology (CREST) center at the City University of New York has a site located at Millbrook, NY, with several in situ soil moisture probes and an L-band radiometer similar to the Soil Moisture Active Passive (SMAP) instrument. This site is among the SMAP Cal/Val sites. Soil moisture was measured at seven different locations from 2012 to 2015, six of them with Hydra probes. This study utilizes the observations from the in situ data and the near-ground L-band radiometer (at 3 meters height) to validate and compare soil moisture estimates from AMSR2. Analysis of the measurements and AMSR2 indicated a weak correlation with the Hydra probes and a moderate correlation with the Cosmic-ray Soil Moisture Observing System (COSMOS) probe. Several differences, including the mismatch between pixel size and point measurements, can cause these discrepancies. Interpolation techniques are used to expand the point measurements from six locations to the AMSR2 footprint. Finally, the effect of penetration depth on the microwave signal and inconsistencies with other ancillary data such as skin temperature are investigated to provide a better understanding of the analysis. The results show that the retrieval algorithm of AMSR2 is appropriate under certain circumstances. A similar validation study will be conducted for the SMAP mission.
Keywords: Remote Sensing, Soil Moisture, AMSR2, SMAP, L-Band.
Soler-Hampejsek, Erica; Grant, Monica J.; Mensch, Barbara S.; Hewett, Paul C.; Rankin, Johanna
2013-01-01
Purpose Reliable data on sexual behavior are needed to identify adolescents at risk of acquiring HIV or other sexually transmitted diseases, as well as unintended pregnancies. This study aims to investigate whether schooling status and literacy and numeracy skills affect adolescents’ reports of premarital sex collected using audio computer-assisted self-interviews (ACASI). Methods Data on 2320 participants in the first three rounds of the Malawi Schooling and Adolescent Study were analyzed to estimate the level of inconsistency in reporting premarital sex among rural Malawian adolescents. Multivariate logistic regressions were used to examine the relationships between school status and academic skills and premarital sexual behavior reports. Results Males were more likely than females to report premarital sex at baseline while females were more likely than males to report sex inconsistently within and across rounds. School-going females and males were more likely to report never having had sex at baseline and to “retract” reports of ever having sex across rounds than their peers who had recently left school. School-going females were also more likely to report sex inconsistently at baseline. Literate and numerate respondents were less likely to report sex inconsistently at baseline; however, they were more likely to retract sex reports across rounds. Conclusions The level of inconsistency both within a survey round and across rounds reflects the difficulties in collecting reliable sexual behavior data from young people in settings such as rural Malawi, where education levels are low, and sex among school-going females is not socially accepted. PMID:23688856
Anxiety Disorders in Childhood: Casting a Nomological Net
ERIC Educational Resources Information Center
Weems, Carl F.; Stickle, Timothy R.
2005-01-01
Empirical research highlights the need for improving the childhood anxiety disorder diagnostic classification system. In particular, inconsistencies in the stability estimates of childhood anxiety disorders and high rates of comorbidity call into question the utility of the current "DSM" criteria. This paper makes a case for utilizing a…
Numerical visitor capacity: a guide to its use in wilderness
David Cole; Thomas Carlson
2010-01-01
Despite decades of academic work and practical management applications, the concept of visitor capacity remains controversial and inconsistently operationalized. Nevertheless, there are situations where development of a numerical estimate of capacity is important and where not doing so has resulted in land management agencies being successfully litigated. This report...
Correcting for Person Misfit in Aggregated Score Reporting
ERIC Educational Resources Information Center
Brown, Richard S.; Villarreal, Julio C.
2007-01-01
There has been considerable research regarding the extent to which psychometric sound assessments sometimes yield individual score estimates that are inconsistent with the response patterns of the individual. It has been suggested that individual response patterns may differ from expectations for a number of reasons, including subject motivation,…
Does water transport scale universally with tree size?
F.C. Meinzer; B.J. Bond; J.M. Warren; D.R. Woodruff
2005-01-01
1. We employed standardized measurement techniques and protocols to describe the size dependence of whole-tree water use and cross-sectional area of conducting xylem (sapwood) among several species of angiosperms and conifers. 2. The results were not inconsistent with previously proposed 3/4-power scaling of water transport with estimated above-...
Miao, Zewei; Xu, Ming; Lathrop, Richard G; Wang, Yufei
2009-02-01
A review of the literature revealed that a variety of methods are currently used for fitting net CO2 assimilation versus chloroplastic CO2 concentration (A-Cc) curves, resulting in considerable differences in the estimated A-Cc parameters [including maximum ribulose 1,5-bisphosphate carboxylase/oxygenase (Rubisco) carboxylation rate (Vcmax), potential light-saturated electron transport rate (Jmax), leaf dark respiration in the light (Rd), mesophyll conductance (gm) and triose-phosphate utilization (TPU)]. In this paper, we examined the impacts of fitting methods on the estimation of Vcmax, Jmax, TPU, Rd and gm using grid search and non-linear fitting techniques. Our results suggested that the fitting methods significantly affected the predictions of the Rubisco-limited (Ac), ribulose 1,5-bisphosphate-limited (Aj) and TPU-limited (Ap) curves and leaf photosynthesis velocities because of the inconsistent estimates of Vcmax, Jmax, TPU, Rd and gm, but they barely influenced the Jmax : Vcmax, Vcmax : Rd and Jmax : TPU ratios. In terms of fitting accuracy, simplicity of fitting procedures and sample size requirements, we recommend combining grid search and non-linear techniques to directly and simultaneously fit Vcmax, Jmax, TPU, Rd and gm to the whole A-Cc curve, in contrast to the conventional method, which fits Vcmax, Rd or gm first and then solves for Jmax and/or TPU with Vcmax, Rd and/or gm held as constants.
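A minimal sketch of the non-linear fitting step, using only the Rubisco-limited branch of the FvCB model on noise-free synthetic data. The kinetic constants are commonly used 25 °C values and are assumptions for illustration, not the paper's fitted values.

```python
import numpy as np
from scipy.optimize import curve_fit

# Commonly used kinetic constants at 25 C (assumed for illustration)
GAMMA_STAR = 42.75   # CO2 compensation point, umol mol-1
KC = 404.9           # Michaelis constant for CO2, umol mol-1
KO = 278.4           # Michaelis constant for O2, mmol mol-1
O2 = 210.0           # chloroplastic O2, mmol mol-1

def rubisco_limited(Cc, Vcmax, Rd):
    """Rubisco-limited net assimilation:
    Ac = Vcmax * (Cc - Gamma*) / (Cc + Kc * (1 + O/Ko)) - Rd."""
    return Vcmax * (Cc - GAMMA_STAR) / (Cc + KC * (1.0 + O2 / KO)) - Rd

# Synthetic, noise-free "observations" generated with Vcmax = 60, Rd = 1.5
Cc = np.linspace(50.0, 300.0, 12)
A_obs = rubisco_limited(Cc, 60.0, 1.5)

# Non-linear least squares recovers both parameters simultaneously
(Vcmax_hat, Rd_hat), _ = curve_fit(rubisco_limited, Cc, A_obs, p0=[40.0, 1.0])
```

A full implementation along the paper's recommendation would add the Aj and Ap branches and a grid search over gm; this fragment only shows the simultaneous-fit idea.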
Kowalski, Amanda E.
2015-01-01
Insurance induces a tradeoff between the welfare gains from risk protection and the welfare losses from moral hazard. Empirical work traditionally estimates each side of the tradeoff separately, potentially yielding mutually inconsistent results. I develop a nonlinear budget set model of health insurance that allows for both simultaneously. Nonlinearities in the budget set arise from deductibles, coinsurance rates, and stop-losses that alter moral hazard as well as risk protection. I illustrate the properties of my model by estimating it using data on employer-sponsored health insurance from a large firm. PMID:26664035
2014-01-01
Background The AUDIT-C is an extensively validated screen for unhealthy alcohol use (i.e. drinking above recommended limits or alcohol use disorder), which consists of three questions about alcohol consumption. AUDIT-C scores ≥4 points for men and ≥3 for women are considered positive screens based on US validation studies that compared the AUDIT-C to “gold standard” measures of unhealthy alcohol use from independent, detailed interviews. However, results of screening—positive or negative based on AUDIT-C scores—can be inconsistent with reported drinking on the AUDIT-C questions. For example, individuals can screen positive based on the AUDIT-C score while reporting drinking below US recommended limits on the same AUDIT-C. Alternatively, they can screen negative based on the AUDIT-C score while reporting drinking above US recommended limits. Such inconsistencies could complicate interpretation of screening results, but it is unclear how often they occur in practice. Methods This study used AUDIT-C data from respondents who reported past-year drinking on one of two national US surveys: a general population survey (N = 26,610) and a Veterans Health Administration (VA) outpatient survey (N = 467,416). Gender-stratified analyses estimated the prevalence of AUDIT-C screen results—positive or negative screens based on the AUDIT-C score—that were inconsistent with reported drinking (above or below US recommended limits) on the same AUDIT-C. Results Among men who reported drinking, 13.8% and 21.1% of US general population and VA samples, respectively, had screening results based on AUDIT-C scores (positive or negative) that were inconsistent with reported drinking on the AUDIT-C questions (above or below US recommended limits). Among women who reported drinking, 18.3% and 20.7% of US general population and VA samples, respectively, had screening results that were inconsistent with reported drinking. 
Limitations This study did not include an independent interview gold standard for unhealthy alcohol use and therefore cannot address how often observed inconsistencies represent false positive or negative screens. Conclusions Up to 21% of people who drink alcohol had alcohol screening results based on the AUDIT-C score that were inconsistent with reported drinking on the same AUDIT-C. This needs to be addressed when training clinicians to use the AUDIT-C. PMID:24468406
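The scoring logic behind the screen can be sketched as follows. The cut-points are the US ones cited above; the example responses are invented to show how a score-positive screen can coexist with modest reported drinking.

```python
def audit_c_score(freq, typical, heavy):
    """AUDIT-C score: sum of the three consumption items, each scored 0-4."""
    for item in (freq, typical, heavy):
        if not 0 <= item <= 4:
            raise ValueError("each AUDIT-C item is scored 0-4")
    return freq + typical + heavy

def screen_positive(score, sex):
    """Positive screen at >=4 points for men, >=3 for women (US cut-points)."""
    return score >= (4 if sex == "M" else 3)

# A woman reporting drinking 2-4 times a month (item score 2), 1-2 drinks per
# occasion (0), and 6+ drinks less than monthly (1): modest reported drinking,
# yet the total of 3 points is a positive screen for women.
score = audit_c_score(2, 0, 1)
```

Here `screen_positive(score, "F")` is True while `screen_positive(score, "M")` is False, illustrating how a screening result based on the score can be inconsistent with the drinking reported on the same items.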
Friedlander, Alan M.; DeMartini, Edward E.; Schuhbauer, Anna; Schemmel, Eva; Salinas de Léon, Pelayo
2015-01-01
The Galapagos Sailfin grouper, Mycteroperca olfax, locally known as bacalao and listed as vulnerable by the IUCN, is culturally, economically, and ecologically important to the Galapagos archipelago and its people. It is regionally endemic to the Eastern Tropical Pacific, and, while an important fishery resource that has shown substantial declines in recent years, to date no effective management regulations are in place to ensure the sustainability of the Galapagos fishery for this species. Previous estimates of longevity and size at maturity for bacalao are inconsistent with estimates for congeners, which brings into question the accuracy of prior estimates. We set out to assess the age, growth, and reproductive biology of bacalao in order to provide more accurate life history information to inform more effective fisheries management for this species. The oldest fish in our sample was 21 years old, which is 2–3 times greater than previously reported estimates of longevity. Parameter estimates for the von Bertalanffy growth function (k = 0.11, L∞ = 110 cm TL, and to = − 1.7 years) show bacalao to grow much slower and attain substantially larger asymptotic maximum length than previous studies. Mean size at maturity (as female) was estimated at 65.3 cm TL, corresponding to a mean age of 6.5 years. We found that sex ratios were extremely female biased (0.009 M:1F), with a large majority of the individuals in our experimental catch being immature (79%). Our results show that bacalao grow slower, live longer, and mature at a much larger size and greater age than previously thought, with very few mature males in the population. These findings have important implications for the fishery of this valuable species and provide the impetus for a long-overdue species management plan to ensure its long-term sustainability. PMID:26401463
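The reported growth parameters can be plugged into the von Bertalanffy growth function, L(t) = L_inf * (1 - exp(-k * (t - t0))); a quick sketch using the estimates above:

```python
import math

def von_bertalanffy(age, k=0.11, L_inf=110.0, t0=-1.7):
    """Length-at-age (cm TL): L(t) = L_inf * (1 - exp(-k * (t - t0))),
    with defaults set to the bacalao estimates reported above."""
    return L_inf * (1.0 - math.exp(-k * (age - t0)))

# Length at the mean age at maturity (6.5 years) comes out close to the
# reported mean size at maturity of 65.3 cm TL.
length_at_maturity = von_bertalanffy(6.5)
```

Even at the oldest observed age (21 years), predicted length remains below the asymptotic maximum L_inf, consistent with the slow growth the abstract describes.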
Federal Register 2010, 2011, 2012, 2013, 2014
2010-01-07
... pose to crews, passengers, and bystanders. However, the NTSB notes that propeller blades are designed... intact and in place during normal operation. Propeller blades are not designed or expected to continue to... release of all or a portion of a propeller blade from an aircraft, inconsistent with its design parameters...
NASA Astrophysics Data System (ADS)
Norouzi, H.; Temimi, M.; Turk, J.; Prigent, C.; Furuzawa, F.; Tian, Y.
2013-12-01
Microwave land surface emissivity acts as the background signal in estimating rain rate, cloud liquid water, and total precipitable water. Therefore, its accuracy directly affects the uncertainty of such measurements. Over land, unlike over oceans, the microwave emissivity is relatively high and varies significantly as surface conditions and land cover change. The lack of ground-truth measurements of microwave emissivity, especially on a global scale, has made the uncertainty analysis of this parameter very challenging. The present study investigates the consistency among existing global land emissivity estimates from different microwave sensors. The products are derived from various sensors and frequencies ranging from 7 to 90 GHz. The selected emissivity products in this study are from the Advanced Microwave Scanning Radiometer for EOS (AMSR-E) by the NOAA Cooperative Remote Sensing Science and Technology Center (CREST), the Special Sensor Microwave Imager (SSM/I) by the Centre National de la Recherche Scientifique (CNRS) in France, the TRMM Microwave Imager (TMI) by Nagoya University, Japan, and WindSat by the NASA Jet Propulsion Laboratory (JPL). The emissivity estimates are based on different algorithms and ancillary data sets. This work investigates the differences among these emissivity products from 2003 to 2008, dynamically and spectrally. The similarities and discrepancies of the retrievals are studied over different land cover types. The mean relative difference (MRD) and other statistical parameters are calculated temporally for all five years of the study. Some inherent discrepancies between the selected products can be attributed to differences in geometry (incident angle), spectral response, and footprint size, all of which can affect the estimates.
The results reveal that at lower frequencies (≤19 GHz) ancillary data, especially the skin temperature data set, are the major source of difference in emissivity retrievals, while at higher frequencies (>19 GHz) the residuals of atmospheric effects on the signal cause inconsistency among the products. The time series and correlations between emissivity maps were analyzed over different land classes to assess the consistency of emissivity variations with geophysical variables such as soil moisture, precipitation, and vegetation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Siciliano, Edward R.; Ely, James H.; Kouzes, Richard T.
2009-11-01
In recent work at our laboratory, we were re-examining our data and found an inconsistency between the values listed for 137Cs in Table 2 (Siciliano et al. 2008) and the results plotted for that source in Figures 11 and 12. In the course of fitting the parabolic function (Equation 4) to the Compton maxima, two ranges of channels were used when determining the parameters for 137Cs. The parabolic fit curve shown in Figure 11 resulted from fitting channels 50 to 70. The parameters for that fit are: A = 0.972(12), B = 1.42(24) x 10^-3, and C0 = 60.2(5). The parameters for 137Cs listed in Table 2 (and also used to determine the calibration relations in Figure 12, the main result of this paper) came from fitting the 137Cs data in channels 40 to 80. Although the curves plotted from these two different sets of parameters would be visually distinguishable in Figure 11, when incorporated with the other isotope values shown in Figure 12 to obtain the linear energy-channel fit, the 50-70 channel parameter set, plus the correction from the Compton maximum to the Compton edge, gives a negligible change in the slope [6.470(41) as opposed to the reported 6.454(15) keV/channel] and a small change in the intercept [41(8) as opposed to 47(3) keV] for the dashed line. The conclusions of the article therefore do not change as a result of this inconsistency.
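The parabolic-fit step can be sketched as follows. The functional form A - B*(N - C0)^2 is inferred from the parameter names above, and the counts are synthetic, not the paper's data.

```python
import numpy as np

# Synthetic Compton-maximum region shaped like Equation 4: y = A - B*(N - C0)^2
A, B, C0 = 0.972, 1.42e-3, 60.2
channels = np.arange(50, 71)          # the 50-70 channel fit range cited above
counts = A - B * (channels - C0) ** 2

# Fit a quadratic and recover the Compton-maximum channel from the vertex
a2, a1, a0 = np.polyfit(channels, counts, 2)
channel_max = -a1 / (2.0 * a2)
```

On real spectra the counts would be noisy, so the recovered vertex (and hence the energy-channel calibration) depends mildly on the chosen fit range, which is the sensitivity the erratum quantifies.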
Farashi, Sajjad
2017-01-01
Interaction between biological systems and environmental electric or magnetic fields has gained attention during the past few decades. Although many studies have investigated this interaction, the reported results are considerably inconsistent. Besides the complexity of biological systems, an important reason for such inconsistent results may be the different excitation protocols applied in different experiments. In order to investigate carefully the way that external electric or magnetic fields interact with a biological system, the parameters of excitation, such as intensity or frequency, should be selected purposefully, given the influence of these parameters on the system response. In this study, the pancreatic β cell, the main player in the blood glucose regulating system, is considered, and the study is focused on finding the natural frequency spectrum of the system using a modeling approach. The natural frequencies of a system are important characteristics of its response to external excitation. The results of this study can help researchers select a proper frequency parameter for electrical excitation of the β cell system. The results show that there are two distinct ranges for the natural frequencies of the β cell system: an extremely low (near zero) range and a 100-750 kHz range. Experimental work on β cell exposure to electromagnetic fields supports this finding.
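One common modeling route to a system's natural frequencies, sketched generically: linearize the dynamical model at an operating point and read the frequencies off the imaginary parts of the Jacobian eigenvalues. This is not the β cell model itself; the damped oscillator below is an assumed stand-in.

```python
import numpy as np

def natural_frequencies_hz(jacobian):
    """Natural frequencies (Hz) from the imaginary parts of the eigenvalues
    of the Jacobian of a dynamical system linearized at an operating point."""
    eigenvalues = np.linalg.eigvals(np.asarray(jacobian, dtype=float))
    return np.unique(np.abs(eigenvalues.imag)) / (2.0 * np.pi)

# Illustrative stand-in: a lightly damped oscillator x'' + 2*z*w*x' + w^2*x = 0
w = 2.0 * np.pi * 5.0        # 5 Hz undamped natural frequency
z = 0.01                     # damping ratio
J = np.array([[0.0, 1.0],
              [-w ** 2, -2.0 * z * w]])
freqs = natural_frequencies_hz(J)
```

Applying the same recipe to a β cell electrophysiology model would yield its characteristic frequency ranges, such as the near-zero and 100-750 kHz bands reported above.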
Dia, Aïssata; Marcellin, Fabienne; Bonono, Renée-Cécile; Boyer, Sylvie; Bouhnik, Anne-Déborah; Protopopescu, Camelia; Koulla-Shiro, Sinata; Carrieri, Maria Patrizia; Abé, Claude; Spire, Bruno
2010-04-01
Our study aimed at estimating the prevalence of inconsistent condom use and at identifying its determinants in steady partnerships among people living with HIV/AIDS (PLWHA) in Cameroon. Analyses were based on data collected during the national cross-sectional multicentre survey EVAL (ANRS 12-116), which was conducted in Cameroon between September 2006 and March 2007 among 3151 adult PLWHA diagnosed HIV-positive for at least 3 months. The study population consisted of the 907 survey participants who reported sexual activity during the previous 3 months, with a steady partner either HIV-negative or of unknown HIV status. Logistic regression was used to identify factors associated with individuals' report of inconsistent condom use during the previous 3 months. Inconsistent condom use was reported by 35.3% of sexually active PLWHA. In a multivariate analysis adjusted for socio-demographic characteristics, not receiving antiretroviral therapy (OR (95% CI): 2.28 (1.64 to 3.18)) was independently associated with inconsistent condom use. The prevalence of unsafe sex remains high among sexually active PLWHA in Cameroon. Treatment with antiretroviral therapy is identified as a factor associated with safer sex, which further encourages the continuation of the national policy for increasing access to HIV treatment and care, and underlines the need to develop counselling strategies for all patients.
NASA Technical Reports Server (NTRS)
Treiman, A. H.
1993-01-01
The composition of the parent magma of the Nakhla meteorite was difficult to determine because it is a cumulate rock, enriched in olivine and augite relative to a basaltic magma. A parent magma composition is estimated from electron microprobe area analyses of magmatic inclusions in olivine. This composition is consistent with an independent estimate based on the same inclusions, and with chemical equilibria with the cores of Nakhla's augites. It reconciles most of the previous estimates of Nakhla's magma composition and obviates the need for complex magmatic processes. The inconsistency between this composition and those calculated previously suggests that magma flowed through and crystallized within Nakhla as it cooled.
McAlpine, Alys; Hossain, Mazeda; Zimmerman, Cathy
2016-12-28
Sex trafficking and sexual exploitation have been widely reported, especially in conflict-affected settings, which appear to increase women's and children's vulnerability to these extreme abuses. We conducted a systematic search of ten databases and extensive grey literature to gather evidence of sex trafficking and sexual exploitation in conflict-affected settings. International definitions of "sexual exploitation" and "sex trafficking" set the indicator parameters. We focused on sexual exploitation in the forms of early or forced marriage, forced combatant sexual exploitation and sexual slavery. We extracted prevalence measures, health outcomes and sexual exploitation terminology definitions. The review adhered to PRISMA guidelines and included quality appraisal. The search identified 29 eligible papers with evidence of sex trafficking and sexual exploitation in armed conflict settings in twelve countries in Africa, Asia, and the Middle East. The evidence was limited and not generalizable, due to few prevalence estimates and inconsistent definitions of "sexual exploitation". The prevalence estimates available indicate that females were more likely than males to be victims of sexual exploitation in conflict settings. In some settings, as many as one in four forced marriages took place before the girls reached 18 years old. Findings suggest that the vast majority of former female combatants were sexually exploited during the conflict. These studies provided various indicators of sexual exploitation compatible with the United Nations' definition of sex trafficking, but only two studies identified the exploitation as trafficking. None of the studies solely aimed to measure the prevalence of sex trafficking or sexual exploitation. Similar descriptions of types of sexual exploitation and trafficking were found, but the inconsistent terminology and measurements inhibited a meta-analysis.
Findings indicate there are various forms of human trafficking and sexual exploitation in conflict-affected settings, primarily occurring as early or forced marriage, forced combatant sexual exploitation, and sexual slavery. The studies highlight the extraordinary vulnerability of women and girls to these extreme abuses. Simultaneously, this review suggests the need to clarify terminology around sex trafficking in conflict to foster a more cohesive future evidence-base, and in particular, robust prevalence figures from conflict-affected and displaced populations.
ERIC Educational Resources Information Center
Steinmayr, Ricarda; Beauducel, Andre; Spinath, Birgit
2010-01-01
Recently, different methodological approaches have been discussed as an explanation for inconsistencies in studies investigating sex differences in different intelligences. The present study investigates sex differences in manifest sum scores, factor score estimates, and latent verbal, numerical, figural intelligence, as well as fluid and…
The Economic Consequences of Being Left-Handed: Some Sinister Results
ERIC Educational Resources Information Center
Denny, Kevin; O'Sullivan, Vincent
2007-01-01
This paper estimates the effects of handedness on earnings. Augmenting a conventional earnings equation with an indicator of left-handedness shows there is a positive effect on male earnings with manual workers enjoying a slightly larger premium. These results are inconsistent with the view that left-handers in general are handicapped either…
Broad and Inconsistent Muscle Food Classification Is Problematic for Dietary Guidance in the U.S.
O’Connor, Lauren E.; Campbell, Wayne W.; Woerner, Dale R.; Belk, Keith E.
2017-01-01
Dietary recommendations regarding consumption of muscle foods, such as red meat, processed meat, poultry or fish, largely rely on current dietary intake assessment methods. This narrative review summarizes how U.S. intake values for various types of muscle foods are grouped and estimated via methods that include: (1) food frequency questionnaires; (2) food disappearance data from the U.S. Department of Agriculture Economic Research Service; and (3) dietary recall information from the National Health and Nutrition Examination Survey data. These reported methods inconsistently classify muscle foods into groups, such as those previously listed, which creates discrepancies in estimated intakes. Researchers who classify muscle foods into these groups do not consistently consider nutrient content, which has implications for scientific conclusions and dietary recommendations. Consequently, these factors demonstrate the need for a more universal muscle food classification system. Further specification of this system would improve the accuracy and precision with which researchers classify muscle foods in nutrition research. Future multidisciplinary collaboration is needed to develop a new classification system via a systematic review protocol of the current literature. PMID:28926963
Melvin, Steven D; Petit, Marie A; Duvignacq, Marion C; Sumpter, John P
2017-08-01
The quality and reproducibility of science has recently come under scrutiny, with criticisms spanning disciplines. In aquatic toxicology, behavioural tests are currently an area of controversy since inconsistent findings have been highlighted and attributed to poor quality science. The problem likely relates to limitations to our understanding of basic behavioural patterns, which can influence our ability to design statistically robust experiments yielding ecologically relevant data. The present study takes a first step towards understanding baseline behaviours in fish, including how basic choices in experimental design might influence behavioural outcomes and interpretations in aquatic toxicology. Specifically, we explored how fish acclimate to behavioural arenas and how different lengths of observation time impact estimates of basic swimming parameters (i.e., average, maximum and angular velocity). We performed a semi-quantitative literature review to place our findings in the context of the published literature describing behavioural tests with fish. Our results demonstrate that fish fundamentally change their swimming behaviour over time, and that acclimation and observational timeframes may therefore have implications for influencing both the ecological relevance and statistical robustness of behavioural toxicity tests. Our review identified 165 studies describing behavioural responses in fish exposed to various stressors, and revealed that the majority of publications documenting fish behavioural responses report extremely brief acclimation times and observational durations, which helps explain inconsistencies identified across studies. We recommend that researchers applying behavioural tests with fish, and other species, apply a similar framework to better understand baseline behaviours and the implications of design choices for influencing study outcomes. Copyright © 2017 Elsevier Ltd. All rights reserved.
Human microdose evaluation of the novel EP1 receptor antagonist GSK269984A.
Ostenfeld, Thor; Beaumont, Claire; Bullman, Jonathan; Beaumont, Maria; Jeffrey, Phillip
2012-12-01
The primary objective was to evaluate the pharmacokinetics (PK) of the novel EP1 antagonist GSK269984A in human volunteers after a single oral and intravenous (i.v.) microdose (100 µg). GSK269984A was administered to two groups of healthy human volunteers as a single oral (n = 5) or i.v. (n = 5) microdose (100 µg). Blood samples were collected for up to 24 h and the parent drug concentrations were measured in separated plasma using a validated high pressure liquid chromatography-tandem mass spectrometry method following solid phase extraction. Following the i.v. microdose, the geometric mean values for clearance (CL), steady-state volume of distribution (Vss) and terminal elimination half-life (t1/2) of GSK269984A were 9.8 l h⁻¹, 62.8 l and 8.2 h. Cmax and AUC(0,∞) were 3.2 ng ml⁻¹ and 10.2 ng ml⁻¹ h, respectively; the corresponding oral parameters were 1.8 ng ml⁻¹ and 9.8 ng ml⁻¹ h, respectively. Absolute oral bioavailability was estimated to be 95%. These data were inconsistent with predictions of human PK based on allometric scaling of in vivo PK data from three pre-clinical species (rat, dog and monkey). For drug development programmes characterized by inconsistencies between pre-clinical in vitro metabolic and in vivo PK data, and where uncertainty exists with respect to allometric predictions of the human PK profile, these data support the early application of a human microdose study to facilitate the selection of compounds for further clinical development. © 2012 The Authors. British Journal of Clinical Pharmacology © 2012 The British Pharmacological Society.
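Since the oral and i.v. microdoses were equal (100 µg), the absolute bioavailability reduces to the ratio of the AUCs, and clearance follows from the i.v. arm as dose/AUC. A quick arithmetic check against the rounded figures quoted above (the rounding is why F lands near, not exactly on, the reported 95%):

```python
DOSE_NG = 100.0 * 1000.0        # 100 µg microdose expressed in ng
AUC_IV = 10.2                   # ng ml^-1 h, i.v. arm (rounded, from abstract)
AUC_ORAL = 9.8                  # ng ml^-1 h, oral arm (rounded, from abstract)

# Equal doses, so F = (AUC_oral/dose) / (AUC_iv/dose) = AUC_oral / AUC_iv
F = AUC_ORAL / AUC_IV           # ~0.96 with these rounded AUCs

# Clearance from the i.v. arm: CL = dose / AUC_iv
CL_L_PER_H = DOSE_NG / AUC_IV / 1000.0   # ml h^-1 -> l h^-1, ~9.8
```

The recovered clearance (~9.8 l h⁻¹) matches the reported value, confirming the internal consistency of the quoted parameters.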
Two essays on environmental and food security
NASA Astrophysics Data System (ADS)
Jeanty, Pierre Wilner
The first essay of this dissertation, "estimating non-market economic benefits of using biodiesel fuel: a stochastic double bounded approach", is an attempt to incorporate uncertainty into double bounded dichotomous choice contingent valuation. The double bounded approach, which entails asking respondents a follow-up question after they have answered a first question, has emerged as a means to increase efficiency in willingness to pay (WTP) estimates. However, several studies have found inconsistency between WTP estimates generated by the first and second questions. In this study, it is posited that this inconsistency is due to uncertainty facing the respondents when the second question is introduced. The author seeks to understand whether using a follow-up question in a stochastic format, which allows respondents to express uncertainty, would alleviate the inconsistency problem. In a contingent valuation survey to estimate non-market economic benefits of using more biodiesel vs. petroleum diesel fuel in an airshed encompassing South Eastern and Central Ohio, it is found that the gap between WTP estimates produced by the first and the second questions narrows when respondents are allowed to express uncertainty. The proposed stochastic follow-up approach yields more efficient WTP estimates than the conventional follow-up approach while maintaining efficiency gain over the single bounded model. From a methodological standpoint, this study is distinguished from previous research by being the first to implement a double bounded contingent valuation survey with a stochastic follow-up question. In the second essay, "analyzing the effects of civil wars and violent conflicts on food security in developing countries: an instrumental variable panel data approach", instrumental variable panel data techniques are applied to estimate the effects of civil wars and violent conflicts on food security in a sample of 73 developing countries from 1970 to 2002.
The number of hungry people in developing countries has risen sharply in the past several years. Civil wars and violent conflicts have been associated with food insecurity. The study aims to provide empirical evidence as to whether the manifest increase in the number of hungry can be ascribed to civil unrest. From a statistical standpoint, the results convincingly pinpoint the danger of using conventional panel data estimators when endogeneity is of the conventional simultaneous-equation type, i.e. with respect to the idiosyncratic error term. From a policy viewpoint, it is found that, in general, civil wars and conflicts are detrimental to food security. However, countries unable to provide their citizens with the minimum dietary energy requirements (the threshold below which a country qualifies for food aid) are more vulnerable. Policies aiming to curb food insecurity in developing countries need to take this difference into account.
Soler-Hampejsek, Erica; Grant, Monica J; Mensch, Barbara S; Hewett, Paul C; Rankin, Johanna
2013-08-01
Reliable data on sexual behavior are needed to identify adolescents at risk of acquiring human immunodeficiency virus or other sexually transmitted diseases, as well as unintended pregnancies. This study aimed to investigate whether schooling status and literacy and numeracy skills affect adolescents' reports of premarital sex, collected using audio computer-assisted self-interviews. We analyzed data on 2,320 participants in the first three rounds of the Malawi Schooling and Adolescent Study to estimate the level of inconsistency in reporting premarital sex among rural Malawian adolescents. We used multivariate logistic regressions to examine the relationships between school status and academic skills and premarital sexual behavior reports. Males were more likely than females to report premarital sex at baseline, whereas females were more likely than males to report sex inconsistently within and across rounds. School-going females and males were more likely to report never having had sex at baseline and to retract reports of ever having sex across rounds than were their peers who had recently left school. School-going females were also more likely to report sex inconsistently at baseline. Literate and numerate respondents were less likely to report sex inconsistently at baseline; however, they were more likely to retract sex reports across rounds. The level of inconsistency both within a survey round and across rounds reflects the difficulties in collecting reliable sexual behavior data from young people in settings such as rural Malawi, where education levels are low and sex among school-going females is not socially accepted. Copyright © 2013 Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.
Peter, Richard; Gässler, Holger; Geyer, Siegfried
2007-01-01
Background Inconsistency in social status and its impact on health were a focus of research 30–40 years ago. Yet, there is little recent information on its association with ischaemic heart disease (IHD) morbidity, and IHD is still defined as one of the major health problems in socioeconomically developed societies. Methods A secondary analysis of prospective historical data from 68 805 male and female members of a statutory German health insurance company aged 25–65 years was conducted. Data included information on sociodemographic variables, social status indicators (education, occupational grade and income) and hospital admissions because of IHD. Results Findings from Cox regression analysis showed an increased risk for IHD in the group with the highest educational level, whereas the lowest occupational and income groups had the highest hazard ratio (HR). Further analysis revealed that after adjustment for income, status inconsistency (defined by the combination of higher educational level with lower occupational status) accounts for increased risk of IHD (HR for men, 3.14 and for women, 3.63). An association of similar strength was observed regarding high education/low income in women (HR 3.53). The combination of low education with high income reduced the risk among men (HR 0.29). No respective findings were observed concerning occupational group and income. Conclusions Status inconsistency is associated with the risk of IHD as well as single traditional indicators of socioeconomic position. Information on status inconsistency should be measured in addition to single indicators of socioeconomic status to achieve a more appropriate estimation of the risk of IHD. PMID:17568052
Alexander, Paul E; Brito, Juan P; Neumann, Ignacio; Gionfriddo, Michael R; Bero, Lisa; Djulbegovic, Benjamin; Stoltzfus, Rebecca; Montori, Victor M; Norris, Susan L; Schünemann, Holger J; Guyatt, Gordon H
2016-04-01
In 2007 the World Health Organization (WHO) adopted the GRADE system for development of public health guidelines. Previously we found that many strong recommendations issued by WHO are based on evidence for which there is only low or very low confidence in the estimates of effect (discordant recommendations). GRADE guidance indicates that such discordant recommendations are rarely appropriate but suggests five paradigmatic situations in which discordant recommendations may be warranted. We sought to provide insight into the many discordant recommendations in WHO guidelines. We examined all guidelines that used the GRADE method and were approved by the WHO Guideline Review Committee between 2007 and 2012. Teams of reviewers independently abstracted data from eligible guidelines and classified recommendations either into one of the five paradigms for appropriately-formulated discordant recommendations or into three additional categories in which discordant recommendations were inconsistent with GRADE guidance: 1) the evidence warranted moderate or high confidence (a misclassification of evidence) rather than low or very low confidence; 2) the recommendations were good practice statements; or 3) uncertainty in the estimates of effect would best lead to a conditional (weak) recommendation. The 33 eligible guidelines included 160 discordant recommendations, of which 98 (61.3%) addressed drug interventions and 132 (82.5%) provided some rationale (though not entirely explicit at times) for the strong recommendation. Of 160 discordant recommendations, 25 (15.6%) were judged consistent with one of the five paradigms for appropriate recommendations; 33 (21%) were based on evidence warranting moderate or high confidence in the estimates of effect; 29 (18%) were good practice statements; and 73 (46%) warranted a conditional, rather than a strong recommendation. WHO discordant recommendations are often inconsistent with GRADE guidance, possibly threatening the integrity of the process.
Further training in GRADE methods for WHO guideline development group members may be necessary, along with further research on what motivates the formulation of such recommendations. Copyright © 2016 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Barella-Ortiz, Anaïs; Polcher, Jan; de Rosnay, Patricia; Piles, Maria; Gelati, Emiliano
2017-01-01
L-band radiometry is considered to be one of the most suitable techniques to estimate surface soil moisture (SSM) by means of remote sensing. Brightness temperatures are key in this process, as they are the main input in the retrieval algorithm which yields SSM estimates. The work presented here compares brightness temperatures measured by the SMOS mission to two different sets of modelled ones, over the Iberian Peninsula from 2010 to 2012. The two modelled sets were estimated using a radiative transfer model and state variables from two land-surface models: (i) ORCHIDEE and (ii) H-TESSEL. The radiative transfer model used is the CMEM. Measured and modelled brightness temperatures show a good agreement in their temporal evolution, but their spatial structures are not consistent. An empirical orthogonal function analysis of the brightness temperature error identifies a dominant structure over the south-west of the Iberian Peninsula which evolves during the year and is maximum in autumn and winter. Hypotheses concerning forcing-induced biases and assumptions made in the radiative transfer model are analysed to explain this inconsistency, but none is yet found to explain the weak spatial correlations. Further hypotheses are proposed and will be explored in a forthcoming paper. The analysis of spatial inconsistencies between modelled and measured TBs is important, as these can affect the estimation of geophysical variables and TB assimilation in operational models, as well as result in misleading validation studies.
Applying modern measurements of Pleistocene loads to model lithospheric rheology
NASA Astrophysics Data System (ADS)
Beard, E. P.; Hoggan, J. R.; Lowry, A. R.
2011-12-01
The remnant shorelines of Pleistocene Lake Bonneville provide a unique opportunity for building a dataset from which to infer rheological properties of the lower crust and upper mantle. Multiple lakeshores developed over a period of around 30 kyr and record the lithosphere's isostatic response to a well-constrained load history. Bills et al. (1994) utilized a shoreline elevation dataset compiled by Currey (1982) in an attempt to model linear (Maxwell) viscosity as a function of depth beneath the basin. They estimated an effective elastic thickness (Te) for the basin of 20-25 km, which differs significantly from the 5-15 km estimates derived from models of loading on geologic timescales (e.g., Lowry and Pérez-Gussinyé, 2011). We propose that the discrepancy in Te modeled by these two approaches may be resolved with dynamical modeling of a common rheology, using a more complete shoreline elevation dataset applied to a spherical Earth model. Where Currey's (1982) dataset was compiled largely from observations of depositional shoreline features, we are developing an algorithm for estimating elevation variations in erosional shorelines based on cross-correlation and stacking techniques similar to those used to automate picking of seismic phase arrival times. Application of this method to digital elevation models (DEMs) will increase the size and accuracy of the shoreline elevation dataset, enabling more robust modeling of the rheological properties driving isostatic response to unloading of Lake Bonneville. Our plan is to model these data and invert for a relatively small number of parameters describing depth- and temperature-dependent power-law rheology of the lower crust and upper mantle. These same parameters also will be used to model topographic and Moho response to estimates of regional mass variation on the longer loading timescales to test for inconsistencies. Bills, B.G., D.R. Currey, and G.A.
Marshall, 1994, Viscosity estimates for the crust and upper mantle from patterns of lacustrine shoreline deformation in the Eastern Great Basin, Journal of Geophysical Research, 99, B11, 22,059-22,086. Currey, D.R., 1982, Lake Bonneville: Selected features of relevance to neotectonic analysis, U.S. Geological Survey Open File Report, 82-1070, 31pp. Lowry, A.R., and M. Pérez-Gussinyé, 2011, The role of crustal quartz in controlling Cordilleran deformation, Nature, 471, pp. 353-357.
A method to combine spaceborne radar and radiometric observations of precipitation
NASA Astrophysics Data System (ADS)
Munchak, Stephen Joseph
This dissertation describes the development and application of a combined radar-radiometer rainfall retrieval algorithm for the Tropical Rainfall Measuring Mission (TRMM) satellite. A retrieval framework based upon optimal estimation theory is proposed wherein three parameters describing the raindrop size distribution (DSD), ice particle size distribution (PSD), and cloud water path (cLWP) are retrieved for each radar profile. The retrieved rainfall rate is found to be strongly sensitive to the a priori constraints in DSD and cLWP; thus, these parameters are tuned to match polarimetric radar estimates of rainfall near Kwajalein, Republic of the Marshall Islands. An independent validation against gauge-tuned radar rainfall estimates at Melbourne, FL shows agreement within 2%, which exceeds previous algorithms' ability to match rainfall at these two sites. The algorithm is then applied to two years of TRMM data over oceans to determine the sources of DSD variability. Three correlated sets of variables representing storm dynamics, background environment, and cloud microphysics are found to account for approximately 50% of the variability in the absolute and reflectivity-normalized median drop size. Structures of radar reflectivity are also identified and related to drop size, with these relationships being confirmed by ground-based polarimetric radar data from the North American Monsoon Experiment (NAME). Regional patterns of DSD and the sources of variability identified herein are also shown to be consistent with previous work documenting regional DSD properties. In particular, mid-latitude regions and tropical regions near land tend to have larger drops for a given reflectivity, whereas the smallest drops are found in the eastern Pacific Intertropical Convergence Zone.
Due to properties of the DSD and rain water/cloud water partitioning that change with column water vapor, it is shown that increases in water vapor in a global warming scenario could lead to slight (1%) underestimates of rainfall trends by radar algorithms but larger (5%) overestimates by radiometer algorithms. Further analyses are performed to compare tropical oceanic mean rainfall rates between the combined algorithm and other sources. The combined algorithm is 15% higher than version 6 of the 2A25 radar-only algorithm and 6.6% higher than the Global Precipitation Climatology Project (GPCP) estimate for the same time-space domain. Despite being higher than these two sources, the combined total is not inconsistent with estimates of the other components of the energy budget given their uncertainties.
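The optimal-estimation framework used here weighs each observation against the a priori constraint, which is why the retrieved rainfall is so sensitive to the DSD and cLWP priors. A scalar sketch of the linear MAP update (illustrative only, with invented numbers; the actual retrieval inverts a multi-parameter radiative-transfer forward model):

```python
def oe_update(x_a, s_a, y, k, s_e):
    """Scalar optimal-estimation (MAP) update for a linear forward model
    y = k*x + noise: prior N(x_a, s_a), observation-error variance s_e."""
    gain = s_a * k / (k * k * s_a + s_e)
    return x_a + gain * (y - k * x_a)

# Same observation, different a priori tightness:
tight = oe_update(x_a=1.0, s_a=0.01, y=3.0, k=1.0, s_e=0.1)  # ~1.18, stays near prior
loose = oe_update(x_a=1.0, s_a=10.0, y=3.0, k=1.0, s_e=0.1)  # ~2.98, follows the data
```

Tightening the prior variance pulls the retrieval toward the a priori value, which is exactly why the a priori DSD and cLWP constraints had to be tuned against polarimetric radar estimates.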
Modeling of the UAE Wind Turbine for Refinement of FAST_AD
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jonkman, J. M.
The Unsteady Aerodynamics Experiment (UAE) research wind turbine was modeled both aerodynamically and structurally in the FAST_AD wind turbine design code, and its response to wind inflows was simulated for a sample of test cases. A study was conducted to determine why wind turbine load magnitude discrepancies (inconsistencies in aerodynamic force coefficients, rotor shaft torque, and out-of-plane bending moments at the blade root across a range of operating conditions) exist between load predictions made by FAST_AD and other modeling tools and measured loads taken from the actual UAE wind turbine during the NASA-Ames wind tunnel tests. The acquired experimental test data represent the finest, most accurate set of wind turbine aerodynamic and induced flow field data available today. A sample of the FAST_AD model input parameters most critical to the aerodynamics computations was also systematically perturbed to determine their effect on load and performance predictions. Attention was focused on the simpler upwind rotor configuration, zero yaw error test cases. Inconsistencies in input file parameters, such as aerodynamic performance characteristics, explain a noteworthy fraction of the load prediction discrepancies of the various modeling tools.
NASA Astrophysics Data System (ADS)
Feng, Bin; Shi, Zelin; Zhang, Chengshuo; Xu, Baoshu; Zhang, Xiaodong
2016-05-01
The point spread function (PSF) inconsistency caused by temperature variation leads to artifacts in decoded images of a wavefront coding infrared imaging system. Therefore, this paper proposes an analytical model for the effect of temperature variation on PSF consistency. In the proposed model, a formula for the thermal deformation of an optical phase mask is derived. This formula indicates that a cubic optical phase mask (CPM) is still cubic after thermal deformation. A proposed equivalent cubic phase mask (E-CPM) is a virtual, room-temperature lens which characterizes the optical effect of temperature variation on the CPM. Additionally, a method for calculating PSF consistency after temperature variation is presented. Numerical simulation illustrates the validity of the proposed model and some significant conclusions are drawn. Given the form parameter, the PSF consistency achieved by a Ge-material CPM is better than that achieved by a ZnSe-material CPM. The effect of the optical phase mask on PSF inconsistency is much smaller than that of the auxiliary lens group. A large form parameter of the CPM will introduce large defocus-insensitive aberrations, which improves the PSF consistency but degrades the room-temperature MTF.
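The invariance claim (a cubic mask stays cubic after thermal deformation) can be illustrated with a toy model: under uniform isotropic expansion by s = 1 + εΔT, a sag profile z = a·x³ maps to z′(x) = s·a·(x/s)³ = (a/s²)·x³, still cubic with a rescaled coefficient. This is a deliberately simplified stand-in for the paper's derivation (uniform one-dimensional expansion; the coefficient and CTE values below are illustrative assumptions, not taken from the paper):

```python
def expanded_cubic_coeff(a, epsilon, delta_t):
    """Under uniform isotropic expansion s = 1 + eps*dT, the sag z = a*x**3
    maps to z'(x) = s * a * (x/s)**3 = (a/s**2) * x**3: still a cubic."""
    s = 1.0 + epsilon * delta_t
    return a / s ** 2

A_ROOM = 1e-4      # illustrative cubic coefficient (assumed)
EPS_GE = 6.1e-6    # approximate CTE of germanium, 1/K (assumed)
a_hot = expanded_cubic_coeff(A_ROOM, EPS_GE, delta_t=60.0)

# Numeric check: the deformed profile is exactly the rescaled cubic.
s = 1.0 + EPS_GE * 60.0
for x in (0.5, 1.0, 2.0):
    z_deformed = s * A_ROOM * (x / s) ** 3
    assert abs(z_deformed - a_hot * x ** 3) < 1e-15
```

Because the deformed mask is again a cubic, its optical effect can be folded into an equivalent room-temperature element, which is the idea behind the E-CPM.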
Ensemble-Based Parameter Estimation in a Coupled General Circulation Model
Liu, Y.; Liu, Z.; Zhang, S.; ...
2014-09-10
Parameter estimation provides a potentially powerful approach to reduce model bias for complex climate models. Here, in a twin experiment framework, the authors perform the first parameter estimation in a fully coupled ocean–atmosphere general circulation model using an ensemble coupled data assimilation system facilitated with parameter estimation. The authors first perform single-parameter estimation and then multiple-parameter estimation. In the case of the single-parameter estimation, the error of the parameter [solar penetration depth (SPD)] is reduced by over 90% after ~40 years of assimilation of the conventional observations of monthly sea surface temperature (SST) and salinity (SSS). The results of multiple-parameter estimation are less reliable than those of single-parameter estimation when only the monthly SST and SSS are assimilated. Assimilating additional observations of atmospheric data of temperature and wind improves the reliability of multiple-parameter estimation. The errors of the parameters are reduced by 90% in ~8 years of assimilation. Finally, the improved parameters also improve the model climatology. With the optimized parameters, the bias of the climatology of SST is reduced by ~90%. Altogether, this study suggests the feasibility of ensemble-based parameter estimation in a fully coupled general circulation model.
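The ensemble approach estimates a parameter by augmenting it to the state vector: each member carries its own parameter value, and the ensemble covariance between parameter and observed state supplies the update gain. A self-contained toy sketch (a scalar parameter, a linear "forward model", and invented numbers; nothing here is from the CGCM experiment):

```python
import random

def enkf_parameter_step(params, states, obs, obs_err):
    """One ensemble-Kalman update of an augmented scalar parameter: the
    gain is the ensemble covariance of (parameter, observed state) over
    the state variance plus the observation-error variance."""
    n = len(params)
    pm = sum(params) / n
    sm = sum(states) / n
    cov_ps = sum((p - pm) * (s - sm) for p, s in zip(params, states)) / (n - 1)
    var_s = sum((s - sm) ** 2 for s in states) / (n - 1)
    gain = cov_ps / (var_s + obs_err ** 2)
    # Perturbed-observation update: each member sees a noisy copy of obs.
    return [p + gain * (obs + random.gauss(0.0, obs_err) - s)
            for p, s in zip(params, states)]

random.seed(1)
TRUTH = 2.0                                          # parameter to recover
params = [random.gauss(1.0, 0.5) for _ in range(50)] # biased first guess
for _ in range(20):
    states = [3.0 * p for p in params]               # toy linear forward model
    params = enkf_parameter_step(params, states, obs=3.0 * TRUTH, obs_err=0.1)

estimate = sum(params) / len(params)                 # ensemble-mean parameter
```

In the real system the "forward model" is decades of coupled model integration, which is why reducing the SPD error takes ~40 years of assimilated observations rather than a few iterations.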
Toward Detection of Exoplanetary Rings via Transit Photometry: Methodology and a Possible Candidate
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aizawa, Masataka; Masuda, Kento; Suto, Yasushi
The detection of a planetary ring of exoplanets remains one of the most attractive, but challenging, goals in the field of exoplanetary science. We present a methodology that implements a systematic search for exoplanetary rings via transit photometry of long-period planets. This methodology relies on a precise integration scheme that we develop to compute a transit light curve of a ringed planet. We apply the methodology to 89 long-period planet candidates from the Kepler data so as to estimate, and/or set upper limits on, the parameters of possible rings. While the majority of our samples do not have sufficient signal-to-noise ratios (S/Ns) to place meaningful constraints on ring parameters, we find that six systems with higher S/Ns are inconsistent with the presence of a ring larger than 1.5 times the planetary radius, assuming a grazing orbit and a tilted ring. Furthermore, we identify five preliminary candidate systems whose light curves exhibit ring-like features. After removing four false positives due to the contamination from nearby stars, we identify KIC 10403228 as a reasonable candidate for a ringed planet. A systematic parameter fit of its light curve with a ringed planet model indicates two possible solutions corresponding to a Saturn-like planet with a tilted ring. There also remain two other possible scenarios accounting for the data: a circumstellar disk and a hierarchical triple. Because of large uncertainties, we cannot choose one specific model among the three.
Kobayashi, Ihori; Huntley, Edward; Lavela, Joseph; Mellman, Thomas A
2012-07-01
Although reports of sleep disturbances are common among individuals with posttraumatic stress disorder (PTSD), results of polysomnographic (PSG) studies have inconsistently documented abnormalities and have therefore suggested "sleep state misperception." The authors' study objectives were to compare sleep parameters measured objectively and subjectively in the laboratory and at home in civilians with and without trauma exposure and PTSD. Cross-sectional study. PSG recordings in a sleep laboratory and actigraphic recordings in participants' homes. One hundred three urban-residing African Americans with and without trauma exposure and PTSD who participated in a larger study. Sleep parameters (total sleep time [TST], sleep onset latency [SOL], and wake after sleep onset [WASO]) were assessed using laboratory PSG and home actigraphy. A sleep diary was completed in the morning after PSG and actigraphy recordings. Habitual TST, SOL, and WASO were assessed using a sleep questionnaire. The Clinician Administered PTSD Scale was administered to assess participants' trauma exposure and PTSD diagnostic status. Participants, regardless of their trauma exposure/PTSD status, underestimated WASO in the diary and questionnaire relative to actigraphy and overestimated SOL in the diary relative to PSG. Among participants with current PTSD, TST diary estimates did not differ from the actigraphy measure, in contrast with those without current PTSD, who overestimated TST. No other significant group differences in discrepancies between subjective and objective sleep measures were found. Discrepancies between subjectively and objectively measured sleep parameters were not associated with trauma exposure or PTSD. This challenges prior assertions that individuals with PTSD overreport their sleep disturbances.
Learning-based Wind Estimation using Distant Soundings for Unguided Aerial Delivery
NASA Astrophysics Data System (ADS)
Plyler, M.; Cahoy, K.; Angermueller, K.; Chen, D.; Markuzon, N.
2016-12-01
Delivering unguided, parachuted payloads from aircraft requires accurate knowledge of the wind field inside an operational zone. Usually, a dropsonde released from the aircraft over the drop zone gives a more accurate wind estimate than a forecast. Mission objectives occasionally demand releasing the dropsonde away from the drop zone, but still require accuracy and precision. Barnes interpolation and many other assimilation methods do poorly when the forecast error is inconsistent in a forecast grid. A machine learning approach can better leverage non-linear relations between different weather patterns and thus provide a better wind estimate at the target drop zone when using data collected up to 100 km away. This study uses the 13 km resolution Rapid Refresh (RAP) dataset available through NOAA and subsamples to an area around Yuma, AZ, and up to approximately 10 km AMSL. RAP forecast grids are updated with simulated dropsondes taken from analysis (historical weather maps). We train models using different data mining and machine learning techniques, most notably boosted regression trees, that can accurately assimilate the distant dropsonde. The model takes a forecast grid and simulated remote dropsonde data as input and produces an estimate of the wind stick over the drop zone. Using ballistic winds as a defining metric, we show that, on test data previously unseen by the model, our data-driven approach does better than Barnes interpolation under some conditions, most notably when the forecast error differs between the two locations. We study and evaluate the model's performance depending on the size, the time lag, the drop altitude, and the geographic location of the training set, and identify parameters most contributing to the accuracy of the wind estimation. This study demonstrates a new approach for assimilating remotely released dropsondes, based on boosted regression trees, and shows improvement in wind estimation over currently used methods.
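The Barnes interpolation baseline discussed above can be made concrete. The following is a minimal numpy sketch of a two-pass Barnes objective analysis (Gaussian-weighted averaging of scattered observations onto target points, followed by a correction pass with a tightened kernel); the function names and the default `gamma` convergence factor are illustrative choices, not the study's configuration:

```python
import numpy as np

def barnes(obs_xy, obs_val, grid_xy, kappa, gamma=0.3, passes=2):
    """Barnes objective analysis: Gaussian-weighted interpolation of
    scattered observations onto target points, with correction passes
    that re-analyze the residuals using a tightened kernel."""
    obs_xy = np.asarray(obs_xy, float)
    obs_val = np.asarray(obs_val, float)
    grid_xy = np.asarray(grid_xy, float)

    def analyse(targets, k):
        # squared distances from every target to every observation
        d2 = ((targets[:, None, :] - obs_xy[None, :, :]) ** 2).sum(-1)
        w = np.exp(-d2 / k)
        return (w * resid[None, :]).sum(1) / w.sum(1)

    resid = obs_val.copy()
    est_grid = np.zeros(len(grid_xy))
    est_obs = np.zeros(len(obs_xy))
    k = kappa
    for _ in range(passes):
        est_grid += analyse(grid_xy, k)
        est_obs += analyse(obs_xy, k)
        resid = obs_val - est_obs   # analyze residuals on the next pass
        k *= gamma                  # tighten the kernel
    return est_grid
```

Because the weights are normalized, a spatially constant field is reproduced exactly, which is a handy sanity check for any implementation.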
DOE Office of Scientific and Technical Information (OSTI.GOV)
Subashi, Ergys; Choudhury, Kingshuk R.; Johnson, G. Allan, E-mail: gjohnson@duke.edu
2014-03-15
Purpose: The pharmacokinetic parameters derived from dynamic contrast-enhanced (DCE) MRI have been used in more than 100 phase I trials and investigator-led studies. A comparison of the absolute values of these quantities requires an estimation of their respective probability distribution function (PDF). The statistical variation of the DCE-MRI measurement is analyzed by considering the fundamental sources of error in the MR signal intensity acquired with the spoiled gradient-echo (SPGR) pulse sequence. Methods: The variance in the SPGR signal intensity arises from quadrature detection and excitation flip angle inconsistency. The noise power was measured in 11 phantoms of contrast agent concentration in the range [0–1] mM (in steps of 0.1 mM) and in one in vivo acquisition of a tumor-bearing mouse. The distribution of the flip angle was determined in a uniform 10 mM CuSO₄ phantom using the spin echo double angle method. The PDF of a wide range of T1 values measured with the varying flip angle (VFA) technique was estimated through numerical simulations of the SPGR equation. The resultant uncertainty in contrast agent concentration was incorporated in the most common model of tracer exchange kinetics and the PDF of the derived pharmacokinetic parameters was studied numerically. Results: The VFA method is an unbiased technique for measuring T1 only in the absence of bias in excitation flip angle. The time-dependent concentration of the contrast agent measured in vivo is within the theoretically predicted uncertainty. The uncertainty in measuring Ktrans with SPGR pulse sequences is of the same order as, but always higher than, the uncertainty in measuring the pre-injection longitudinal relaxation time (T1₀). The lowest achievable bias/uncertainty in estimating this parameter is approximately 20%–70% higher than the bias/uncertainty in the measurement of the pre-injection T1 map. The fractional volume parameters derived from the extended Tofts model were found to be extremely sensitive to the variance in signal intensity. The SNR of the pre-injection T1 map indicates the limiting precision with which Ktrans can be calculated. Conclusions: Current small-animal imaging systems and pulse sequences robust to motion artifacts have the capacity for reproducible quantitative acquisitions with DCE-MRI. In these circumstances, it is feasible to achieve a level of precision limited only by physiologic variability.
Experiments with central-limit properties of spatial samples from locally covariant random fields
Barringer, T.H.; Smith, T.E.
1992-01-01
When spatial samples are statistically dependent, the classical estimator of sample-mean standard deviation is well known to be inconsistent. For locally dependent samples, however, consistent estimators of sample-mean standard deviation can be constructed. The present paper investigates the sampling properties of one such estimator, designated as the tau estimator of sample-mean standard deviation. In particular, the asymptotic normality properties of standardized sample means based on tau estimators are studied in terms of computer experiments with simulated sample-mean distributions. The effects of both sample size and dependency levels among samples are examined for various values of tau (denoting the size of the spatial kernel for the estimator). The results suggest that even for small degrees of spatial dependency, the tau estimator exhibits significantly stronger normality properties than does the classical estimator of standardized sample means. © 1992.
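The idea behind a kernel-based estimator of this kind can be sketched in a few lines. The abstract does not give the tau estimator's exact form, so the construction below — restricting the empirical covariance sum to pairs of sites within distance tau, using a simple indicator kernel — is an illustrative stand-in, not the paper's definition:

```python
import numpy as np

def tau_estimator(values, coords, tau):
    """Kernel-based estimate of the standard deviation of the sample
    mean for spatially dependent data: cross-products of deviations
    are included only for pairs of sites within distance tau.
    (Indicator kernel; illustrative, not the paper's exact estimator.)"""
    x = np.asarray(values, float)
    c = np.asarray(coords, float)
    n = len(x)
    dev = x - x.mean()
    d = np.sqrt(((c[:, None, :] - c[None, :, :]) ** 2).sum(-1))
    kernel = (d <= tau).astype(float)            # 1 for nearby pairs, else 0
    var_mean = (kernel * np.outer(dev, dev)).sum() / n**2
    return np.sqrt(max(var_mean, 0.0))

def classical_estimator(values):
    """Classical (independence-assuming) sample-mean standard deviation."""
    x = np.asarray(values, float)
    return x.std(ddof=1) / np.sqrt(len(x))
```

With tau = 0 (and distinct site coordinates) only the diagonal terms survive, so the estimator collapses to the classical form up to the degrees-of-freedom convention; larger tau folds in local cross-covariances.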
Mendy, Angelico; Gasana, Janvier; Forno, Erick; Vieira, Edgar Ramos; Dowdye, Charissa
2012-05-01
Research on the respiratory effect of exposure to solder fumes in electronics workers has been conducted since the 1970s, but has yielded inconsistent results. The aim of this meta-analysis was to clarify the potential association. Effect sizes with corresponding 95% confidence intervals (CIs) for odds of respiratory symptoms related to soldering and spirometric parameters of solderers were extracted from seven studies and pooled to generate summary estimates and standardized mean differences in lung function measures between exposed persons and controls. Soldering was positively associated with wheeze after controlling for smoking (meta-odds ratio: 2.60, 95% CI: 1.46, 4.63) and with statistically significant reductions in forced expiratory volume in 1 s (FEV1) (-0.88%, 95% CI: -1.51, -0.26), forced vital capacity (FVC) (-0.64%, 95% CI: -1.18, -0.10), and FEV1/FVC (-0.35%, 95% CI: -0.65, -0.05). However, lung function parameters of solderers were within normal ranges [pooled mean FEV1: 97.85 (as percent of predicted), 95% CI: 94.70, 100.95, pooled mean FVC: 94.92 (as percent of predicted), 95% CI: 81.21, 108.64, and pooled mean FEV1/FVC: 86.5 (as percent), 95% CI: 78.01, 94.98]. Soldering may be a risk factor for wheeze, but may not be associated with a clinically significant impairment of lung function among electronics workers.
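The pooling of study-level odds ratios described here is standard fixed-effect inverse-variance meta-analysis on the log-odds scale. The sketch below is a generic illustration (not the authors' code) that recovers each study's standard error from its reported 95% CI:

```python
import numpy as np

def pool_odds_ratios(ors, ci_lo, ci_hi):
    """Fixed-effect inverse-variance pooling of study odds ratios.
    Standard errors are recovered from the 95% CIs on the log scale."""
    log_or = np.log(ors)
    se = (np.log(ci_hi) - np.log(ci_lo)) / (2 * 1.96)  # CI width -> SE
    w = 1.0 / se**2                                    # inverse-variance weights
    pooled = (w * log_or).sum() / w.sum()
    pooled_se = np.sqrt(1.0 / w.sum())
    ci = np.exp(pooled + np.array([-1.96, 1.96]) * pooled_se)
    return np.exp(pooled), ci
```

A single study is returned unchanged, and adding concordant studies narrows the pooled CI — the mechanism by which the meta-odds ratio of 2.60 above gains precision over any one study.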
Modeling the hypothalamus-pituitary-adrenal axis: A review and extension.
Hosseinichimeh, Niyousha; Rahmandad, Hazhir; Wittenborn, Andrea K
2015-10-01
Multiple models of the hypothalamus-pituitary-adrenal (HPA) axis have been developed to characterize the oscillations seen in the hormone concentrations and to examine HPA axis dysfunction. We reviewed the existing models, then replicated and compared five of them by finding their correspondence to a dataset consisting of ACTH and cortisol concentrations of 17 healthy individuals. We found that existing models use different feedback mechanisms, vary in the level of details and complexities, and offer inconsistent conclusions. None of the models fit the validation dataset well. Therefore, we re-calibrated the best performing model using partial calibration and extended the model by adding individual fixed effects and an exogenous circadian function. Our estimated parameters reduced the mean absolute percent error significantly and offer a validated reference model that can be used in diverse applications. Our analysis suggests that the circadian and ultradian cycles are not created endogenously by the HPA axis feedbacks, which is consistent with the recent literature on the circadian clock and HPA axis. Copyright © 2015 Elsevier Inc. All rights reserved.
Galloway, D.L.; Hudnut, K.W.; Ingebritsen, S.E.; Phillips, S.P.; Peltzer, G.; Rogez, F.; Rosen, P.A.
1998-01-01
Interferometric synthetic aperture radar (InSAR) has great potential to detect and quantify land subsidence caused by aquifer system compaction. InSAR maps with high spatial detail and resolution of range displacement (±10 mm in change of land surface elevation) were developed for a groundwater basin (∼10³ km²) in Antelope Valley, California, using radar data collected from the ERS-1 satellite. These data allow comprehensive comparison between recent (1993–1995) subsidence patterns and those detected historically (1926–1992) by more traditional methods. The changed subsidence patterns are generally compatible with recent shifts in land and water use. The InSAR-detected patterns are generally consistent with predictions based on a coupled model of groundwater flow and aquifer system compaction. The minor inconsistencies may reflect our imperfect knowledge of the distribution and properties of compressible sediments. When used in conjunction with coincident measurements of groundwater levels and other geologic information, InSAR data may be useful for constraining parameter estimates in simulations of aquifer system compaction.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Benyamin, David; Piran, Tsvi; Shaviv, Nir J.
The boron to carbon (B/C) and sub-Fe/Fe ratios provide an important clue to cosmic ray (CR) propagation within the Galaxy. These ratios estimate the grammage that the CRs traverse as they propagate from their sources to Earth. Attempts to explain these ratios within the standard CR propagation models require ad hoc modifications and, even with those, necessitate inconsistent grammages to explain both ratios. As an alternative, physically motivated model, we have proposed that CRs originate preferentially within the galactic spiral arms. CR propagation from dynamic spiral arms has important imprints on various secondary to primary ratios, such as the B/C ratio and the positron fraction. We use our spiral-arm diffusion model with the spallation network extended up to nickel to calculate the sub-Fe/Fe ratio. We show that without any additional parameters the spiral-arm model consistently explains both ratios with the same grammage, providing further evidence in favor of this model.
Rahbar, Mohammad H; Choi, Sangbum; Hong, Chuan; Zhu, Liang; Jeon, Sangchoon; Gardiner, Joseph C
2018-01-01
We propose a nonparametric shrinkage estimator for the median survival times from several independent samples of right-censored data, which combines the samples and hypothesis information to improve the efficiency. We compare the efficiency of the proposed shrinkage estimation procedure to those of the unrestricted estimator and the combined estimator through extensive simulation studies. Our results indicate that the performance of these estimators depends on the strength of homogeneity of the medians. When homogeneity holds, the combined estimator is the most efficient estimator. However, it becomes inconsistent when homogeneity fails. On the other hand, the proposed shrinkage estimator remains efficient. Its efficiency decreases as the survival medians deviate from equality, but is expected to remain at least as good as that of the unrestricted estimator. Our simulation studies also indicate that the proposed shrinkage estimator is robust to moderate levels of censoring. We demonstrate application of these methods to estimating median time for trauma patients to receive red blood cells in the Prospective Observational Multi-center Major Trauma Transfusion (PROMMTT) study.
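The shrinkage idea — pull each group's median toward the combined median by an amount that grows with the evidence for homogeneity — can be illustrated with a toy uncensored version. The weight construction below is a plausible stand-in invented for illustration; the paper's estimator handles right-censoring and bases the shrinkage on the hypothesis information, which this sketch does not:

```python
import numpy as np

def shrinkage_medians(samples, alpha=1.0):
    """Shrink each sample median toward the combined (pooled) median.
    The weight w in (0, 1] approaches 1 when the group medians agree
    (relative to sampling noise) and 0 when they are far apart.
    Illustrative, uncensored sketch only."""
    meds = np.array([np.median(s) for s in samples])
    pooled = np.median(np.concatenate(samples))
    spread = np.mean((meds - pooled) ** 2)                  # heterogeneity
    scale = np.mean([np.var(s) / len(s) for s in samples])  # sampling noise
    w = scale / (scale + alpha * spread + 1e-12)
    return w * pooled + (1 - w) * meds
```

When the samples are homogeneous the estimates collapse to the combined median (the efficient case); under heterogeneity each estimate stays close to its own sample median, mirroring the robustness property described above.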
NASA Astrophysics Data System (ADS)
Edwards, Brian J.
2002-05-01
Given the premise that a set of dynamical equations must possess a definite, underlying mathematical structure to ensure local and global thermodynamic stability, as has been well documented, several different models for describing liquid crystalline dynamics are examined with respect to said structure. These models, each derived during the past several years using a specific closure approximation for the fourth moment of the distribution function in Doi's rigid rod theory, are all shown to be inconsistent with this basic mathematical structure. The source of this inconsistency lies in Doi's expressions for the extra stress tensor and temporal evolution of the order parameter, which are rederived herein using a transformation that allows for internal compatibility with the underlying mathematical structure that is present on the distribution function level of description.
Specific gravity and wood moisture variation of white pine
Glenn L. Gammon
1969-01-01
A report on results of a study to develop a means for estimating specific gravity and wood moisture content of white pine. No strong relationships were found by using either the single or combined factors of age and dimensional stem characteristics. Inconsistent patterns of specific gravity and moisture over height in tree are graphically illustrated.
ERIC Educational Resources Information Center
Bourne, Compton; Dass, Anand
2003-01-01
Estimates private and social rates of return for university science and technology graduates in Trinidad and Tobago. Makes comparisons with other fields of study such as agriculture, natural sciences, engineering, and humanities. Concludes that rates of return are inconsistent with the allocative preferences of policymakers. (Authors/PKP)
Riley, Pete; Ben-Nun, Michal; Armenta, Richard; Linker, Jon A; Eick, Angela A; Sanchez, Jose L; George, Dylan; Bacon, David P; Riley, Steven
2013-01-01
Rapidly characterizing the amplitude and variability in transmissibility of novel human influenza strains as they emerge is a key public health priority. However, comparisons of early estimates of the basic reproduction number during the 2009 pandemic were challenging because of inconsistent data sources and methods. Here, we define and analyze influenza-like-illness (ILI) case data from 2009-2010 for the 50 largest spatially distinct US military installations (military population defined by zip code, MPZ). We used publicly available data from non-military sources to show that patterns of ILI incidence in many of these MPZs closely followed the pattern of their enclosing civilian population. After characterizing the broad patterns of incidence (e.g. single-peak, double-peak), we defined a parsimonious SIR-like model with two possible values for intrinsic transmissibility across three epochs. We fitted the parameters of this model to data from all 50 MPZs, finding them to be reasonably well clustered with a median (mean) value of 1.39 (1.57) and standard deviation of 0.41. An increasing temporal trend in transmissibility ([Formula: see text], p-value: 0.013) during the period of our study was robust to the removal of high transmissibility outliers and to the removal of the smaller 20 MPZs. Our results demonstrate the utility of rapidly available - and consistent - data from multiple populations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Y.; Liu, Z.; Zhang, S.
Parameter estimation provides a potentially powerful approach to reduce model bias for complex climate models. Here, in a twin experiment framework, the authors perform the first parameter estimation in a fully coupled ocean–atmosphere general circulation model using an ensemble coupled data assimilation system facilitated with parameter estimation. The authors first perform single-parameter estimation and then multiple-parameter estimation. In the case of the single-parameter estimation, the error of the parameter [solar penetration depth (SPD)] is reduced by over 90% after ~40 years of assimilation of the conventional observations of monthly sea surface temperature (SST) and salinity (SSS). The results of multiple-parameter estimation are less reliable than those of single-parameter estimation when only the monthly SST and SSS are assimilated. Assimilating additional observations of atmospheric data of temperature and wind improves the reliability of multiple-parameter estimation. The errors of the parameters are reduced by 90% in ~8 years of assimilation. Finally, the improved parameters also improve the model climatology. With the optimized parameters, the bias of the climatology of SST is reduced by ~90%. Altogether, this study suggests the feasibility of ensemble-based parameter estimation in a fully coupled general circulation model.
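The mechanics of ensemble-based parameter estimation — carry the uncertain parameter in each ensemble member and update it from its sample covariance with the predicted observations — can be shown on a toy problem. The sketch below uses a stochastic (perturbed-observation) ensemble Kalman update for a single scalar parameter of an invented linear model; the values, model, and ensemble size are purely illustrative and far simpler than a coupled GCM:

```python
import numpy as np

rng = np.random.default_rng(0)

# Truth: observations y = a_true * x + noise; we estimate the model
# parameter a by repeatedly assimilating noisy observations.
a_true, obs_sd, n_ens = 2.0, 0.1, 100
ens = rng.normal(0.5, 0.5, n_ens)          # prior parameter ensemble
a0_err = abs(ens.mean() - a_true)          # initial estimation error

for x in np.linspace(1.0, 3.0, 40):        # assimilation cycles
    y_obs = a_true * x + rng.normal(0, obs_sd)
    y_ens = ens * x                        # each member's predicted obs
    cov_ay = np.cov(ens, y_ens)[0, 1]      # parameter-observation covariance
    gain = cov_ay / (np.var(y_ens, ddof=1) + obs_sd**2)
    perturbed = y_obs + rng.normal(0, obs_sd, n_ens)  # perturbed observations
    ens = ens + gain * (perturbed - y_ens)
```

As in the single-parameter (SPD) experiment above, the ensemble-mean parameter error shrinks cycle by cycle as observations are assimilated; the update needs no adjoint of the model, only the ensemble covariance.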
Vrancken, Bram; Suchard, Marc A; Lemey, Philippe
2017-07-01
Analyses of virus evolution in known transmission chains have the potential to elucidate the impact of transmission dynamics on the viral evolutionary rate and its difference within and between hosts. Lin et al. (2015, Journal of Virology , 89/7: 3512-22) recently investigated the evolutionary history of hepatitis B virus in a transmission chain and postulated that the 'colonization-adaptation-transmission' model can explain the differential impact of transmission on synonymous and non-synonymous substitution rates. Here, we revisit this dataset using a full probabilistic Bayesian phylogenetic framework that adequately accounts for the non-independence of sequence data when estimating evolutionary parameters. Examination of the transmission chain data under a flexible coalescent prior reveals a general inconsistency between the estimated timings and clustering patterns and the known transmission history, highlighting the need to incorporate host transmission information in the analysis. Using an explicit genealogical transmission chain model, we find strong support for a transmission-associated decrease of the overall evolutionary rate. However, in contrast to the initially reported larger transmission effect on non-synonymous substitution rate, we find a similar decrease in both non-synonymous and synonymous substitution rates that cannot be adequately explained by the colonization-adaptation-transmission model. An alternative explanation may involve a transmission/establishment advantage of hepatitis B virus variants that have accumulated fewer within-host substitutions, perhaps by spending more time in the covalently closed circular DNA state between each round of viral replication. More generally, this study illustrates that ignoring phylogenetic relationships can lead to misleading evolutionary estimates.
Ong, Jue-Sheng; Hwang, Liang-Dar; Cuellar-Partida, Gabriel; Martin, Nicholas G; Chenevix-Trench, Georgia; Quinn, Michael C J; Cornelis, Marilyn C; Gharahkhani, Puya; Webb, Penelope M; MacGregor, Stuart
2018-04-01
Coffee consumption has been shown to be associated with various health outcomes in observational studies. However, evidence for its association with epithelial ovarian cancer (EOC) is inconsistent and it is unclear whether these associations are causal. We used single nucleotide polymorphisms associated with (i) coffee and (ii) caffeine consumption to perform Mendelian randomization (MR) on EOC risk. We conducted a two-sample MR using genetic data on 44 062 individuals of European ancestry from the Ovarian Cancer Association Consortium (OCAC), and combined instrumental variable estimates using a Wald-type ratio estimator. For all EOC cases, the causal odds ratio (COR) for genetically predicted consumption of one additional cup of coffee per day was 0.92 [95% confidence interval (CI): 0.79, 1.06]. The COR was 0.90 (95% CI: 0.73, 1.10) for high-grade serous EOC. The COR for genetically predicted consumption of an additional 80 mg caffeine was 1.01 (95% CI: 0.92, 1.11) for all EOC cases and 0.90 (95% CI: 0.73, 1.10) for high-grade serous cases. We found no evidence indicative of a strong association between EOC risk and genetically predicted coffee or caffeine levels. However, our estimates were not statistically inconsistent with earlier observational studies and we were unable to rule out small protective associations.
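The Wald-type ratio estimator named above has a simple closed form: the causal effect is the SNP-outcome association divided by the SNP-exposure association. The sketch below is a generic illustration with a first-order delta-method standard error (treating the exposure effect as fixed), not the OCAC analysis code:

```python
import numpy as np

def wald_ratio(beta_outcome, se_outcome, beta_exposure):
    """Wald-type ratio estimator for one genetic instrument:
    causal effect = (SNP-outcome effect) / (SNP-exposure effect).
    First-order delta-method SE, exposure effect treated as fixed."""
    est = beta_outcome / beta_exposure
    se = se_outcome / abs(beta_exposure)
    return est, se

def combine_ratios(ests, ses):
    """Inverse-variance weighted combination across instruments."""
    w = 1.0 / np.asarray(ses) ** 2
    est = (w * np.asarray(ests)).sum() / w.sum()
    return est, np.sqrt(1.0 / w.sum())
```

On the log-odds scale, exponentiating the combined estimate (scaled to one cup of coffee or 80 mg caffeine) gives a causal odds ratio of the kind reported above.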
Kim, Kio; Habas, Piotr A.; Rajagopalan, Vidya; Scott, Julia A.; Corbett-Detig, James M.; Rousseau, Francois; Barkovich, A. James; Glenn, Orit A.; Studholme, Colin
2012-01-01
A common solution to clinical MR imaging in the presence of large anatomical motion is to use fast multi-slice 2D studies to reduce slice acquisition time and provide clinically usable slice data. Recently, techniques have been developed which retrospectively correct large scale 3D motion between individual slices allowing the formation of a geometrically correct 3D volume from the multiple slice stacks. One challenge, however, in the final reconstruction process is the possibility of varying intensity bias in the slice data, typically due to the motion of the anatomy relative to imaging coils. As a result, slices which cover the same region of anatomy at different times may exhibit different sensitivity. This bias field inconsistency can induce artifacts in the final 3D reconstruction that can impact both clinical interpretation of key tissue boundaries and the automated analysis of the data. Here we describe a framework to estimate and correct the bias field inconsistency in each slice collectively across all motion corrupted image slices. Experiments using synthetic and clinical data show that the proposed method reduces intensity variability in tissues and improves the distinction between key tissue types.
NASA Astrophysics Data System (ADS)
Tong, M.; Xue, M.
2006-12-01
An important source of model error for convective-scale data assimilation and prediction is microphysical parameterization. This study investigates the possibility of estimating up to five fundamental microphysical parameters, which are closely involved in the definition of drop size distribution of microphysical species in a commonly used single-moment ice microphysics scheme, using radar observations and the ensemble Kalman filter method. The five parameters include the intercept parameters for rain, snow and hail/graupel, and the bulk densities of hail/graupel and snow. Parameter sensitivity and identifiability are first examined. The ensemble square-root Kalman filter (EnSRF) is employed for simultaneous state and parameter estimation. OSS experiments are performed for a model-simulated supercell storm, in which the five microphysical parameters are estimated individually or in different combinations starting from different initial guesses. When error exists in only one of the microphysical parameters, the parameter can be successfully estimated without exception. The estimation of multiple parameters is found to be less robust, with end results of estimation being sensitive to the realization of the initial parameter perturbation. This is believed to be because of the reduced parameter identifiability and the existence of non-unique solutions. The results of state estimation are, however, always improved when simultaneous parameter estimation is performed, even when the estimated parameters values are not accurate.
An Interoperability Consideration in Selecting Domain Parameters for Elliptic Curve Cryptography
NASA Technical Reports Server (NTRS)
Ivancic, Will (Technical Monitor); Eddy, Wesley M.
2005-01-01
Elliptic curve cryptography (ECC) will be an important technology for electronic privacy and authentication in the near future. There are many published specifications for elliptic curve cryptosystems, most of which contain detailed descriptions of the process for the selection of domain parameters. Selecting strong domain parameters ensures that the cryptosystem is robust to attacks. Due to a limitation in several published algorithms for doubling points on elliptic curves, some ECC implementations may produce incorrect, inconsistent, and incompatible results if domain parameters are not carefully chosen under a criterion that we describe. Few documents specify the addition or doubling of points in such a manner as to avoid this problematic situation. The safety criterion we present is not listed in any ECC specification we are aware of, although several other guidelines for domain selection are discussed in the literature. We provide a simple example of how a set of domain parameters not meeting this criterion can produce catastrophic results, and outline a simple means of testing curve parameters for interoperable safety over doubling.
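The doubling hazard described above is visible directly in the affine doubling formula: the slope (3x² + a) / (2y) is undefined when y = 0, i.e. at a point of order 2, whose double is the point at infinity. An implementation that does not handle this case divides by zero. The sketch below uses toy parameters (p = 23, a = −1, b = 0) chosen only to exhibit an order-2 point — far too small for real cryptography:

```python
# Affine point doubling on y^2 = x^3 + a*x + b over GF(p).
# Toy parameters for illustration: this curve has the order-2
# point (0, 0), exactly the case naive doubling code mishandles.
p, a = 23, -1

def double_point(P):
    """Double a point; None represents the point at infinity."""
    if P is None:              # 2 * infinity = infinity
        return None
    x, y = P
    if y % p == 0:             # order-2 point: slope undefined, 2P = infinity
        return None
    s = (3 * x * x + a) * pow(2 * y, -1, p) % p   # tangent slope
    x3 = (s * s - 2 * x) % p
    y3 = (s * (x - x3) - y) % p
    return (x3, y3)
```

Choosing domain parameters so that no low-order points can arise in normal operation (the interoperability criterion the paper describes) removes the need to rely on every implementation getting this special case right.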
Attack Detection in Sensor Network Target Localization Systems With Quantized Data
NASA Astrophysics Data System (ADS)
Zhang, Jiangfan; Wang, Xiaodong; Blum, Rick S.; Kaplan, Lance M.
2018-04-01
We consider a sensor network focused on target localization, where sensors measure the signal strength emitted from the target. Each measurement is quantized to one bit and sent to the fusion center. A general attack is considered at some sensors that attempts to cause the fusion center to produce an inaccurate estimation of the target location with a large mean-square-error. The attack is a combination of man-in-the-middle, hacking, and spoofing attacks that can effectively change both signals going into and coming out of the sensor nodes in a realistic manner. We show that the essential effect of attacks is to alter the estimated distance between the target and each attacked sensor to a different extent, giving rise to a geometric inconsistency among the attacked and unattacked sensors. Hence, with the help of two secure sensors, a class of detectors are proposed to detect the attacked sensors by scrutinizing the existence of the geometric inconsistency. We show that the false alarm and miss probabilities of the proposed detectors decrease exponentially as the number of measurement samples increases, which implies that for sufficiently large number of samples, the proposed detectors can identify the attacked and unattacked sensors with any required accuracy.
Braun, Fabian; Proença, Martin; Adler, Andy; Riedel, Thomas; Thiran, Jean-Philippe; Solà, Josep
2018-01-01
Cardiac output (CO) and stroke volume (SV) are parameters of key clinical interest. Many techniques exist to measure CO and SV, but are either invasive or insufficiently accurate in clinical settings. Electrical impedance tomography (EIT) has been suggested as a noninvasive measure of SV, but inconsistent results have been reported. Our goal is to determine the accuracy and reliability of EIT-based SV measurements, and whether advanced image reconstruction approaches can help to improve the estimates. Data were collected on ten healthy volunteers undergoing postural changes and exercise. To overcome the sensitivity to heart displacement and thorax morphology reported in previous work, we used a 3D EIT configuration with 2 planes of 16 electrodes and subject-specific reconstruction models. Various EIT-derived SV estimates were compared to reference measurements derived from the oxygen uptake. Results revealed a dramatic impact of posture on the EIT images. Therefore, the analysis was restricted to measurements in supine position under controlled conditions (low noise and stable heart and lung regions). In these measurements, amplitudes of impedance changes in the heart and lung regions could successfully be derived from EIT using ECG gating. However, despite a subject-specific calibration the heart-related estimates showed an error of 0.0 ± 15.2 mL for absolute SV estimation. For trending of relative SV changes, a concordance rate of 80.9% and an angular error of -1.0 ± 23.0° were obtained. These performances are insufficient for most clinical uses. Similar conclusions were derived from lung-related estimates. Our findings indicate that the key difficulty in EIT-based SV monitoring is that purely amplitude-based features are strongly influenced by other factors (such as posture, electrode contact impedance and lung or heart conductivity). All the data of the present study are made publicly available for further investigations.
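Concordance rate and angular error are standard trending metrics (four-quadrant and polar-plot analysis) for paired changes in stroke volume. The definitions below are common simplified forms written for illustration and may differ in detail from the paper's exact computation:

```python
import numpy as np

def trending_stats(delta_ref, delta_est, exclusion=0.0):
    """Four-quadrant concordance rate (% of paired changes with the
    same sign) and a polar-style angular error relative to the 45-degree
    line of perfect trending. Simplified illustrative definitions."""
    dr = np.asarray(delta_ref, float)
    de = np.asarray(delta_est, float)
    keep = np.abs(dr) > exclusion            # optional central exclusion zone
    dr, de = dr[keep], de[keep]
    concordance = 100.0 * np.mean(np.sign(dr) == np.sign(de))
    angles = np.degrees(np.arctan(de / dr)) - 45.0  # 0 deg = perfect trend
    return concordance, angles.mean()
```

Perfect trending gives 100% concordance and 0° mean angular error; the 80.9% concordance and −1.0 ± 23.0° reported above indicate correct average direction but wide scatter, which is why the authors judge the performance insufficient for clinical use.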
Challenges in Species Tree Estimation Under the Multispecies Coalescent Model
Xu, Bo; Yang, Ziheng
2016-01-01
The multispecies coalescent (MSC) model has emerged as a powerful framework for inferring species phylogenies while accounting for ancestral polymorphism and gene tree-species tree conflict. A number of methods have been developed in the past few years to estimate the species tree under the MSC. The full likelihood methods (including maximum likelihood and Bayesian inference) average over the unknown gene trees and accommodate their uncertainties properly but involve intensive computation. The approximate or summary coalescent methods are computationally fast and are applicable to genomic datasets with thousands of loci, but do not make efficient use of the information in the multilocus data. Most of them take the two-step approach of reconstructing the gene trees for multiple loci by phylogenetic methods and then treating the estimated gene trees as observed data, without accounting for their uncertainties appropriately. In this article we review the statistical nature of the species tree estimation problem under the MSC, and explore the conceptual issues and challenges of species tree estimation by focusing mainly on simple cases of three or four closely related species. We use mathematical analysis and computer simulation to demonstrate that large differences in statistical performance may exist between the two classes of methods. We illustrate that several counterintuitive behaviors may occur with the summary methods; these are due to inefficient use of information in the data and vanish when the data are analyzed using full-likelihood methods. Such behaviors include (i) unidentifiability of parameters in the model, (ii) inconsistency in the so-called anomaly zone, (iii) singularity on the likelihood surface, and (iv) deterioration of performance upon addition of more data.
We discuss the challenges and strategies of species tree inference for distantly related species when the molecular clock is violated, and highlight the need for improving the computational efficiency and model realism of the likelihood methods as well as the statistical efficiency of the summary methods. PMID:27927902
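For the three-species case that the review focuses on, a summary method reduces to counting rooted triplets: under the MSC the gene-tree topology matching the species tree is the most probable one, so a majority vote across loci is a consistent estimator as the number of loci grows. A toy sketch (the string encoding of trees is purely illustrative):

```python
from collections import Counter

def majority_triplet(gene_trees):
    """Summary-style species-tree estimate for three species: pick the
    most frequent rooted triplet among the input gene trees, given as
    strings like '((A,B),C)'. Under the MSC with three species, the
    triplet matching the species tree is the most probable gene tree,
    so the majority vote is statistically consistent as loci accumulate.
    """
    counts = Counter(gene_trees)
    tree, _ = counts.most_common(1)[0]
    return tree, counts
```

Note that this vote discards branch-length information in the gene trees, which is one concrete sense in which summary methods use the data less efficiently than full-likelihood methods.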
Augustine, Adam A; Hemenover, Scott H
2013-05-01
In their examination of the effectiveness of affect regulation strategies, Webb, Miles, and Sheeran (2012) offered the results of a broad meta-analysis of studies on regulatory interventions. Their analysis provides an alternative to our earlier, more focused meta-analysis of the affect regulation literature (Augustine & Hemenover, 2009). Unfortunately, there are a number of errors and omissions in this new meta-analysis that could lead to misconceptions regarding both our previous work and the state of the affect regulation literature. In this comment, we examine the impact of methodological issues, inconsistent inclusion criteria, variance in manipulations, and what we perceive to be a subjective and inconsistent selection of effect sizes on the accuracy and generalizability of Webb and colleagues' estimates of affect regulation strategy effectiveness.
NASA Astrophysics Data System (ADS)
Dugger, A. L.; Rafieeinasab, A.; Gochis, D.; Yu, W.; McCreight, J. L.; Karsten, L. R.; Pan, L.; Zhang, Y.; Sampson, K. M.; Cosgrove, B.
2016-12-01
Evaluation of physically-based hydrologic models applied across large regions can provide insight into dominant controls on runoff generation and how these controls vary based on climatic, biological, and geophysical setting. To make this leap, however, we need to combine knowledge of regional forcing skill, model parameter and physics assumptions, and hydrologic theory. If we can successfully do this, we also gain information on how well our current approximations of these dominant physical processes are represented in continental-scale models. In this study, we apply this diagnostic approach to a 5-year retrospective implementation of the WRF-Hydro community model configured for the U.S. National Weather Service's National Water Model (NWM). The NWM is a water prediction model in operation over the contiguous U.S. as of summer 2016, providing real-time streamflow estimates and forecasts out to 30 days across 2.7 million stream reaches, as well as distributed snowpack, soil moisture, and evapotranspiration at 1-km resolution. The WRF-Hydro system not only performs the standard simulation of vertical energy and water fluxes common in continental-scale models, but augments these processes with lateral redistribution of surface and subsurface water, simple groundwater dynamics, and channel routing. We evaluate 5 years of NLDAS-2 precipitation forcing and WRF-Hydro streamflow and evapotranspiration simulation across the contiguous U.S. at a range of spatial (gage, basin, ecoregion) and temporal (hourly, daily, monthly) scales and look for consistencies and inconsistencies in performance in terms of bias, timing, and extremes. Leveraging results from other CONUS-scale hydrologic evaluation studies, we translate our performance metrics into a matrix of likely dominant process controls and error sources (forcings, parameter estimates, and model physics).
We test our hypotheses in a series of controlled model experiments on a subset of representative basins from distinct "problem" environments (Southeast U.S. Coastal Plain, Central and Coastal Texas, Northern Plains, and Arid Southwest). The results from these longer-term model diagnostics will inform future improvements in forcing bias correction, parameter calibration, and physics developments in the National Water Model.
Parameter estimation with Sandage-Loeb test
DOE Office of Scientific and Technical Information (OSTI.GOV)
Geng, Jia-Jia; Zhang, Jing-Fei; Zhang, Xin, E-mail: gengjiajia163@163.com, E-mail: jfzhang@mail.neu.edu.cn, E-mail: zhangxin@mail.neu.edu.cn
2014-12-01
The Sandage-Loeb (SL) test directly measures the expansion rate of the universe in the redshift range 2 ≲ z ≲ 5 by detecting redshift drift in the spectra of the Lyman-α forest of distant quasars. We discuss the impact of the future SL test data on parameter estimation for the ΛCDM, the wCDM, and the w_0w_aCDM models. To avoid the potential inconsistency with other observational data, we take the best-fitting dark energy model constrained by the current observations as the fiducial model to produce 30 mock SL test data. The SL test data provide an important supplement to the other dark energy probes, since they are extremely helpful in breaking the existing parameter degeneracies. We show that the strong degeneracy between Ω_m and H_0 in all the three dark energy models is well broken by the SL test. Compared to the current combined data of type Ia supernovae, baryon acoustic oscillation, cosmic microwave background, and Hubble constant, the 30-yr observation of SL test could improve the constraints on Ω_m and H_0 by more than 60% for all the three models. But the SL test can only moderately improve the constraint on the equation of state of dark energy. We show that a 30-yr observation of SL test could help improve the constraint on constant w by about 25%, and improve the constraints on w_0 and w_a by about 20% and 15%, respectively. We also quantify the constraining power of the SL test in the future high-precision joint geometric constraints on dark energy. The mock future supernova and baryon acoustic oscillation data are simulated based on the space-based project JDEM. We find that the 30-yr observation of SL test would help improve the measurement precision of Ω_m, H_0, and w_a by more than 70%, 20%, and 60%, respectively, for the w_0w_aCDM model.
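The SL signal itself follows from a short derivation: for a source at redshift z, the drift accumulated over an observer time Δt is Δz = [(1+z)H_0 − H(z)]Δt, usually quoted as a velocity shift Δv = cΔz/(1+z). A numpy sketch for flat ΛCDM (the parameter defaults are illustrative, not the paper's fiducial model):

```python
import numpy as np

C_KM_S = 299792.458  # speed of light, km/s

def redshift_drift(z, omega_m=0.3, h0=67.0, years=30.0):
    """Sandage-Loeb signal for flat LambdaCDM: the spectroscopic velocity
    shift Delta v = c * H0 * dt * [1 - E(z)/(1+z)] accumulated over
    `years`, where E(z) = H(z)/H0. Returns Delta v in cm/s, the natural
    scale of the measurement.
    """
    E = np.sqrt(omega_m * (1.0 + z) ** 3 + (1.0 - omega_m))
    h0_per_s = h0 / 3.0857e19        # km/s/Mpc -> 1/s
    dt = years * 3.1557e7            # years -> s
    dv_km_s = C_KM_S * h0_per_s * dt * (1.0 - E / (1.0 + z))
    return dv_km_s * 1.0e5           # km/s -> cm/s
```

At z = 4 with Ω_m = 0.3, a 30-yr baseline gives a shift of roughly −15 cm/s, which sets the scale of the spectroscopic precision the test demands.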
Assessment of predictive models for chlorophyll-a concentration of a tropical lake
2011-01-01
Background This study assesses four predictive ecological models: Fuzzy Logic (FL), Recurrent Artificial Neural Network (RANN), Hybrid Evolutionary Algorithm (HEA), and multiple linear regression (MLR), used to forecast chlorophyll-a concentration from limnological data collected from 2001 through 2004 at the unstratified, shallow, oligotrophic to mesotrophic tropical Putrajaya Lake (Malaysia). Performances of the models are assessed using the Root Mean Square Error (RMSE), the correlation coefficient (r), and the Area Under the Receiver Operating Characteristic (ROC) Curve (AUC). Chlorophyll-a has been used to estimate algal biomass in aquatic ecosystems because it is common to most algae, and algal biomass indicates the trophic status of a water body. Chlorophyll-a, therefore, is an effective indicator for monitoring eutrophication, a common problem of lakes and reservoirs all over the world. Assessment of these predictive models is a necessary step towards developing a reliable algorithm to estimate chlorophyll-a concentration for eutrophication management of tropical lakes. Results The same data set was used to develop all models, divided into training and testing subsets to avoid bias in the results. The FL and RANN models were developed using parameters selected through sensitivity analysis: water temperature, pH, dissolved oxygen, ammonia nitrogen, nitrate nitrogen, and Secchi depth. Dissolved oxygen, selected through a stepwise procedure, was used to develop the MLR model. The HEA model used parameters selected by a genetic algorithm (GA): pH, Secchi depth, dissolved oxygen, and nitrate nitrogen. The RMSE, r, and AUC values were (4.60, 0.5, 0.76) for the MLR model, (4.49, 0.6, 0.84) for FL, (4.28, 0.7, 0.79) for RANN, and (4.27, 0.7, 0.82) for HEA.
The performance inconsistencies among the four models arise from the different ways performance was measured: RMSE is based on the magnitude of prediction error, whereas AUC is based on a binary classification task. Conclusions Overall, HEA produced the best performance in terms of RMSE, r, and AUC values, followed by FL, RANN, and MLR. PMID:22372859
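The three performance criteria contrasted here are straightforward to compute; a self-contained numpy sketch (illustrative only, since the study's AUC additionally requires binarizing chlorophyll-a against a threshold, which is not reproduced here):

```python
import numpy as np

def rmse(obs, pred):
    """Root mean square prediction error."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return np.sqrt(np.mean((obs - pred) ** 2))

def pearson_r(obs, pred):
    """Pearson correlation coefficient between observed and predicted."""
    return np.corrcoef(obs, pred)[0, 1]

def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic: the
    probability that a randomly chosen positive case outranks a randomly
    chosen negative one (ties count one half)."""
    labels = np.asarray(labels)
    scores = np.asarray(scores, float)
    pos, neg = scores[labels == 1], scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))
```

The contrast in the abstract is visible directly in the definitions: RMSE penalizes the size of each error, while AUC only cares about the ranking of cases either side of a class boundary.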
Attitude determination and parameter estimation using vector observations - Theory
NASA Technical Reports Server (NTRS)
Markley, F. Landis
1989-01-01
Procedures for attitude determination based on Wahba's loss function are generalized to include the estimation of parameters other than the attitude, such as sensor biases. Optimization with respect to the attitude is carried out using the q-method, which does not require an a priori estimate of the attitude. Optimization with respect to the other parameters employs an iterative approach, which does require an a priori estimate of these parameters. Conventional state estimation methods require a priori estimates of both the parameters and the attitude, while the algorithm presented in this paper always computes the exact optimal attitude for given values of the parameters. Expressions for the covariance of the attitude and parameter estimates are derived.
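The q-method cited above solves Wahba's problem by rewriting the loss as a quadratic form in the attitude quaternion, whose optimum is the dominant eigenvector of a 4x4 matrix built from the vector observations. A minimal numpy sketch of that attitude-only step (the quaternion is returned vector-part first by choice; the paper's iterative loop over the other parameters is not shown):

```python
import numpy as np

def q_method(b_vecs, r_vecs, weights):
    """Davenport q-method: the optimal attitude quaternion for Wahba's
    loss is the eigenvector belonging to the largest eigenvalue of the
    4x4 K matrix assembled from weighted body/reference vector pairs.

    b_vecs, r_vecs: sequences of unit vectors in body and reference
    frames; returns the quaternion as (x, y, z, w).
    """
    B = sum(w * np.outer(b, r) for w, b, r in zip(weights, b_vecs, r_vecs))
    S = B + B.T
    sigma = np.trace(B)
    # z is the weighted sum of cross products b x r.
    z = np.array([B[1, 2] - B[2, 1], B[2, 0] - B[0, 2], B[0, 1] - B[1, 0]])
    K = np.zeros((4, 4))
    K[:3, :3] = S - sigma * np.eye(3)
    K[:3, 3] = z
    K[3, :3] = z
    K[3, 3] = sigma
    vals, vecs = np.linalg.eigh(K)
    q = vecs[:, np.argmax(vals)]  # dominant eigenvector
    return q / np.linalg.norm(q)
```

As the abstract notes, this step needs no a priori attitude: the eigendecomposition delivers the exact optimum for whatever values the other parameters currently hold.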
Correlates of individual, and age-related, differences in short-term learning.
Zhang, Zhiyong; Davis, Hasker P; Salthouse, Timothy A; Tucker-Drob, Elliot M
2007-07-01
Latent growth models were applied to data on multitrial verbal and spatial learning tasks from two independent studies. Although significant individual differences in both initial level of performance and subsequent learning were found in both tasks, age differences were found only in mean initial level, and not in mean learning. In neither task was fluid or crystallized intelligence associated with learning. Although there were moderate correlations among the level parameters across the verbal and spatial tasks, the learning parameters were not significantly correlated with one another across task modalities. These results are inconsistent with the existence of a general (e.g., material-independent) learning ability.
Yobbi, D.K.
2000-01-01
A nonlinear least-squares regression technique for estimation of ground-water flow model parameters was applied to an existing model of the regional aquifer system underlying west-central Florida. The regression technique minimizes the differences between measured and simulated water levels. Regression statistics, including parameter sensitivities and correlations, were calculated for reported parameter values in the existing model. Optimal parameter values for selected hydrologic variables of interest were estimated by nonlinear regression; they range from about 0.01 times to about 140 times the reported values. Independently estimating all parameters by nonlinear regression was impossible, given the existing zonation structure and number of observations, because of parameter insensitivity and correlation. Although the model yields parameter values similar to those estimated by other methods and reproduces the measured water levels reasonably accurately, a simpler parameter structure should be considered. Possible ways to improve the model calibration are to: (1) modify the defined parameter-zonation structure by omitting and/or combining parameters to be estimated; (2) carefully eliminate observation data based on evidence that they are likely to be biased; (3) collect additional water-level data; (4) assign values to insensitive parameters; and (5) estimate the most sensitive parameters first, then, using the optimized values for these parameters, estimate the entire data set.
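The regression idea above can be illustrated with a generic Gauss-Newton loop; the ground-water flow simulator is stood in for by an arbitrary user-supplied model function, and the finite-difference Jacobian mirrors a calibration code without analytic sensitivities (a sketch, not the study's actual regression code):

```python
import numpy as np

def gauss_newton(model, p0, observed, iters=20, eps=1e-6):
    """Nonlinear least squares by Gauss-Newton: adjust parameters p so
    that model(p) matches the observed values (e.g. water levels).
    The Jacobian is formed by forward finite differences.
    """
    p = np.asarray(p0, float).copy()
    for _ in range(iters):
        r = observed - model(p)          # residuals at current estimate
        J = np.empty((r.size, p.size))
        for j in range(p.size):
            dp = np.zeros_like(p)
            dp[j] = eps * max(1.0, abs(p[j]))
            J[:, j] = (model(p + dp) - model(p)) / dp[j]
        # Solve J step ~= r in the least-squares sense and update.
        step, *_ = np.linalg.lstsq(J, r, rcond=None)
        p += step
        if np.linalg.norm(step) < 1e-10:
            break
    return p
```

The insensitivity and correlation problems the abstract describes show up here concretely: a near-zero Jacobian column (insensitive parameter) or two nearly proportional columns (correlated parameters) make the least-squares step ill-determined.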
Kinematic modeling of a 7-degree of freedom spatial hybrid manipulator for medical surgery.
Singh, Amanpreet; Singla, Ekta; Soni, Sanjeev; Singla, Ashish
2018-01-01
The prime objective of this work is to deal with the kinematics of spatial hybrid manipulators. In this direction, in 1955, Denavit and Hartenberg proposed a consistent and concise method, known as the D-H parameters method, to deal with the kinematics of open serial chains. A review of the literature shows that the D-H parameter method is widely used to model manipulators consisting of lower pairs. However, the method leads to ambiguities when applied to closed-loop, tree-like, and hybrid manipulators. Furthermore, in the absence of any direct method to model closed-loop, tree-like, and hybrid manipulators, revisions of the method have been proposed from time to time by different researchers. One such revision, using the concept of dummy frames, has successfully been proposed and implemented by the authors on spatial hybrid manipulators. In that work, the authors addressed the orientational inconsistency of the D-H parameter method, restricted to body-attached frames only. In the current work, the condition of body-attached frames is relaxed and spatial frame attachment is considered to derive the kinematic model of a 7-degree-of-freedom spatial hybrid robotic arm, along with the development of closed-loop constraints. The new kinematic model has been validated with the help of a prototype of this 7-degree-of-freedom arm, which is being developed at the Council of Scientific & Industrial Research-Central Scientific Instruments Organisation, Chandigarh, to aid the surgeon during medical surgical tasks. Furthermore, the developed kinematic model is used to derive the first column of the Jacobian matrix, which provides an estimate of the tip velocity of the 7-degree-of-freedom manipulator when the first joint velocity is known.
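The classic D-H convention the paper builds on encodes each link as four parameters (theta, d, a, alpha) and one homogeneous transform; chaining those transforms yields the forward kinematics of an open serial chain, the case the method handles without ambiguity. A sketch (the paper's dummy frames and closed-loop constraints are not reproduced):

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Classic Denavit-Hartenberg link transform: rotate theta about z,
    translate d along z, translate a along x, rotate alpha about x."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.,       sa,       ca,      d],
        [0.,       0.,       0.,     1.],
    ])

def forward_kinematics(dh_rows):
    """Chain the link transforms of a serial arm; returns the base-to-tip
    homogeneous pose for a list of (theta, d, a, alpha) rows."""
    T = np.eye(4)
    for row in dh_rows:
        T = T @ dh_transform(*row)
    return T
```

For a planar two-link arm with unit link lengths and both joints at zero, the tip sits at (2, 0, 0); bending the first joint by 90 degrees moves it to (0, 2, 0).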
NASA Astrophysics Data System (ADS)
Li, Xiaojun; Li, Yandong; Chang, Ching-Fu; Tan, Benjamin; Chen, Ziyang; Sege, Jon; Wang, Changhong; Rubin, Yoram
2018-01-01
Modeling of uncertainty associated with subsurface dynamics has long been a major research topic, and its significance is widely recognized for real-life applications. Despite the huge effort invested in the area, major obstacles remain on the way from theory to application. Particularly problematic is the confusion between modeling uncertainty and modeling spatial variability, which translates into a misconception, in fact an inconsistency: it suggests that the two are equivalent and, as such, that both require a lot of data. This paper investigates this challenge against the backdrop of a 7-km-long deep underground tunnel in China, where environmental impacts are of major concern. We approach the data challenge by pursuing a new concept for Rapid Impact Modeling (RIM), which bypasses altogether the need to estimate posterior distributions of model parameters, focusing instead on detailed stochastic modeling of impacts, conditional on all information available, including prior, ex-situ information as well as in-situ measurements. A foundational element of RIM is the construction of informative priors for target parameters using ex-situ data, relying on ensembles of well-documented sites pre-screened for geological and hydrological similarity to the target site. The ensembles are built around two sets of similarity criteria: a physically based set and an additional set covering epistemic criteria. In another variation on common Bayesian practice, we update the priors to obtain conditional distributions of the target (environmental impact) dependent variables rather than the hydrological variables. This recognizes that goal-oriented site characterization is in many cases more useful in applications than parameter-oriented characterization.
Garcia-Hermoso, A; Agostinis-Sobrinho, C; Mota, J; Santos, R M; Correa-Bautista, J E; Ramírez-Vélez, R
2017-06-01
Studies in the paediatric population have shown inconsistent associations between cardiorespiratory fitness and inflammation independently of adiposity. The purpose of this study was (i) to analyse the combined association of cardiorespiratory fitness and adiposity with high-sensitivity C-reactive protein (hs-CRP), and (ii) to determine whether adiposity acts as a mediator of the association between cardiorespiratory fitness and hs-CRP in children and adolescents. This cross-sectional study included 935 (54.7% girls) healthy children and adolescents from Bogotá, Colombia. The 20 m shuttle run test was used to estimate cardiorespiratory fitness. We assessed the following adiposity parameters: body mass index, waist circumference, fat mass index, and the sum of subscapular and triceps skinfold thicknesses. High-sensitivity assays were used to obtain hs-CRP. Linear regression models were fitted, following the Baron and Kenny procedure, to examine whether the association between cardiorespiratory fitness and hs-CRP was mediated by each of the adiposity parameters. Lower levels of hs-CRP were associated with the most favourable schoolchildren profiles (high cardiorespiratory fitness + low adiposity) (p for trend <0.001 for all four adiposity parameters), compared with unfit and overweight (low cardiorespiratory fitness + high adiposity) counterparts. The linear regression models suggest a full mediation by adiposity of the association between cardiorespiratory fitness and hs-CRP levels. Our findings emphasize the importance of obesity prevention in childhood and suggest that high levels of cardiorespiratory fitness may not counteract the negative consequences ascribed to adiposity on hs-CRP.
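The Baron and Kenny procedure referenced above amounts to a short sequence of linear regressions. A numpy sketch of those steps (the variable roles follow the study, but the helper function and data layout are illustrative):

```python
import numpy as np

def ols_slope(y, *xs):
    """Least-squares fit of y on the given predictors plus an intercept;
    returns the coefficient of the first predictor."""
    X = np.column_stack([np.ones(len(y))] + [np.asarray(x, float) for x in xs])
    beta, *_ = np.linalg.lstsq(X, np.asarray(y, float), rcond=None)
    return beta[1]

def baron_kenny(x, m, y):
    """Baron-Kenny mediation steps for predictor x (e.g. fitness),
    mediator m (e.g. adiposity) and outcome y (e.g. hs-CRP):
      c  - total effect of x on y,
      a  - effect of x on the mediator,
      b  - effect of the mediator on y, controlling for x,
      c' - direct effect of x on y, controlling for the mediator.
    Full mediation is suggested when c is sizeable but c' shrinks to ~0.
    """
    return {
        "c": ols_slope(y, x),
        "a": ols_slope(m, x),
        "b": ols_slope(y, m, x),
        "c_prime": ols_slope(y, x, m),
    }
```

In the synthetic example below the outcome depends on the predictor only through the mediator, so the direct effect c' collapses to zero, the pattern the study reports for adiposity.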
Contact Versus Non-Contact Measurement of a Helicopter Main Rotor Composite Blade
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luczak, Marcin; Dziedziech, Kajetan; Peeters, Bart
2010-05-28
The dynamic characterization of lightweight structures is particularly complex as the impact of the weight of sensors and instrumentation (cables, mounting of exciters...) can distort the results. Varying mass loading or constraint effects between partial measurements may determine several errors on the final conclusions. Frequency shifts can lead to erroneous interpretations of the dynamics parameters. Typically these errors remain limited to a few percent. Inconsistent data sets however can result in major processing errors, with all related consequences towards applications based on the consistency assumption, such as global modal parameter identification, model-based damage detection and FRF-based matrix inversion in substructuring, load identification and transfer path analysis [1]. This paper addresses the subject of accuracy in the context of the measurement of the dynamic properties of a particular lightweight structure. It presents a comprehensive comparative study between the use of accelerometer, laser vibrometer (scanning LDV) and PU-probe (acoustic particle velocity and pressure) measurements to measure the structural responses, with as final aim the comparison of modal model quality assessment. The object of the investigation is a composite material blade from the main rotor of a helicopter. The presented results are part of an extensive test campaign performed with application of SIMO, MIMO, random and harmonic excitation, and the use of the mentioned contact and non-contact measurement techniques. The advantages and disadvantages of the applied instrumentation are discussed. Presented are real-life measurement problems related to the different set up conditions. Finally an analysis of estimated models is made in view of assessing the applicability of the various measurement approaches for successful fault detection based on modal parameters observation as well as in uncertain non-deterministic numerical model updating.
NASA Astrophysics Data System (ADS)
Laha, Sibasish; Guainazzi, Matteo; Dewangan, Gulab C.; Chakravorty, Susmita; Kembhavi, Ajit K.
2014-07-01
We present results from a homogeneous analysis of the broad-band 0.3-10 keV CCD-resolution spectra as well as of the soft X-ray high-resolution grating spectra of a hard X-ray flux-limited sample of 26 Seyfert galaxies observed with XMM-Newton. Our goal is to characterize warm absorbers (WAs) along the line of sight to the active nucleus. We significantly detect WAs in 65 per cent of the sample sources. Our results are consistent with WAs being present in at least half of the Seyfert galaxies in the nearby Universe, in agreement with previous estimates. We find a gap in the distribution of the ionization parameter in the range 0.5 < log ξ < 1.5 which we interpret as a thermally unstable region for WA clouds. This may indicate that the WA flow is probably constituted by a clumpy distribution of discrete clouds rather than a continuous medium. The distribution of the WA column densities for the sources with broad Fe Kα lines is similar to that for sources which do not have broadened emission lines. Therefore, the detected broad Fe Kα emission lines are bona fide and not artefacts of ionized absorption in the soft X-rays. The WA parameters show no correlation among themselves, with the exception of the ionization parameter versus column density. The shallow slope of the log ξ versus log v_out linear regression (0.12 ± 0.03) is inconsistent with the scaling laws predicted by radiation- or magnetohydrodynamic-driven winds. Our results also suggest that WAs and ultra-fast outflows do not represent extreme manifestations of the same astrophysical system.
An improved method for nonlinear parameter estimation: a case study of the Rössler model
NASA Astrophysics Data System (ADS)
He, Wen-Ping; Wang, Liu; Jiang, Yun-Di; Wan, Shi-Quan
2016-08-01
Parameter estimation is an important research topic in nonlinear dynamics. Based on the evolutionary algorithm (EA), Wang et al. (2014) presented a new scheme for nonlinear parameter estimation, and numerical tests indicate that its estimation precision is satisfactory. However, the convergence rate of the EA is relatively slow when multiple unknown parameters in a multidimensional dynamical system are estimated simultaneously. To solve this problem, an improved method for parameter estimation of nonlinear dynamical equations is provided in the present paper. The main idea of the improved scheme is to use the known time series of all of the components of the dynamical equations to estimate the parameters of a single component one at a time, instead of estimating all of the parameters of all of the components simultaneously. Thus, we can estimate all of the parameters stage by stage. The performance of the improved method was tested on a classic chaotic system, the Rössler model. The numerical tests show that the amended parameter estimation scheme can greatly improve the searching efficiency and that there is a significant increase in the convergence rate of the EA, particularly for multiparameter estimation in multidimensional dynamical equations. Moreover, the results indicate that the accuracy of parameter estimation and the CPU time consumed by the presented method have no obvious dependence on the sample size.
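The staged idea can be seen in stripped-down form on the Rössler system: since dy/dt = x + a*y, knowing the time series of all components lets a be estimated from the y-component equation alone. For illustration the sketch below replaces the paper's evolutionary search with a direct least-squares solve on finite-difference derivatives:

```python
import numpy as np

def rossler_deriv(state, a, b, c):
    """Rossler equations: dx=-y-z, dy=x+a*y, dz=b+z*(x-c)."""
    x, y, z = state
    return np.array([-y - z, x + a * y, b + z * (x - c)])

def simulate(a, b, c, n=2000, dt=0.01, state=(1.0, 1.0, 1.0)):
    """Integrate the Rossler system with 4th-order Runge-Kutta."""
    s = np.array(state, float)
    traj = np.empty((n, 3))
    for i in range(n):
        traj[i] = s
        k1 = rossler_deriv(s, a, b, c)
        k2 = rossler_deriv(s + 0.5 * dt * k1, a, b, c)
        k3 = rossler_deriv(s + 0.5 * dt * k2, a, b, c)
        k4 = rossler_deriv(s + dt * k3, a, b, c)
        s = s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return traj

def estimate_a(traj, dt):
    """Stage-wise estimation using the y-component alone: dy/dt = x + a*y,
    so a is the least-squares slope of (dy/dt - x) against y, computed
    from the known series of all three components."""
    dy = np.gradient(traj[:, 1], dt)
    target = dy - traj[:, 0]
    return float(np.sum(target * traj[:, 1]) / np.sum(traj[:, 1] ** 2))
```

The same trick applies to the z-equation, where b and c enter linearly, which is why estimating the parameters of one component at a time can be so much cheaper than a joint search.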
NASA Astrophysics Data System (ADS)
Hendricks Franssen, H. J.; Post, H.; Vrugt, J. A.; Fox, A. M.; Baatz, R.; Kumbhar, P.; Vereecken, H.
2015-12-01
Estimation of net ecosystem exchange (NEE) by land surface models is strongly affected by uncertain ecosystem parameters and initial conditions. A possible approach is to estimate plant functional type (PFT) specific parameters at sites with measurement data such as NEE, and to apply those parameters at other sites with the same PFT and no measurements. This upscaling strategy was evaluated in this work for sites in Germany and France. Ecosystem parameters and initial conditions were estimated with NEE time series of one year in length, or of only one season. The DREAM(zs) algorithm was used for the estimation of parameters and initial conditions. DREAM(zs) is not limited to Gaussian distributions and can condition on long time series of measurement data simultaneously. It was used in combination with the Community Land Model (CLM) v4.5. Parameter estimates were evaluated by model predictions at the same site for an independent verification period. In addition, the parameter estimates were evaluated at other, independent sites situated >500 km away with the same PFT. The main conclusions are: (i) simulations with estimated parameters reproduced the NEE measurement data better in the verification periods, including the annual NEE sum (23% improvement), the annual NEE cycle, and the average diurnal NEE course (error reduction by a factor of 1.6); (ii) estimated parameters based on seasonal NEE data outperformed estimated parameters based on yearly data; (iii) those seasonal parameters were also often significantly different from their yearly equivalents; (iv) estimated parameters were significantly different if initial conditions were estimated together with the parameters. We conclude that estimated PFT-specific parameters significantly improve land surface model predictions at independent verification sites and for independent verification periods, demonstrating their potential for upscaling.
However, the simulation results also indicate that the estimated parameters may be masking other model errors, which would imply that their application at climatic time scales would not improve model predictions. A central question is whether the integration of many different data streams (e.g., biomass, remotely sensed LAI) could solve the problems indicated here.
Worldwide Historical Estimates of Leaf Area Index, 1932-2000
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scurlock, JMO
2002-02-06
Approximately 1000 published estimates of leaf area index (LAI) from nearly 400 unique field sites, covering the period 1932-2000, have been compiled into a single data set. LAI is a key parameter for global and regional models of biosphere/atmosphere exchange of carbon dioxide, water vapor, and other materials. It also plays an integral role in determining the energy balance of the land surface. This data set provides a benchmark of typical values and ranges of LAI for a variety of biomes and land cover types, in support of model development and validation of satellite-derived remote sensing estimates of LAI and other vegetation parameters. The LAI data are linked to a bibliography of over 300 original source references. These historic LAI data are mostly from natural and seminatural (managed) ecosystems, although some agricultural estimates are also included. Although methodologies for determining LAI have changed over the decades, it is useful to represent the inconsistencies (e.g., in maximum value reported for a particular biome) that are actually found in the scientific literature. Needleleaf (coniferous) forests are by far the most commonly measured biome/land cover types in this compilation, with 22% of the measurements from temperate evergreen needleleaf forests, and boreal evergreen needleleaf forests and crops the next most common (about 9% each). About 40% of the records in the data set were published in the past 10 years (1991-2000), with a further 20% collected between 1981 and 1990. Mean LAI (± standard deviation), distributed between 15 biome/land cover classes, ranged from 1.31 ± 0.85 for deserts to 8.72 ± 4.32 for tree plantations, with evergreen forests (needleleaf and broadleaf) displaying the highest LAI among the natural terrestrial vegetation classes.
We have identified statistical outliers in this data set, both globally and according to the different biome/land cover classes, but despite some decreases in mean LAI values reported, our overall conclusions remained the same. This report documents the development of this data set, its contents, and its availability on the Internet from the Oak Ridge National Laboratory Distributed Active Archive Center for Biogeochemical Dynamics. Caution is advised in using these data, which were collected using a wide range of methodologies and assumptions that may not allow comparisons among sites.
NASA Technical Reports Server (NTRS)
Mattson, D. L.
1975-01-01
The effect of prolonged angular acceleration on choice reaction time to an accelerating visual stimulus was investigated, with 10 commercial airline pilots serving as subjects. The pattern of reaction times during and following acceleration was compared with the pattern of velocity estimates reported during identical trials. Both reaction times and velocity estimates increased at the onset of acceleration, declined prior to the termination of acceleration, and showed an aftereffect. These results are inconsistent with the torsion-pendulum theory of semicircular canal function and suggest that the vestibular adaptation is of central origin.
Contraceptive failure in the United States
Trussell, James
2013-01-01
This review provides an update of previous estimates of first-year probabilities of contraceptive failure for all methods of contraception available in the United States. Estimates are provided of probabilities of failure during typical use (which includes both incorrect and inconsistent use) and during perfect use (correct and consistent use). The difference between these two probabilities reveals the consequences of imperfect use; it depends both on how unforgiving of imperfect use a method is and on how hard it is to use that method perfectly. These revisions reflect new research on contraceptive failure both during perfect use and during typical use. PMID:21477680
Argento, Elena; Shannon, Kate; Nguyen, Paul; Dobrer, Sabina; Chettiar, Jill; Deering, Kathleen N.
2015-01-01
Background Despite high HIV burden among sex workers (SWs) globally, and relatively high prevalence of client condom use, research on potential HIV/STI risk pathways of intimate partnerships is limited. This study investigated partner/dyad-level factors associated with inconsistent condom use among SWs with intimate partners in Vancouver, Canada. Methods Baseline data (2010–2013) were drawn from a community-based prospective cohort of women SWs. Multivariable generalized estimating equations logistic regression examined dyad-level factors associated with inconsistent condom use (<100% in last six months) with up to three male intimate partners per SW. Adjusted odds ratios and 95% confidence intervals were reported (AOR[95%CI]). Results Overall, 369 SWs reported having at least one intimate partner, with 70.1% reporting inconsistent condom use. Median length of partnerships was 1.8 years, with longer duration linked to inconsistent condom use. In multivariable analysis, dyad factors significantly associated with increased odds of inconsistent condom use included: having a cohabiting (5.43[2.53–11.66]) or non-cohabiting intimate partner (2.15[1.11–4.19]) (versus casual partner), providing drugs (3.04[1.47–6.30]) or financial support to an intimate partner (2.46[1.05–5.74]), physical intimate partner violence (2.20[1.17–4.12]), and an intimate partner providing physical safety (2.08[1.11–3.91]); non-injection drug use was associated with 68% reduced odds (0.32[0.17–0.60]). Conclusions Our study highlights the complex role of dyad-level factors in shaping sexual and drug-related HIV/STI risk pathways for SWs from intimate partners. Couple- and gender-focused intervention efforts are needed to reduce HIV/STI risks to SWs through intimate partnerships. This research supports further calls for integrated violence and HIV prevention within broader sexual/reproductive health efforts for SWs. PMID:26585612
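The adjusted odds ratios and confidence intervals quoted above are exponentiated regression coefficients. A small sketch shows the relationship; the back-calculated coefficient and standard error below are illustrative only, derived from the rounded published interval rather than the study's raw data.

```python
import math

def odds_ratio(beta):
    # A logistic-regression coefficient beta maps to an odds ratio exp(beta)
    return math.exp(beta)

def wald_ci(beta, se, z=1.96):
    # 95% Wald confidence interval, reported on the odds-ratio scale
    return math.exp(beta - z * se), math.exp(beta + z * se)

# Back out an approximate coefficient and standard error from the reported
# AOR 5.43 [2.53 - 11.66] (illustrative round-trip, not the study's raw data)
beta = math.log(5.43)
se = (math.log(11.66) - math.log(2.53)) / (2.0 * 1.96)
```

Running `wald_ci(beta, se)` reproduces approximately (2.53, 11.66), confirming that the reported interval is symmetric on the log-odds scale.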
Van Derlinden, E; Bernaerts, K; Van Impe, J F
2010-05-21
Optimal experiment design for parameter estimation (OED/PE) has become a popular tool for efficient and accurate estimation of kinetic model parameters. When the kinetic model under study contains multiple parameters, different optimization strategies can be constructed. The most straightforward approach is to estimate all parameters simultaneously from one optimal experiment (single OED/PE strategy). However, due to the complexity of the optimization problem or the stringent limitations on the system's dynamics, the experimental information can be limited and parameter estimation convergence problems can arise. As an alternative, we propose to reduce the optimization problem to a series of two-parameter estimation problems, i.e., an optimal experiment is designed for a combination of two parameters while presuming the other parameters known. Two different approaches can be followed: (i) all two-parameter optimal experiments are designed based on identical initial parameter estimates and parameters are estimated simultaneously from all resulting experimental data (global OED/PE strategy), and (ii) optimal experiments are calculated and implemented sequentially whereby the parameter values are updated intermediately (sequential OED/PE strategy). This work exploits OED/PE for the identification of the Cardinal Temperature Model with Inflection (CTMI) (Rosso et al., 1993). This kinetic model describes the effect of temperature on the microbial growth rate and contains four parameters. The three OED/PE strategies are considered and the impact of the OED/PE design strategy on the accuracy of the CTMI parameter estimation is evaluated. Based on a simulation study, it is observed that the parameter values derived from the sequential approach deviate more from the true parameters than the single and global strategy estimates. The single and global OED/PE strategies are further compared based on experimental data obtained from design implementation in a bioreactor.
Comparable estimates are obtained, but global OED/PE estimates are, in general, more accurate and reliable. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
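The OED/PE idea for a two-parameter subproblem can be sketched as follows: choose sampling times that maximize the determinant of the Fisher information matrix (D-optimality). The exponential-decay model below is an illustrative stand-in for the CTMI, not the model used in the study, and the grid search is the simplest possible design optimizer.

```python
import itertools
import math

def sensitivities(t, a=1.0, k=0.5):
    # Toy model y(t) = a * exp(-k t): partial derivatives w.r.t. a and k,
    # evaluated at nominal parameter values (an illustrative stand-in)
    e = math.exp(-k * t)
    return e, -a * t * e

def d_criterion(times):
    # Determinant of the 2x2 Fisher information matrix F = sum_t s(t) s(t)^T
    # (unit measurement noise assumed); a larger det means tighter joint estimates
    f11 = f12 = f22 = 0.0
    for t in times:
        s_a, s_k = sensitivities(t)
        f11 += s_a * s_a
        f12 += s_a * s_k
        f22 += s_k * s_k
    return f11 * f22 - f12 * f12

# Exhaustively pick the pair of sampling times (from a candidate grid)
# that maximizes the D-optimality criterion
grid = [0.5 * i for i in range(1, 17)]
best = max(itertools.combinations(grid, 2), key=d_criterion)
```

The winning pair combines an early sample (informative about the amplitude a) with a later one (informative about the rate k), which is the intuition behind designing experiments per parameter pair.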
NASA Astrophysics Data System (ADS)
Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M. R.; Acernese, F.; Ackley, K.; Adams, C.; Adams, T.; Addesso, P.; Adhikari, R. X.; Adya, V. B.; Affeldt, C.; Agathos, M.; Agatsuma, K.; Aggarwal, N.; Aguiar, O. D.; Aiello, L.; Ain, A.; Ajith, P.; Allen, B.; Allocca, A.; Altin, P. A.; Anderson, S. B.; Anderson, W. G.; Arai, K.; Araya, M. C.; Arceneaux, C. C.; Areeda, J. S.; Arnaud, N.; Arun, K. G.; Ascenzi, S.; Ashton, G.; Ast, M.; Aston, S. M.; Astone, P.; Aufmuth, P.; Aulbert, C.; Babak, S.; Bacon, P.; Bader, M. K. M.; Baker, P. T.; Baldaccini, F.; Ballardin, G.; Ballmer, S. W.; Barayoga, J. C.; Barclay, S. E.; Barish, B. C.; Barker, D.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barta, D.; Bartlett, J.; Bartos, I.; Bassiri, R.; Basti, A.; Batch, J. C.; Baune, C.; Bavigadda, V.; Bazzan, M.; Bejger, M.; Bell, A. S.; Berger, B. K.; Bergmann, G.; Berry, C. P. L.; Bersanetti, D.; Bertolini, A.; Betzwieser, J.; Bhagwat, S.; Bhandare, R.; Bilenko, I. A.; Billingsley, G.; Birch, J.; Birney, R.; Biscans, S.; Bisht, A.; Bitossi, M.; Biwer, C.; Bizouard, M. A.; Blackburn, J. K.; Blair, C. D.; Blair, D. G.; Blair, R. M.; Bloemen, S.; Bock, O.; Boer, M.; Bogaert, G.; Bogan, C.; Bohe, A.; Bond, C.; Bondu, F.; Bonnand, R.; Boom, B. A.; Bork, R.; Boschi, V.; Bose, S.; Bouffanais, Y.; Bozzi, A.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; Brau, J. E.; Briant, T.; Brillet, A.; Brinkmann, M.; Brisson, V.; Brockill, P.; Broida, J. E.; Brooks, A. F.; Brown, D. A.; Brown, D. D.; Brown, N. M.; Brunett, S.; Buchanan, C. C.; Buikema, A.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Buskulic, D.; Buy, C.; Byer, R. L.; Cabero, M.; Cadonati, L.; Cagnoli, G.; Cahillane, C.; Calderón Bustillo, J.; Callister, T.; Calloni, E.; Camp, J. B.; Cannon, K. C.; Cao, J.; Capano, C. D.; Capocasa, E.; Carbognani, F.; Caride, S.; Casanueva Diaz, J.; Casentini, C.; Caudill, S.; Cavaglià, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C. 
B.; Cerboni Baiardi, L.; Cerretani, G.; Cesarini, E.; Chamberlin, S. J.; Chan, M.; Chao, S.; Charlton, P.; Chassande-Mottin, E.; Cheeseboro, B. D.; Chen, H. Y.; Chen, Y.; Cheng, C.; Chincarini, A.; Chiummo, A.; Cho, H. S.; Cho, M.; Chow, J. H.; Christensen, N.; Chu, Q.; Chua, S.; Chung, S.; Ciani, G.; Clara, F.; Clark, J. A.; Cleva, F.; Coccia, E.; Cohadon, P.-F.; Colla, A.; Collette, C. G.; Cominsky, L.; Constancio, M.; Conte, A.; Conti, L.; Cook, D.; Corbitt, T. R.; Cornish, N.; Corsi, A.; Cortese, S.; Costa, C. A.; Coughlin, M. W.; Coughlin, S. B.; Coulon, J.-P.; Countryman, S. T.; Couvares, P.; Cowan, E. E.; Coward, D. M.; Cowart, M. J.; Coyne, D. C.; Coyne, R.; Craig, K.; Creighton, J. D. E.; Cripe, J.; Crowder, S. G.; Cumming, A.; Cunningham, L.; Cuoco, E.; Dal Canton, T.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Darman, N. S.; Dasgupta, A.; Da Silva Costa, C. F.; Dattilo, V.; Dave, I.; Davier, M.; Davies, G. S.; Daw, E. J.; Day, R.; De, S.; DeBra, D.; Debreczeni, G.; Degallaix, J.; De Laurentis, M.; Deléglise, S.; Del Pozzo, W.; Denker, T.; Dent, T.; Dergachev, V.; De Rosa, R.; DeRosa, R. T.; DeSalvo, R.; Devine, R. C.; Dhurandhar, S.; Díaz, M. C.; Di Fiore, L.; Di Giovanni, M.; Di Girolamo, T.; Di Lieto, A.; Di Pace, S.; Di Palma, I.; Di Virgilio, A.; Dolique, V.; Donovan, F.; Dooley, K. L.; Doravari, S.; Douglas, R.; Downes, T. P.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Ducrot, M.; Dwyer, S. E.; Edo, T. B.; Edwards, M. C.; Effler, A.; Eggenstein, H.-B.; Ehrens, P.; Eichholz, J.; Eikenberry, S. S.; Engels, W.; Essick, R. C.; Etzel, T.; Evans, M.; Evans, T. M.; Everett, R.; Factourovich, M.; Fafone, V.; Fair, H.; Fan, X.; Fang, Q.; Farinon, S.; Farr, B.; Farr, W. M.; Favata, M.; Fays, M.; Fehrmann, H.; Fejer, M. M.; Fenyvesi, E.; Ferrante, I.; Ferreira, E. C.; Ferrini, F.; Fidecaro, F.; Fiori, I.; Fiorucci, D.; Fisher, R. 
P.; Flaminio, R.; Fletcher, M.; Fournier, J.-D.; Frasca, S.; Frasconi, F.; Frei, Z.; Freise, A.; Frey, R.; Frey, V.; Fritschel, P.; Frolov, V. V.; Fulda, P.; Fyffe, M.; Gabbard, H. A. G.; Gair, J. R.; Gammaitoni, L.; Gaonkar, S. G.; Garufi, F.; Gaur, G.; Gehrels, N.; Gemme, G.; Geng, P.; Genin, E.; Gennai, A.; George, J.; Gergely, L.; Germain, V.; Ghosh, Abhirup; Ghosh, Archisman; Ghosh, S.; Giaime, J. A.; Giardina, K. D.; Giazotto, A.; Gill, K.; Glaefke, A.; Goetz, E.; Goetz, R.; Gondan, L.; González, G.; Gonzalez Castro, J. M.; Gopakumar, A.; Gordon, N. A.; Gorodetsky, M. L.; Gossan, S. E.; Gosselin, M.; Gouaty, R.; Grado, A.; Graef, C.; Graff, P. B.; Granata, M.; Grant, A.; Gras, S.; Gray, C.; Greco, G.; Green, A. C.; Groot, P.; Grote, H.; Grunewald, S.; Guidi, G. M.; Guo, X.; Gupta, A.; Gupta, M. K.; Gushwa, K. E.; Gustafson, E. K.; Gustafson, R.; Hacker, J. J.; Hall, B. R.; Hall, E. D.; Hammond, G.; Haney, M.; Hanke, M. M.; Hanks, J.; Hanna, C.; Hanson, J.; Hardwick, T.; Harms, J.; Harry, G. M.; Harry, I. W.; Hart, M. J.; Hartman, M. T.; Haster, C.-J.; Haughian, K.; Heidmann, A.; Heintze, M. C.; Heitmann, H.; Hello, P.; Hemming, G.; Hendry, M.; Heng, I. S.; Hennig, J.; Henry, J.; Heptonstall, A. W.; Heurs, M.; Hild, S.; Hoak, D.; Hofman, D.; Holt, K.; Holz, D. E.; Hopkins, P.; Hough, J.; Houston, E. A.; Howell, E. J.; Hu, Y. M.; Huang, S.; Huerta, E. A.; Huet, D.; Hughey, B.; Huttner, S. H.; Huynh-Dinh, T.; Indik, N.; Ingram, D. R.; Inta, R.; Isa, H. N.; Isac, J.-M.; Isi, M.; Isogai, T.; Iyer, B. R.; Izumi, K.; Jacqmin, T.; Jang, H.; Jani, K.; Jaranowski, P.; Jawahar, S.; Jian, L.; Jiménez-Forteza, F.; Johnson, W. W.; Jones, D. I.; Jones, R.; Jonker, R. J. G.; Ju, L.; Haris, K.; Kalaghatgi, C. V.; Kalogera, V.; Kandhasamy, S.; Kang, G.; Kanner, J. B.; Kapadia, S. J.; Karki, S.; Karvinen, K. S.; Kasprzack, M.; Katsavounidis, E.; Katzman, W.; Kaufer, S.; Kaur, T.; Kawabe, K.; Kéfélian, F.; Kehl, M. S.; Keitel, D.; Kelley, D. B.; Kells, W.; Kennedy, R.; Key, J. 
S.; Khalili, F. Y.; Khan, I.; Khan, Z.; Khazanov, E. A.; Kijbunchoo, N.; Kim, Chi-Woong; Kim, Chunglee; Kim, J.; Kim, K.; Kim, N.; Kim, W.; Kim, Y.-M.; Kimbrell, S. J.; King, E. J.; King, P. J.; Kissel, J. S.; Klein, B.; Kleybolte, L.; Klimenko, S.; Koehlenbeck, S. M.; Koley, S.; Kondrashov, V.; Kontos, A.; Korobko, M.; Korth, W. Z.; Kowalska, I.; Kozak, D. B.; Kringel, V.; Królak, A.; Krueger, C.; Kuehn, G.; Kumar, P.; Kumar, R.; Kuo, L.; Kutynia, A.; Lackey, B. D.; Landry, M.; Lange, J.; Lantz, B.; Lasky, P. D.; Laxen, M.; Lazzarini, A.; Lazzaro, C.; Leaci, P.; Leavey, S.; Lebigot, E. O.; Lee, C. H.; Lee, H. K.; Lee, H. M.; Lee, K.; Lenon, A.; Leonardi, M.; Leong, J. R.; Leroy, N.; Letendre, N.; Levin, Y.; Lewis, J. B.; Li, T. G. F.; Libson, A.; Littenberg, T. B.; Lockerbie, N. A.; Lombardi, A. L.; Lord, J. E.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lough, J. D.; Lück, H.; Lundgren, A. P.; Lynch, R.; Ma, Y.; Machenschalk, B.; MacInnis, M.; Macleod, D. M.; Magaña-Sandoval, F.; Zertuche, L. Magaña; Magee, R. M.; Majorana, E.; Maksimovic, I.; Malvezzi, V.; Man, N.; Mandic, V.; Mangano, V.; Mansell, G. L.; Manske, M.; Mantovani, M.; Marchesoni, F.; Marion, F.; Márka, S.; Márka, Z.; Markosyan, A. S.; Maros, E.; Martelli, F.; Martellini, L.; Martin, I. W.; Martynov, D. V.; Marx, J. N.; Mason, K.; Masserot, A.; Massinger, T. J.; Masso-Reid, M.; Mastrogiovanni, S.; Matichard, F.; Matone, L.; Mavalvala, N.; Mazumder, N.; McCarthy, R.; McClelland, D. E.; McCormick, S.; McGuire, S. C.; McIntyre, G.; McIver, J.; McManus, D. J.; McRae, T.; McWilliams, S. T.; Meacher, D.; Meadors, G. D.; Meidam, J.; Melatos, A.; Mendell, G.; Mercer, R. A.; Merilh, E. L.; Merzougui, M.; Meshkov, S.; Messenger, C.; Messick, C.; Metzdorff, R.; Meyers, P. M.; Mezzani, F.; Miao, H.; Michel, C.; Middleton, H.; Mikhailov, E. E.; Milano, L.; Miller, A. L.; Miller, A.; Miller, B. B.; Miller, J.; Millhouse, M.; Minenkov, Y.; Ming, J.; Mirshekari, S.; Mishra, C.; Mitra, S.; Mitrofanov, V. 
P.; Mitselmakher, G.; Mittleman, R.; Moggi, A.; Mohan, M.; Mohapatra, S. R. P.; Montani, M.; Moore, B. C.; Moore, C. J.; Moraru, D.; Moreno, G.; Morriss, S. R.; Mossavi, K.; Mours, B.; Mow-Lowry, C. M.; Mueller, G.; Muir, A. W.; Mukherjee, Arunava; Mukherjee, D.; Mukherjee, S.; Mukund, N.; Mullavey, A.; Munch, J.; Murphy, D. J.; Murray, P. G.; Mytidis, A.; Nardecchia, I.; Naticchioni, L.; Nayak, R. K.; Nedkova, K.; Nelemans, G.; Nelson, T. J. N.; Neri, M.; Neunzert, A.; Newton, G.; Nguyen, T. T.; Nielsen, A. B.; Nissanke, S.; Nitz, A.; Nocera, F.; Nolting, D.; Normandin, M. E. N.; Nuttall, L. K.; Oberling, J.; Ochsner, E.; O'Dell, J.; Oelker, E.; Ogin, G. H.; Oh, J. J.; Oh, S. H.; Ohme, F.; Oliver, M.; Oppermann, P.; Oram, Richard J.; O'Reilly, B.; O'Shaughnessy, R.; Ottaway, D. J.; Overmier, H.; Owen, B. J.; Pai, A.; Pai, S. A.; Palamos, J. R.; Palashov, O.; Palomba, C.; Pal-Singh, A.; Pan, H.; Pankow, C.; Pant, B. C.; Paoletti, F.; Paoli, A.; Papa, M. A.; Paris, H. R.; Parker, W.; Pascucci, D.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Patricelli, B.; Patrick, Z.; Pearlstone, B. L.; Pedraza, M.; Pedurand, R.; Pekowsky, L.; Pele, A.; Penn, S.; Perreca, A.; Perri, L. M.; Phelps, M.; Piccinni, O. J.; Pichot, M.; Piergiovanni, F.; Pierro, V.; Pillant, G.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Poe, M.; Poggiani, R.; Popolizio, P.; Post, A.; Powell, J.; Prasad, J.; Predoi, V.; Prestegard, T.; Price, L. R.; Prijatelj, M.; Principe, M.; Privitera, S.; Prodi, G. A.; Prokhorov, L.; Puncken, O.; Punturo, M.; Puppo, P.; Pürrer, M.; Qi, H.; Qin, J.; Qiu, S.; Quetschke, V.; Quintero, E. A.; Quitzow-James, R.; Raab, F. J.; Rabeling, D. S.; Radkins, H.; Raffai, P.; Raja, S.; Rajan, C.; Rakhmanov, M.; Rapagnani, P.; Raymond, V.; Razzano, M.; Re, V.; Read, J.; Reed, C. M.; Regimbau, T.; Rei, L.; Reid, S.; Reitze, D. H.; Rew, H.; Reyes, S. D.; Ricci, F.; Riles, K.; Rizzo, M.; Robertson, N. A.; Robie, R.; Robinet, F.; Rocchi, A.; Rolland, L.; Rollins, J. G.; Roma, V. 
J.; Romano, J. D.; Romano, R.; Romanov, G.; Romie, J. H.; Rosińska, D.; Rowan, S.; Rüdiger, A.; Ruggi, P.; Ryan, K.; Sachdev, S.; Sadecki, T.; Sadeghian, L.; Sakellariadou, M.; Salconi, L.; Saleem, M.; Salemi, F.; Samajdar, A.; Sammut, L.; Sanchez, E. J.; Sandberg, V.; Sandeen, B.; Sanders, J. R.; Sassolas, B.; Saulson, P. R.; Sauter, O. E. S.; Savage, R. L.; Sawadsky, A.; Schale, P.; Schilling, R.; Schmidt, J.; Schmidt, P.; Schnabel, R.; Schofield, R. M. S.; Schönbeck, A.; Schreiber, E.; Schuette, D.; Schutz, B. F.; Scott, J.; Scott, S. M.; Sellers, D.; Sengupta, A. S.; Sentenac, D.; Sequino, V.; Sergeev, A.; Setyawati, Y.; Shaddock, D. A.; Shaffer, T.; Shahriar, M. S.; Shaltev, M.; Shapiro, B.; Shawhan, P.; Sheperd, A.; Shoemaker, D. H.; Siellez, K.; Siemens, X.; Sieniawska, M.; Sigg, D.; Silva, A. D.; Singer, A.; Singer, L. P.; Singh, A.; Singh, R.; Singhal, A.; Sintes, A. M.; Slagmolen, B. J. J.; Smith, J. R.; Smith, N. D.; Smith, R. J. E.; Son, E. J.; Sorazu, B.; Sorrentino, F.; Souradeep, T.; Srivastava, A. K.; Staley, A.; Steinke, M.; Steinlechner, J.; Steinlechner, S.; Steinmeyer, D.; Stephens, B. C.; Stone, R.; Strain, K. A.; Straniero, N.; Stratta, G.; Strauss, N. A.; Strigin, S.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sun, L.; Sunil, S.; Sutton, P. J.; Swinkels, B. L.; Szczepańczyk, M. J.; Tacca, M.; Talukder, D.; Tanner, D. B.; Tápai, M.; Tarabrin, S. P.; Taracchini, A.; Taylor, R.; Theeg, T.; Thirugnanasambandam, M. P.; Thomas, E. G.; Thomas, M.; Thomas, P.; Thorne, K. A.; Thorne, K. S.; Thrane, E.; Tiwari, S.; Tiwari, V.; Tokmakov, K. V.; Toland, K.; Tomlinson, C.; Tonelli, M.; Tornasi, Z.; Torres, C. V.; Torrie, C. I.; Töyrä, D.; Travasso, F.; Traylor, G.; Trifirò, D.; Tringali, M. C.; Trozzo, L.; Tse, M.; Turconi, M.; Tuyenbayev, D.; Ugolini, D.; Unnikrishnan, C. S.; Urban, A. L.; Usman, S. A.; Vahlbruch, H.; Vajente, G.; Valdes, G.; van Bakel, N.; van Beuzekom, M.; van den Brand, J. F. J.; Van Den Broeck, C.; Vander-Hyde, D. 
C.; van der Schaaf, L.; van Heijningen, J. V.; van Veggel, A. A.; Vardaro, M.; Vass, S.; Vasúth, M.; Vaulin, R.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P. J.; Venkateswara, K.; Verkindt, D.; Vetrano, F.; Viceré, A.; Vinciguerra, S.; Vine, D. J.; Vinet, J.-Y.; Vitale, S.; Vo, T.; Vocca, H.; Vorvick, C.; Voss, D. V.; Vousden, W. D.; Vyatchanin, S. P.; Wade, A. R.; Wade, L. E.; Wade, M.; Walker, M.; Wallace, L.; Walsh, S.; Wang, G.; Wang, H.; Wang, M.; Wang, X.; Wang, Y.; Ward, R. L.; Warner, J.; Was, M.; Weaver, B.; Wei, L.-W.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Wen, L.; Weßels, P.; Westphal, T.; Wette, K.; Whelan, J. T.; Whiting, B. F.; Williams, R. D.; Williamson, A. R.; Willis, J. L.; Willke, B.; Wimmer, M. H.; Winkler, W.; Wipf, C. C.; Wittel, H.; Woan, G.; Woehler, J.; Worden, J.; Wright, J. L.; Wu, D. S.; Wu, G.; Yablon, J.; Yam, W.; Yamamoto, H.; Yancey, C. C.; Yu, H.; Yvert, M.; ZadroŻny, A.; Zangrando, L.; Zanolin, M.; Zendri, J.-P.; Zevin, M.; Zhang, L.; Zhang, M.; Zhang, Y.; Zhao, C.; Zhou, M.; Zhou, Z.; Zhu, X. J.; Zucker, M. E.; Zuraw, S. E.; Zweizig, J.; Boyle, M.; Campanelli, M.; Chu, T.; Clark, M.; Fauchon-Jones, E.; Fong, H.; Healy, J.; Hemberger, D.; Hinder, I.; Husa, S.; Kalaghati, C.; Khan, S.; Kidder, L. E.; Kinsey, M.; Laguna, P.; London, L. T.; Lousto, C. O.; Lovelace, G.; Ossokine, S.; Pannarale, F.; Pfeiffer, H. P.; Scheel, M.; Shoemaker, D. M.; Szilagyi, B.; Teukolsky, S.; Vinuales, A. Vano; Zlochower, Y.; LIGO Scientific Collaboration; Virgo Collaboration
2016-09-01
We compare GW150914 directly to simulations of coalescing binary black holes in full general relativity, including several performed specifically to reproduce this event. Our calculations go beyond existing semianalytic models, because for all simulations, including sources with two independent, precessing spins, we perform comparisons which account for all the spin-weighted quadrupolar modes, and separately which account for all the quadrupolar and octopolar modes. Consistent with the posterior distributions reported by Abbott et al. [Phys. Rev. Lett. 116, 241102 (2016)] (at the 90% credible level), we find the data are compatible with a wide range of nonprecessing and precessing simulations. Follow-up simulations performed using previously estimated binary parameters most resemble the data, even when all quadrupolar and octopolar modes are included. Comparisons including only the quadrupolar modes constrain the total redshifted mass M_z ∈ [64 M⊙, 82 M⊙], mass ratio 1/q = m2/m1 ∈ [0.6, 1], and effective aligned spin χ_eff ∈ [-0.3, 0.2], where χ_eff = (S1/m1 + S2/m2) · L̂ / M. Including both quadrupolar and octopolar modes, we find the mass ratio is even more tightly constrained. Even accounting for precession, simulations with extreme mass ratios and effective spins are highly inconsistent with the data, at any mass. Several nonprecessing and precessing simulations with similar mass ratio and χ_eff are consistent with the data. Though correlated, the components' spins (both in magnitude and directions) are not significantly constrained by the data: the data are consistent with simulations with component spin magnitudes a1, a2 up to at least 0.8, with random orientations. Further detailed follow-up calculations are needed to determine if the data contain a weak imprint from transverse (precessing) spins. For nonprecessing binaries, interpolating between simulations, we reconstruct a posterior distribution consistent with previous results.
The final black hole's redshifted mass is consistent with M_f,z in the range 64.0 M⊙ to 73.5 M⊙, and the final black hole's dimensionless spin parameter is consistent with a_f in the range 0.62 to 0.73. As our approach invokes no intermediate approximations to general relativity and can strongly reject binaries whose radiation is inconsistent with the data, our analysis provides a valuable complement to Abbott et al. [Phys. Rev. Lett. 116, 241102 (2016)].
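The effective aligned spin used in the constraint above combines the mass-weighted aligned spin components. With dimensionless spins a_i = S_i/m_i² (in G = c = 1 units) it reduces to a simple weighted average, which can be checked numerically; the masses below are round illustrative values, not the GW150914 posteriors.

```python
def chi_eff(m1, m2, a1, a2, cos_t1, cos_t2):
    # chi_eff = (S1/m1 + S2/m2) . Lhat / M, with S_i = a_i * m_i^2 (G = c = 1);
    # only the spin components along Lhat (the cosine factors) contribute
    return (a1 * m1 * cos_t1 + a2 * m2 * cos_t2) / (m1 + m2)

# Two equal, fully aligned spins reduce chi_eff to that common spin magnitude
example = chi_eff(36.0, 29.0, 0.3, 0.3, 1.0, 1.0)
```

Spins lying entirely in the orbital plane (cosines of zero) contribute nothing to χ_eff, which is why transverse, precessing spin components are so weakly constrained by this single parameter.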
Determining the accuracy of maximum likelihood parameter estimates with colored residuals
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.; Klein, Vladislav
1994-01-01
An important part of building high fidelity mathematical models based on measured data is calculating the accuracy associated with statistical estimates of the model parameters. Indeed, without some idea of the accuracy of parameter estimates, the estimates themselves have limited value. In this work, an expression based on theoretical analysis was developed to properly compute parameter accuracy measures for maximum likelihood estimates with colored residuals. This result is important because experience from the analysis of measured data reveals that the residuals from maximum likelihood estimation are almost always colored. The calculations involved can be appended to conventional maximum likelihood estimation algorithms. Simulated data runs were used to show that the parameter accuracy measures computed with this technique accurately reflect the quality of the parameter estimates from maximum likelihood estimation without the need for analysis of the output residuals in the frequency domain or heuristically determined multiplication factors. The result is general, although the application studied here is maximum likelihood estimation of aerodynamic model parameters from flight test data.
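The idea of correcting parameter accuracy measures for colored (autocorrelated) residuals can be illustrated with a one-parameter least-squares fit: instead of assuming white noise, the parameter variance is computed from the empirical residual autocovariance. This is a hedged, sandwich-style sketch of the general idea, not the authors' actual maximum likelihood formulation.

```python
def fit_slope(ts, ys):
    # One-parameter least-squares fit y = theta * t (through the origin)
    sxx = sum(t * t for t in ts)
    theta = sum(t * y for t, y in zip(ts, ys)) / sxx
    return theta, sxx

def colored_variance(ts, residuals, sxx):
    # Parameter variance t' R t / sxx^2, where R is built from the empirical
    # residual autocovariance -- i.e., the residuals are NOT assumed white.
    # (A sandwich-style sketch, not the paper's maximum likelihood machinery.)
    n = len(ts)
    mean_r = sum(residuals) / n
    acov = [sum((residuals[i] - mean_r) * (residuals[i + k] - mean_r)
                for i in range(n - k)) / n
            for k in range(n)]
    return sum(ts[i] * acov[abs(i - j)] * ts[j]
               for i in range(n) for j in range(n)) / (sxx * sxx)

ts = [float(t) for t in range(1, 21)]
ys = [2.0 * t for t in ts]                    # noiseless line with slope 2
theta, sxx = fit_slope(ts, ys)
residuals = [(-1.0) ** i for i in range(20)]  # a deliberately colored pattern
var_colored = colored_variance(ts, residuals, sxx)
```

When the residuals really are white, the off-diagonal autocovariance terms vanish and this reduces to the usual variance formula; when they are colored, the cross terms change the accuracy measure, which is the effect the paper accounts for.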
ERIC Educational Resources Information Center
Rathod, Sujit D.; Minnis, Alexandra M.; Subbiah, Kalyani; Krishnan, Suneeta
2011-01-01
Background: Audio computer-assisted self-interviews (ACASI) are increasingly used in health research to improve the accuracy of data on sensitive behaviors. However, evidence is limited on its use among low-income populations in countries like India and for measurement of sensitive issues such as domestic violence. Method: We compared reports of…
Estimating nonrigid motion from inconsistent intensity with robust shape features
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Wenyang; Ruan, Dan, E-mail: druan@mednet.ucla.edu; Department of Radiation Oncology, University of California, Los Angeles, California 90095
2013-12-15
Purpose: To develop a nonrigid motion estimation method that is robust to heterogeneous intensity inconsistencies amongst the image pairs or image sequence. Methods: Intensity and contrast variations, as in dynamic contrast enhanced magnetic resonance imaging, present a considerable challenge to registration methods based on general discrepancy metrics. In this study, the authors propose and validate a novel method that is robust to such variations by utilizing shape features. The geometry of interest (GOI) is represented with a flexible zero level set, segmented via well-behaved regularized optimization. The optimization energy drives the zero level set to high image gradient regions, and regularizes it with area and curvature priors. The resulting shape exhibits high consistency even in the presence of intensity or contrast variations. Subsequently, a multiscale nonrigid registration is performed to seek a regular deformation field that minimizes shape discrepancy in the vicinity of GOIs. Results: To establish the working principle, realistic 2D and 3D images were subject to simulated nonrigid motion and synthetic intensity variations, so as to enable quantitative evaluation of registration performance. The proposed method was benchmarked against three alternative registration approaches, specifically, optical flow, B-spline based mutual information, and multimodality demons. When intensity consistency was satisfied, all methods had comparable registration accuracy for the GOIs. When intensities among registration pairs were inconsistent, however, the proposed method yielded pronounced improvement in registration accuracy, with an approximate fivefold reduction in mean absolute error (MAE = 2.25 mm, SD = 0.98 mm), compared to optical flow (MAE = 9.23 mm, SD = 5.36 mm), B-spline based mutual information (MAE = 9.57 mm, SD = 8.74 mm) and multimodality demons (MAE = 10.07 mm, SD = 4.03 mm).
Applying the proposed method on a real MR image sequence also provided qualitatively appealing results, demonstrating good feasibility and applicability of the proposed method. Conclusions: The authors have developed a novel method to estimate the nonrigid motion of GOIs in the presence of spatial intensity and contrast variations, taking advantage of robust shape features. Quantitative analysis and qualitative evaluation demonstrated good promise of the proposed method. Further clinical assessment and validation are being performed.
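The core principle, that shape features such as edge locations survive intensity and contrast changes that defeat intensity-based discrepancy metrics, can be shown with a 1D toy. This is not the authors' level-set method; every function here is an illustrative assumption.

```python
def step_edge(n, edge, lo, hi):
    # 1D "image": a step edge at index `edge`, with plateau intensities lo / hi
    return [lo if i < edge else hi for i in range(n)]

def edge_position(img):
    # Shape feature: index of the largest absolute intensity gradient
    grads = [abs(img[i + 1] - img[i]) for i in range(len(img) - 1)]
    return max(range(len(grads)), key=grads.__getitem__)

def register_by_edges(fixed, moving):
    # Align the dominant edges; a contrast change does not move the edge,
    # so the estimate is invariant to the intensity inconsistency
    return edge_position(fixed) - edge_position(moving)

fixed = step_edge(100, 60, 0.0, 1.0)    # reference image
moving = step_edge(100, 50, 0.2, 0.5)   # shifted AND contrast-altered copy
shift = register_by_edges(fixed, moving)
```

The edge-based estimate recovers the 10-sample displacement even though no simple intensity mapping relates the two signals, which is exactly the regime where sum-of-squared-differences metrics fail.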
Extensive Acclimation in Ectotherms Conceals Interspecific Variation in Thermal Tolerance Limits
Pintor, Anna F. V.; Schwarzkopf, Lin; Krockenberger, Andrew K.
2016-01-01
Species’ tolerance limits determine their capacity to tolerate climatic extremes and limit their potential distributions. Interspecific variation in thermal tolerances is often proposed to indicate climatic vulnerability and is, therefore, the subject of many recent meta-studies on the differential capacities of species from climatically different habitats to deal with climate change. Most studies on thermal tolerances do not acclimate animals, or use inconsistent and insufficient acclimation times, limiting our knowledge of the shape, duration and extent of acclimation responses. Consequently, patterns in thermal tolerances observed in meta-analyses based on data from the literature rest on inconsistent, partial acclimation, and true trends may be obscured. In this study we describe the time course of complete acclimation of critical thermal minima in the tropical ectotherm Carlia longipes and compare it to the average acclimation response of other reptiles, estimated from published data, to assess how much acclimation time may contribute to observed differences in thermal limits. Carlia longipes decreased their lower critical thermal limits by 2.4°C and completed 95% of acclimation in 17 weeks. Wild populations did not mirror this acclimation process over the winter. Other reptiles appear to decrease cold tolerance more quickly (95% in 7 weeks) and to a greater extent, with an estimated average acclimation response of 6.1°C. However, without data on tolerances after longer acclimation times, our capacity to estimate the final acclimation state is very limited. Based on the subset of data available for meta-analysis, much of the variation in cold tolerance observed in the literature can be attributed to acclimation time. Our results indicate that (i) acclimation responses can be slow and substantial, even in tropical species, and (ii) interspecific differences in acclimation speed and extent may obscure trends assessed in some meta-studies.
Cold tolerances of wild animals are representative of cumulative responses to recent environments, while lengthy acclimation is necessary for controlled comparisons of physiological tolerances. Measures of inconsistent, intermediate acclimation states, as reported by many studies, represent neither the realised nor the potential tolerance in that population, are very likely underestimates of species’ physiological capacities and may consequently be of limited value. PMID:26990769
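If acclimation follows a simple first-order (exponential) time course, an assumption of this sketch rather than a claim about the paper's own curve fit, the reported "95% in 17 weeks" fixes the time constant, and the state reached after any shorter acclimation period can be estimated.

```python
import math

def acclimation_fraction(t_weeks, tau):
    # First-order model: fraction of full acclimation reached after t weeks
    return 1.0 - math.exp(-t_weeks / tau)

def tau_from_95(t95_weeks):
    # Solve 0.95 = 1 - exp(-t95 / tau)  =>  tau = t95 / ln(20)
    return t95_weeks / math.log(20.0)

tau_longipes = tau_from_95(17.0)  # Carlia longipes: 95% acclimated in 17 weeks
tau_other = tau_from_95(7.0)      # other reptiles (literature estimate): 7 weeks
# After only 7 weeks, C. longipes would express only part of its 2.4 °C shift
partial_shift = 2.4 * acclimation_fraction(7.0, tau_longipes)
```

Under this model a 7-week protocol would capture only roughly 1.7 °C of the species' full 2.4 °C response, illustrating how inconsistent acclimation times alone can generate apparent interspecific differences.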
Estimating nonrigid motion from inconsistent intensity with robust shape features.
Liu, Wenyang; Ruan, Dan
2013-12-01
To develop a nonrigid motion estimation method that is robust to heterogeneous intensity inconsistencies amongst the image pairs or image sequence. Intensity and contrast variations, as in dynamic contrast enhanced magnetic resonance imaging, present a considerable challenge to registration methods based on general discrepancy metrics. In this study, the authors propose and validate a novel method that is robust to such variations by utilizing shape features. The geometry of interest (GOI) is represented with a flexible zero level set, segmented via well-behaved regularized optimization. The optimization energy drives the zero level set to high image gradient regions, and regularizes it with area and curvature priors. The resulting shape exhibits high consistency even in the presence of intensity or contrast variations. Subsequently, a multiscale nonrigid registration is performed to seek a regular deformation field that minimizes shape discrepancy in the vicinity of GOIs. To establish the working principle, realistic 2D and 3D images were subject to simulated nonrigid motion and synthetic intensity variations, so as to enable quantitative evaluation of registration performance. The proposed method was benchmarked against three alternative registration approaches, specifically, optical flow, B-spline based mutual information, and multimodality demons. When intensity consistency was satisfied, all methods had comparable registration accuracy for the GOIs. When intensities among registration pairs were inconsistent, however, the proposed method yielded pronounced improvement in registration accuracy, with an approximate fivefold reduction in mean absolute error (MAE = 2.25 mm, SD = 0.98 mm), compared to optical flow (MAE = 9.23 mm, SD = 5.36 mm), B-spline based mutual information (MAE = 9.57 mm, SD = 8.74 mm) and mutimodality demons (MAE = 10.07 mm, SD = 4.03 mm). 
Applying the proposed method to a real MR image sequence also provided qualitatively appealing results, demonstrating good feasibility and applicability of the proposed method. The authors have developed a novel method to estimate the nonrigid motion of GOIs in the presence of spatial intensity and contrast variations, taking advantage of robust shape features. Quantitative analysis and qualitative evaluation demonstrated good promise of the proposed method. Further clinical assessment and validation are being performed.
Extensive Acclimation in Ectotherms Conceals Interspecific Variation in Thermal Tolerance Limits.
Pintor, Anna F V; Schwarzkopf, Lin; Krockenberger, Andrew K
2016-01-01
Species' tolerance limits determine their capacity to tolerate climatic extremes and limit their potential distributions. Interspecific variation in thermal tolerances is often proposed to indicate climatic vulnerability and is, therefore, the subject of many recent meta-studies on differential capacities of species from climatically different habitats to deal with climate change. Most studies on thermal tolerances do not acclimate animals or use inconsistent and insufficient acclimation times, limiting our knowledge of the shape, duration and extent of acclimation responses. Consequently, patterns in thermal tolerances observed in meta-analyses based on data from the literature reflect inconsistent, partial acclimation, and true trends may be obscured. In this study we describe the time-course of complete acclimation of critical thermal minima in the tropical ectotherm Carlia longipes and compare it to the average acclimation response of other reptiles, estimated from published data, to assess how much acclimation time may contribute to observed differences in thermal limits. Carlia longipes decreased their lower critical thermal limits by 2.4°C and completed 95% of acclimation in 17 weeks. Wild populations did not mirror this acclimation process over the winter. Other reptiles appear to decrease cold tolerance more quickly (95% in 7 weeks) and to a greater extent, with an estimated average acclimation response of 6.1°C. However, without data on tolerances after longer acclimation times, our capacity to estimate the final acclimation state is very limited. Based on the subset of data available for meta-analysis, much of the variation in cold tolerance observed in the literature can be attributed to acclimation time. Our results indicate that (i) acclimation responses can be slow and substantial, even in tropical species, and (ii) interspecific differences in acclimation speed and extent may obscure trends assessed in some meta-studies.
Cold tolerances of wild animals are representative of cumulative responses to recent environments, while lengthy acclimation is necessary for controlled comparisons of physiological tolerances. Measures of inconsistent, intermediate acclimation states, as reported by many studies, represent neither the realised nor the potential tolerance in that population, are very likely underestimates of species' physiological capacities and may consequently be of limited value.
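The time-course of acclimation described above is commonly modeled as an exponential approach to the fully acclimated state. The sketch below shows how a 95%-completion time maps to a rate constant; the functional form and all numbers are illustrative choices echoing the 2.4°C / 17-week figures, not the study's fitted model.

```python
import math

def ctmin(t_weeks, ct_initial, ct_final, tau):
    """Exponential acclimation model:
    CTmin(t) = final + (initial - final) * exp(-t / tau)."""
    return ct_final + (ct_initial - ct_final) * math.exp(-t_weeks / tau)

def time_to_fraction(tau, fraction=0.95):
    """Weeks required to complete the given fraction of the response."""
    return -tau * math.log(1.0 - fraction)

# 95% completion at 17 weeks implies tau = 17 / ln(20), about 5.7 weeks.
tau = 17.0 / math.log(20.0)
print(round(time_to_fraction(tau, 0.95), 1))  # 17.0 weeks
print(round(ctmin(17.0, 0.0, -2.4, tau), 2))  # -2.28 degC, i.e. 95% of -2.4
```

Under this model, comparing tolerances measured at different acclimation times amounts to sampling the curve at different t, which is one way the inconsistent acclimation criticized above can masquerade as interspecific variation.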
Ryan, Richella; Booth, Sara; Spathis, Anna; Mollart, Sarah; Clow, Angela
2016-04-01
Dysregulation of the hypothalamic-pituitary-adrenal (HPA) axis is associated with diverse adverse health outcomes, making it an important therapeutic target. Measurement of the diurnal rhythm of cortisol secretion provides a window into this system. At present, no guidelines exist for the optimal use of this biomarker within randomised controlled trials (RCTs). The aim of this study is to describe the ways in which salivary diurnal cortisol has been measured within RCTs of health or behavioural interventions in adults. Six electronic databases (up to May 21, 2015) were systematically searched for RCTs which used salivary diurnal cortisol as an outcome measure to evaluate health or behavioural interventions in adults. A narrative synthesis was undertaken of the findings in relation to salivary cortisol methodology and outcomes. From 78 studies that fulfilled the inclusion criteria, 30 included healthy participants (38.5 %), 27 included patients with physical disease (34.6 %) and 21 included patients with psychiatric disease (26.9 %). Psychological therapies were most commonly evaluated (n = 33, 42.3 %). There was substantial heterogeneity across studies in relation to saliva collection protocols and reported cortisol parameters. Only 39 studies (50 %) calculated a rhythm parameter such as the diurnal slope or the cortisol awakening response (CAR). Patterns of change in cortisol parameters were inconsistent both within and across studies and there was low agreement with clinical findings. Salivary diurnal cortisol is measured inconsistently across RCTs, which is limiting the interpretation of findings within and across studies. This indicates a need for more validation work, along with consensus guidelines.
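The rhythm parameters named above (diurnal slope, CAR) are simple functions of timed saliva samples; a minimal sketch with hypothetical sampling times and cortisol values, not data from any reviewed trial:

```python
def cortisol_awakening_response(samples):
    """CAR: rise from the waking sample to the post-waking peak (0-45 min).
    `samples` maps minutes since waking to cortisol (nmol/L)."""
    waking = samples[0]
    peak = max(v for t, v in samples.items() if 0 < t <= 45)
    return peak - waking

def diurnal_slope(waking_value, evening_value, hours_between):
    """Change per hour from waking to evening; negative = normal decline."""
    return (evening_value - waking_value) / hours_between

morning = {0: 12.0, 30: 19.0, 45: 17.5}          # hypothetical samples
print(cortisol_awakening_response(morning))       # 7.0 nmol/L
print(round(diurnal_slope(12.0, 3.0, 14.0), 2))   # -0.64 nmol/L per hour
```

Heterogeneity in exactly these choices, i.e. sampling times, number of samples, and which parameter is computed, is what the review flags as limiting comparison across trials.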
Bayesian Parameter Estimation for Heavy-Duty Vehicles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, Eric; Konan, Arnaud; Duran, Adam
2017-03-28
Accurate vehicle parameters are valuable for design, modeling, and reporting. Estimating vehicle parameters can be a very time-consuming process requiring tightly-controlled experimentation. This work describes a method to estimate vehicle parameters such as mass, coefficient of drag/frontal area, and rolling resistance using data logged during standard vehicle operation. The method uses Monte Carlo sampling to generate parameter sets, which are fed to a variant of the road load equation. Modeled road load is then compared to measured load to evaluate the probability of the parameter set. Acceptance of a proposed parameter set is determined using the probability ratio to the current state, so that the chain history will give a distribution of parameter sets. Compared to a single value, a distribution of possible values provides information on the quality of estimates and the range of possible parameter values. The method is demonstrated by estimating dynamometer parameters. Results confirm the method's ability to estimate reasonable parameter sets, and indicate an opportunity to increase the certainty of estimates through careful selection or generation of the test drive cycle.
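The acceptance step described (a probability ratio against the current state) is a Metropolis rule. The sketch below runs a random-walk chain over (mass, CdA, Crr) against the standard road load equation on synthetic data; the vehicle values, noise level, and step sizes are all assumptions for illustration, not numbers from this report.

```python
import math, random

random.seed(1)
G, RHO = 9.81, 1.2
TRUE = {"mass": 15000.0, "CdA": 6.0, "Crr": 0.007}   # hypothetical truck
NOISE = 50.0                                          # load sensor noise (N)

def road_load(p, v, a):
    # F = m*a + m*g*Crr + 0.5*rho*CdA*v^2
    return p["mass"] * a + p["mass"] * G * p["Crr"] + 0.5 * RHO * p["CdA"] * v * v

# Synthetic "logged operation": speed, acceleration, noisy measured load
data = []
for _ in range(200):
    v, a = random.uniform(5, 25), random.uniform(-1, 1)
    data.append((v, a, road_load(TRUE, v, a) + random.gauss(0, NOISE)))

def log_lik(p):
    return -sum((f - road_load(p, v, a)) ** 2 for v, a, f in data) / (2 * NOISE ** 2)

current = {"mass": 14000.0, "CdA": 5.0, "Crr": 0.005}  # deliberately wrong start
ll = log_lik(current)
step = {"mass": 20.0, "CdA": 0.02, "Crr": 0.0002}      # random-walk proposal scales
chain = []
for _ in range(6000):
    prop = {k: current[k] + random.gauss(0, step[k]) for k in current}
    ll_prop = log_lik(prop)
    if math.log(random.random()) < ll_prop - ll:       # Metropolis acceptance
        current, ll = prop, ll_prop
    chain.append(current["mass"])

post = chain[3000:]                                    # discard burn-in
mass_hat = sum(post) / len(post)
print(round(mass_hat))                                 # close to the true 15000 kg
```

As the abstract emphasizes, the retained chain gives a distribution for each parameter rather than a point estimate, so the spread of `post` quantifies estimate quality.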
Luo, Shezhou; Chen, Jing M; Wang, Cheng; Xi, Xiaohuan; Zeng, Hongcheng; Peng, Dailiang; Li, Dong
2016-05-30
Vegetation leaf area index (LAI), height, and aboveground biomass are key biophysical parameters. Corn is an important and globally distributed crop, and reliable estimations of these parameters are essential for corn yield forecasting, health monitoring and ecosystem modeling. Light Detection and Ranging (LiDAR) is considered an effective technology for estimating vegetation biophysical parameters. However, the estimation accuracies of these parameters are affected by multiple factors. In this study, we first estimated corn LAI, height and biomass (R2 = 0.80, 0.874 and 0.838, respectively) using the original LiDAR data (7.32 points/m2), and the results showed that LiDAR data could accurately estimate these biophysical parameters. Second, comprehensive research was conducted on the effects of LiDAR point density, sampling size and height threshold on the estimation accuracy of LAI, height and biomass. Our findings indicated that LiDAR point density had an important effect on the estimation accuracy for vegetation biophysical parameters; however, high point density did not always produce highly accurate estimates, and reduced point density could deliver reasonable estimation results. Furthermore, the results showed that sampling size and height threshold were additional key factors that affect the estimation accuracy of biophysical parameters. Therefore, the optimal sampling size and the height threshold should be determined to improve the estimation accuracy of biophysical parameters. Our results also implied that a higher LiDAR point density, larger sampling size and height threshold were required to obtain accurate corn LAI estimation when compared with height and biomass estimations. In general, our results provide valuable guidance for LiDAR data acquisition and estimation of vegetation biophysical parameters using LiDAR data.
Number of discernible object colors is a conundrum.
Masaoka, Kenichiro; Berns, Roy S; Fairchild, Mark D; Moghareh Abed, Farhad
2013-02-01
Widely varying estimates of the number of discernible object colors have been made by using various methods over the past 100 years. To clarify the source of the discrepancies in the previous, inconsistent estimates, the number of discernible object colors is estimated over a wide range of color temperatures and illuminance levels using several chromatic adaptation models, color spaces, and color difference limens. Efficient and accurate models are used to compute optimal-color solids and count the number of discernible colors. A comprehensive simulation reveals limitations in the ability of current color appearance models to estimate the number of discernible colors even if the color solid is smaller than the optimal-color solid. The estimates depend on the color appearance model, color space, and color difference limen used. The fundamental problem lies in the von Kries-type chromatic adaptation transforms, which have an unknown effect on the ranking of the number of discernible colors at different color temperatures.
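The dependence of such counts on the color-difference limen can be made concrete: for a fixed solid, the number of discernible colors scales roughly as volume / limen³. A toy sketch counting limen-sized cells inside a sphere (a crude geometric stand-in for an optimal-color solid, not a real color space):

```python
def count_discernible(radius, limen):
    """Count lattice cells of side `limen` whose centers fall inside a
    sphere of the given radius."""
    n = int(radius / limen) + 1
    count = 0
    for i in range(-n, n + 1):
        for j in range(-n, n + 1):
            for k in range(-n, n + 1):
                if (i * i + j * j + k * k) * limen * limen <= radius * radius:
                    count += 1
    return count

# Halving the limen multiplies the count by roughly 2^3 = 8, so modest
# disagreements between color difference limens inflate into large
# disagreements between color counts.
c1 = count_discernible(20.0, 2.0)
c2 = count_discernible(20.0, 1.0)
print(c1, c2, round(c2 / c1, 1))  # the ratio is close to 8
```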
Spits, Christine; Wallace, Luke; Reinke, Karin
2017-04-20
Visual assessment, following guides such as the Overall Fuel Hazard Assessment Guide (OFHAG), is a common approach for assessing the structure and hazard of varying bushfire fuel layers. Visual assessments can be vulnerable to imprecision due to subjectivity between assessors, while emerging techniques such as image-based point clouds can offer land managers potentially more repeatable descriptions of fuel structure. This study compared the variability of estimates of surface and near-surface fuel attributes generated by eight assessment teams using the OFHAG and Fuels3D, a smartphone method utilising image-based point clouds, within three assessment plots in an Australian lowland forest. Surface fuel hazard scores derived from underpinning attributes were also assessed. Overall, this study found considerable variability between teams on most visually assessed variables, resulting in inconsistent hazard scores. Variability was observed within point cloud estimates but was, however, on average two to eight times less than that seen in visual estimates, indicating greater consistency and repeatability of this method. It is proposed that while variability within the Fuels3D method may be overcome through improved methods and equipment, inconsistencies in the OFHAG are likely due to the inherent subjectivity between assessors, which may be more difficult to overcome. This study demonstrates the capability of the Fuels3D method to efficiently and consistently collect data on fuel hazard and structure, and, as such, this method shows potential for use in fire management practices where accurate and reliable data is essential.
Kinematic Measurement of Knee Prosthesis from Single-Plane Projection Images
NASA Astrophysics Data System (ADS)
Hirokawa, Shunji; Ariyoshi, Shogo; Takahashi, Kenji; Maruyama, Koichi
In this paper, the measurement of 3D motion from 2D perspective projections of knee prosthesis is described. The technique reported by Banks and Hodge was further developed in this study. The estimation was performed in two steps. The first-step estimation was performed on the assumption of orthogonal projection. Then, the second-step estimation was subsequently carried out based upon the perspective projection to accomplish more accurate estimation. The simulation results demonstrated that the technique achieved sufficient accuracy in position/orientation estimation for prosthetic kinematics. Then we applied our algorithm to the CCD images, thereby examining the influences of various artifacts, possibly incorporated through an imaging process, on the estimation accuracies. We found that accuracies in the experiment were influenced mainly by the geometric discrepancies between the prosthesis component and the computer-generated model and by the spatial inconsistencies between the coordinate axes of the positioner and those of the computer model. However, we verified that our algorithm could achieve proper and consistent estimation even for the CCD images.
Drinking-water disinfection by-products and semen quality: a cross-sectional study in China.
Zeng, Qiang; Wang, Yi-Xin; Xie, Shao-Hua; Xu, Liang; Chen, Yong-Zhe; Li, Min; Yue, Jing; Li, Yu-Feng; Liu, Ai-Lin; Lu, Wen-Qing
2014-07-01
Exposure to disinfection by-products (DBPs) has been demonstrated to impair male reproductive health in animals, but human evidence is limited and inconsistent. We examined the association between exposure to drinking-water DBPs and semen quality in a Chinese population. We recruited 2,009 men seeking semen analysis from the Reproductive Center of Tongji Hospital in Wuhan, China, between April 2011 and May 2012. Each man provided a semen sample and a urine sample. Semen samples were analyzed for sperm concentration, sperm motility, and sperm count. As a biomarker of exposure to drinking-water DBPs, trichloroacetic acid (TCAA) was measured in the urine samples. The mean (median) urinary TCAA concentration was 9.58 (7.97) μg/L (interquartile range, 6.01-10.96 μg/L). Compared with men with urine TCAA in the lowest quartile, increased adjusted odds ratios (ORs) were estimated for below-reference sperm concentration in men with TCAA in the second and fourth quartiles (OR = 1.79; 95% CI: 1.19, 2.69 and OR = 1.51; 95% CI: 0.98, 2.31, respectively), for below-reference sperm motility in men with TCAA in the second and third quartiles (OR = 1.46; 95% CI: 1.12, 1.90 and OR = 1.30; 95% CI: 1.00, 1.70, respectively), and for below-reference sperm count in men with TCAA in the second quartile (OR 1.62; 95% CI: 1.04, 2.55). Nonmonotonic associations with TCAA quartiles were also estimated for semen parameters modeled as continuous outcomes, although significant negative associations were estimated for all quartiles above the reference level for sperm motility. Our findings suggest that exposure to drinking-water DBPs may contribute to decreased semen quality in humans.
Aerodynamic analysis and simulation of a twin-tail tilt-duct unmanned aerial vehicle
NASA Astrophysics Data System (ADS)
Abdollahi, Cyrus
The tilt-duct vertical takeoff and landing (VTOL) concept has been around since the early 1960s; however, to date the design has never passed the research phase and development phase. Nearly 50 years later, American Dynamics Flight Systems (ADFS) is developing the AD-150, a 2,250lb weight class unmanned aerial vehicle (UAV) configured with rotating ducts on each wingtip. Unlike its predecessor, the Doak VZ-4, the AD-150 features a V tail and wing sweep -- both of which affect the aerodynamic behavior of the aircraft. Because no aircraft of this type has been built and tested, vital aerodynamic research was conducted on the bare airframe behavior (without wingtip ducts). Two weeks of static and dynamic testing were performed on a 3/10th scale model at the University of Maryland's 7' x 10' low speed wind tunnel to facilitate the construction of a nonlinear flight simulator. A total of 70 dynamic tests were performed to obtain damping parameter estimates using the ordinary least squares methodology. Validation, based on agreement between static and dynamic estimates of the pitch and yaw stiffness terms, showed an average percent error of 14.0% and 39.6%, respectively. These inconsistencies were attributed to: large dynamic displacements not encountered during static testing, regressor collinearity, and, while not conclusively proven, differences in static and dynamic boundary layer development. Overall, the damping estimates were consistent and repeatable, with low scatter over a 95% confidence interval. Finally, a basic open loop simulation was executed to demonstrate the instability of the aircraft. As a result, it is recommended that future work be performed to determine trim points and linear models for controls development.
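The ordinary least squares step used for the damping estimates reduces to a slope fit on test records; a self-contained sketch on synthetic pitch data, where the coefficient names and every number are hypothetical rather than AD-150 values:

```python
import random

random.seed(7)
CM0_TRUE, CMQ_TRUE = 0.02, -0.35   # hypothetical static and damping terms

# Synthetic dynamic-test records: pitch rate q and measured moment coefficient
q = [random.uniform(-0.5, 0.5) for _ in range(120)]
cm = [CM0_TRUE + CMQ_TRUE * qi + random.gauss(0, 0.005) for qi in q]

# Ordinary least squares: slope = damping derivative, intercept = static term
n = len(q)
qbar, cbar = sum(q) / n, sum(cm) / n
cmq_hat = sum((qi - qbar) * (ci - cbar) for qi, ci in zip(q, cm)) \
          / sum((qi - qbar) ** 2 for qi in q)
cm0_hat = cbar - cmq_hat * qbar
print(round(cmq_hat, 2), round(cm0_hat, 3))  # recovers values near -0.35 and 0.02
```

Refitting over resampled records is one simple way to obtain the scatter and confidence intervals reported for the damping estimates.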
Hippeläinen, Eero; Mäkelä, Teemu; Kaasalainen, Touko; Kaleva, Erna
2017-12-01
Developments in single photon emission tomography instrumentation and reconstruction methods present a potential for decreasing acquisition times. One such recent option for myocardial perfusion imaging (MPI) is IQ-SPECT. This study was motivated by the inconsistency in the reported ejection fraction (EF) and left ventricular (LV) volume results between IQ-SPECT and more conventional low-energy high-resolution (LEHR) collimation protocols. IQ-SPECT and LEHR quantitative results were compared while the equivalent number of iterations (EI) was varied. The end-diastolic (EDV) and end-systolic volumes (ESV) and the derived EF values were investigated. A dynamic heart phantom was used to produce repeatable ESVs, EDVs and EFs. Phantom performance was verified by comparing the set EF values to those measured from a gated multi-slice X-ray computed tomography (CT) scan (EF True). The phantom with an EF setting of 45, 55, 65 and 70% was imaged with both IQ-SPECT and LEHR protocols. The data were reconstructed with different EI, and two commonly used clinical myocardium delineation software packages were used to evaluate the LV volumes. The CT verification showed that the phantom EF settings were repeatable and accurate, with EF True being within 1 percentage point of the manufacturer's nominal value. Depending on EI, both MPI protocols can be made to produce correct EF estimates, but the IQ-SPECT protocol produced on average 41 and 42% smaller EDV and ESV when compared to the phantom's volumes, while the LEHR protocol underestimated volumes by 24 and 21%, respectively. The volume results were largely similar between the delineation methods used. The reconstruction parameters can greatly affect the volume estimates obtained from perfusion studies. IQ-SPECT produces systematically smaller LV volumes than the conventional LEHR MPI protocol. The volume estimates are also software dependent.
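The pattern reported above, volumes biased by roughly 40% yet EF still recoverable, follows from EF being a ratio of volumes, so a near-uniform shrinkage largely cancels. A sketch using the quoted underestimation fractions on hypothetical true volumes:

```python
def ejection_fraction(edv, esv):
    """EF (%) from end-diastolic and end-systolic volumes (mL)."""
    return 100.0 * (edv - esv) / edv

edv, esv = 120.0, 54.0                  # hypothetical true volumes, EF = 55%
print(round(ejection_fraction(edv, esv), 1))                 # 55.0

# IQ-SPECT-like bias: EDV 41% small, ESV 42% small -> EF barely moves
print(round(ejection_fraction(edv * 0.59, esv * 0.58), 1))   # 55.8
# LEHR-like bias: EDV 24% small, ESV 21% small
print(round(ejection_fraction(edv * 0.76, esv * 0.79), 1))   # 53.2
```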
Polarized neutron scattering study of the multiple order parameter system NdB4
NASA Astrophysics Data System (ADS)
Metoki, N.; Yamauchi, H.; Matsuda, M.; Fernandez-Baca, J. A.; Watanuki, R.; Hagihala, M.
2018-05-01
Neutron polarization analysis has been carried out in order to clarify the magnetic structures of the multiple order parameter f-electron system NdB4. We confirmed the noncollinear "all-in all-out" structure (Γ4) of the in-plane moment, which is in good agreement with our previous neutron powder diffraction study. We found that the magnetic moment along the c-axis, mc, showed a diagonally antiferromagnetic structure (Γ10), inconsistent with the previously reported "vortex" structure (Γ2). The microscopic mixture of these two structures with q⃗0 = (0,0,0) appears in phase II and remains stable in phases III and IV, where an incommensurate modulation coexists. The unusual magnetic ordering is phenomenologically understood via Landau theory with the primary order parameter Γ4 coupled with the higher-order secondary order parameter Γ10. The magnetic moments were estimated to be 1.8 ± 0.2 and 0.2 ± 0.05 μB at T = 7.5 K for Γ4 and Γ10, respectively. We also found a long-period incommensurate modulation of the q⃗1 = (0,0,1/2) antiferromagnetic structure of mc with the propagation q⃗s1 = (0.14,0.14,0.1) and q⃗s2 = (0.2,0,0.1) in phases III and IV, respectively. The amplitude of sinusoidal modulation was about mc = 1.0 ± 0.2 μB at T = 1.5 K. The local (0,0,1/2) structure consists of in-plane ferromagnetic and out-of-plane antiferromagnetic coupling of mc, opposite to the coexisting Γ10. The mc of Γ10 is significantly enhanced up to 0.6 μB at T = 1.5 K, which is accompanied by the incommensurate modulations. The Landau phenomenological approach indicates that the higher-order magnetic and/or multipole interactions based on the pseudoquartet f-electron state play important roles.
Individual discount rates and smoking: evidence from a field experiment in Denmark.
Harrison, Glenn W; Lau, Morten I; Rutström, E Elisabet
2010-09-01
We elicit measures of individual discount rates from a representative sample of the Danish population and test two substantive hypotheses. The first hypothesis is that smokers have higher individual discount rates than non-smokers. The second hypothesis is that smokers are more likely to have time inconsistent preferences than non-smokers, where time inconsistency is indicated by a hyperbolic discounting function. We control for the concavity of the utility function in our estimates of individual discount rates and find that male smokers have significantly higher discount rates than male non-smokers. However, smoking has no significant association with discount rates among women. This result is robust across exponential and hyperbolic discounting functions. We consider the sensitivity of our conclusions to a statistical specification that allows each observation to potentially be generated by more than one latent data-generating process. Copyright 2010 Elsevier B.V. All rights reserved.
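The hyperbolic-versus-exponential distinction drives the time-inconsistency test: under hyperbolic discounting the ranking of two dated rewards can reverse as both recede in time, which exponential discounting never allows. A sketch with illustrative (not estimated) rates:

```python
def exponential(amount, t_years, r=0.10):
    """Present value under exponential (time-consistent) discounting."""
    return amount * (1.0 + r) ** (-t_years)

def hyperbolic(amount, t_years, k=1.0):
    """Present value under hyperbolic discounting."""
    return amount / (1.0 + k * t_years)

# Choice: 100 at time t vs 120 at time t+1, evaluated for t = 0 and t = 5.
# Exponential: the ranking never depends on t (consistent preferences).
assert (exponential(100, 0) > exponential(120, 1)) == \
       (exponential(100, 5) > exponential(120, 6))

# Hyperbolic: the smaller-sooner reward wins only when it is immediate,
# so plans made for the future are reversed as the dates approach.
print(hyperbolic(100, 0) > hyperbolic(120, 1))  # True: take 100 now
print(hyperbolic(100, 5) > hyperbolic(120, 6))  # False: wait for 120
```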
NASA Technical Reports Server (NTRS)
Minster, J. B.; Jordan, T. H.
1977-01-01
A data set comprising 110 spreading rates, 78 transform fault azimuths and 142 earthquake slip vectors was inverted to yield a new instantaneous plate motion model, designated RM2. The mean averaging interval for the relative motion data was reduced to less than 3 My. A detailed comparison of RM2 with angular velocity vectors which best fit the data along individual plate boundaries indicates that RM2 performs close to optimally in most regions, with several notable exceptions. On the other hand, a previous estimate (RM1) failed to satisfy an extensive set of new data collected in the South Atlantic Ocean. It is shown that RM1 incorrectly predicts the plate kinematics in the South Atlantic because the presently available data are inconsistent with the plate geometry assumed in deriving RM1. It is demonstrated that this inconsistency can be remedied by postulating the existence of internal deformation within the Indian plate, although alternate explanations are possible.
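In such models each plate pair has an angular velocity (Euler) vector ω, and the predicted relative velocity at a boundary point r is v = ω × r. A sketch of the predicted rate magnitude, using a hypothetical pole rather than any RM2 value:

```python
import math

EARTH_R = 6371.0  # km

def to_cart(lat_deg, lon_deg):
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    return (math.cos(lat) * math.cos(lon),
            math.cos(lat) * math.sin(lon),
            math.sin(lat))

def plate_speed(pole_lat, pole_lon, rate_deg_per_my, lat, lon):
    """Speed |omega x r| = omega * R * sin(angular distance to the pole),
    in km/My (numerically equal to mm/yr)."""
    p, r = to_cart(pole_lat, pole_lon), to_cart(lat, lon)
    cosd = sum(a * b for a, b in zip(p, r))
    omega = math.radians(rate_deg_per_my)
    return omega * EARTH_R * math.sqrt(max(0.0, 1.0 - cosd * cosd))

# Hypothetical pole at 60N, 30W rotating 0.3 deg/My; a point 90 degrees
# from the pole moves at the maximum rate omega * R.
print(round(plate_speed(60.0, -30.0, 0.3, 0.0, 60.0), 1))  # ~33.4 km/My
```

Fitting a model like RM2 amounts to choosing the pole positions and rates that best reproduce observed spreading rates, transform azimuths, and slip vectors through this relation.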
Racial composition, unemployment, and crime: dealing with inconsistencies in panel designs.
Worrall, John L
2008-09-01
Racial composition and unemployment have appeared as either theoretically-relevant controls or variables of substantive interest in numerous studies of crime. While there is no clear consensus in the literature as to their statistical significance, the lack of consensus has been most apparent in panel analyses with unit fixed effects. One explanation for this is that racial composition and unemployment are fairly invariant, or slow-moving, which leads to collinearity with unit dummies. A number of pertinent studies are reviewed to illustrate how two slow-moving variables, percent black and percent unemployed, have behaved inconsistently. A fixed effects vector decomposition procedure [Plumper, V., Troeger, V. E., 2007. Efficient estimation of time-invariant and rarely changing variables in finite sample panel analyses with unit fixed effects. Political Analysis, 15, 124-139.] is used to illustrate how these variables' coefficients appear positive and significant when the slow-moving process is accounted for.
2015-01-01
Energetic carrying capacity of habitats for wildlife is a fundamental concept used to better understand population ecology and prioritize conservation efforts. However, carrying capacity can be difficult to estimate accurately and simplified models often depend on many assumptions and few estimated parameters. We demonstrate the complex nature of parameterizing energetic carrying capacity models and use an experimental approach to describe a necessary parameter, a foraging threshold (i.e., density of food at which animals no longer can efficiently forage and acquire energy), for a guild of migratory birds. We created foraging patches with different fixed prey densities and monitored the numerical and behavioral responses of waterfowl (Anatidae) and depletion of foods during winter. Dabbling ducks (Anatini) fed extensively in plots and all initial densities of supplemented seed were rapidly reduced to 10 kg/ha and other natural seeds and tubers combined to 170 kg/ha, despite different starting densities. However, ducks did not abandon or stop foraging in wetlands when seed reduction ceased approximately two weeks into the winter-long experiment nor did they consistently distribute according to ideal-free predictions during this period. Dabbling duck use of experimental plots was not related to initial seed density, and residual seed and tuber densities varied among plant taxa and wetlands but not plots. Herein, we reached several conclusions: 1) foraging effort and numerical responses of dabbling ducks in winter were likely influenced by factors other than total food densities (e.g., predation risk, opportunity costs, forager condition), 2) foraging thresholds may vary among foraging locations, and 3) the numerical response of dabbling ducks may be an inconsistent predictor of habitat quality relative to seed and tuber density. 
We describe implications on habitat conservation objectives of using different foraging thresholds in energetic carrying capacity models and suggest scientists reevaluate assumptions of these models used to guide habitat conservation. PMID:25790255
Too much ado about instrumental variable approach: is the cure worse than the disease?
Baser, Onur
2009-01-01
To review the efficacy of instrumental variable (IV) models in addressing a variety of assumption violations to ensure standard ordinary least squares (OLS) estimates are consistent. IV models gained popularity in outcomes research because of their ability to consistently estimate the average causal effects even in the presence of unmeasured confounding. However, in order for this consistent estimation to be achieved, several conditions must hold. In this article, we provide an overview of the IV approach, examine possible tests to check the prerequisite conditions, and illustrate how weak instruments may produce inconsistent and inefficient results. We use two IVs and apply Shea's partial R-square method, the Anderson canonical correlation, and Cragg-Donald tests to check for weak instruments. Hall-Peixe tests are applied to see if any of these instruments are redundant in the analysis. A total of 14,952 asthma patients from the MarketScan Commercial Claims and Encounters Database were examined in this study. Patient health care was provided under a variety of fee-for-service, fully capitated, and partially capitated health plans, including preferred provider organizations, point of service plans, indemnity plans, and health maintenance organizations. We used the controller-reliever copay ratio and physician practice/prescribing patterns as instruments. We demonstrated that the former was a weak and redundant instrument producing inconsistent and inefficient estimates of the effect of treatment. The results were worse than the results from standard regression analysis. Despite the obvious benefit of IV models, the method should not be used blindly. Several strong conditions are required for these models to work, and each of them should be tested. Otherwise, bias and precision of the results will be statistically worse than the results achieved by simply using standard OLS.
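The weak-instrument problem can be illustrated end to end with a single instrument: OLS is biased by an unobserved confounder, the IV ratio estimator removes the bias, and the first-stage F statistic (rule of thumb: F > 10) gauges instrument strength. All data below are synthetic and the coefficients are arbitrary:

```python
import random

random.seed(3)
N = 500
# u is an unobserved confounder of treatment x and outcome y
z = [random.gauss(0, 1) for _ in range(N)]                 # instrument
u = [random.gauss(0, 1) for _ in range(N)]
x = [0.4 * zi + ui + random.gauss(0, 1) for zi, ui in zip(z, u)]
y = [2.0 * xi + 1.5 * ui + random.gauss(0, 1) for xi, ui in zip(x, u)]

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / (len(a) - 1)

beta_ols = cov(x, y) / cov(x, x)   # inconsistent: u moves both x and y
beta_iv = cov(z, y) / cov(z, x)    # IV estimator, consistent for 2.0

# First-stage F for the single instrument: F = t^2 of z in x = a + b*z + e
b1 = cov(z, x) / cov(z, z)
a0 = sum(x) / N - b1 * sum(z) / N
s2 = sum((xi - a0 - b1 * zi) ** 2 for xi, zi in zip(x, z)) / (N - 2)
F = (b1 / (s2 / ((N - 1) * cov(z, z))) ** 0.5) ** 2
print(round(beta_ols, 2), round(beta_iv, 2), round(F, 1))
```

Shrinking the 0.4 first-stage coefficient toward zero weakens the instrument: F falls below 10 and `beta_iv` becomes erratic, which is the failure mode the article demonstrates on the claims data.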
Microseismic Image-domain Velocity Inversion: Case Study From The Marcellus Shale
NASA Astrophysics Data System (ADS)
Shragge, J.; Witten, B.
2017-12-01
Seismic monitoring at injection wells relies on generating accurate location estimates of detected (micro-)seismicity. Event location estimates assist in optimizing well and stage spacings, assessing potential hazards, and establishing causation of larger events. The largest impediment to generating accurate location estimates is an accurate velocity model. For surface-based monitoring the model should capture 3D velocity variation, yet, rarely is the laterally heterogeneous nature of the velocity field captured. Another complication for surface monitoring is that the data often suffer from low signal-to-noise levels, making velocity updating with established techniques difficult due to uncertainties in the arrival picks. We use surface-monitored field data to demonstrate that a new method requiring no arrival picking can improve microseismic locations by jointly locating events and updating 3D P- and S-wave velocity models through image-domain adjoint-state tomography. This approach creates a complementary set of images for each chosen event through wave-equation propagation and correlating combinations of P- and S-wavefield energy. The method updates the velocity models to optimize the focal consistency of the images through adjoint-state inversions. We demonstrate the functionality of the method using a surface array of 192 three-component geophones over a hydraulic stimulation in the Marcellus Shale. Applying the proposed joint location and velocity-inversion approach significantly improves the estimated locations. To assess event location accuracy, we propose a new measure of inconsistency derived from the complementary images. By this measure the location inconsistency decreases by 75%. The method has implications for improving the reliability of microseismic interpretation with low signal-to-noise data, which may increase hydrocarbon extraction efficiency and improve risk assessment from injection related seismicity.
NASA Astrophysics Data System (ADS)
Winant, Celeste D.; Aparici, Carina Mari; Zelnik, Yuval R.; Reutter, Bryan W.; Sitek, Arkadiusz; Bacharach, Stephen L.; Gullberg, Grant T.
2012-01-01
Computer simulations, a phantom study and a human study were performed to determine whether a slowly rotating single-photon computed emission tomography (SPECT) system could provide accurate arterial input functions for quantification of myocardial perfusion imaging using kinetic models. The errors induced by data inconsistency associated with imaging with slow camera rotation during tracer injection were evaluated with an approach called SPECT/P (dynamic SPECT from positron emission tomography (PET)) and SPECT/D (dynamic SPECT from database of SPECT phantom projections). SPECT/P simulated SPECT-like dynamic projections using reprojections of reconstructed dynamic 94Tc-methoxyisobutylisonitrile (94Tc-MIBI) PET images acquired in three human subjects (1 min infusion). This approach was used to evaluate the accuracy of estimating myocardial wash-in rate parameters K1 for rotation speeds providing 180° of projection data every 27 or 54 s. Blood input and myocardium tissue time-activity curves (TACs) were estimated using spatiotemporal splines. These were fit to a one-compartment perfusion model to obtain wash-in rate parameters K1. For the second method (SPECT/D), an anthropomorphic cardiac torso phantom was used to create real SPECT dynamic projection data of a tracer distribution derived from 94Tc-MIBI PET scans in the blood pool, myocardium, liver and background. This method introduced attenuation, collimation and scatter into the modeling of dynamic SPECT projections. Both approaches were used to evaluate the accuracy of estimating myocardial wash-in parameters for rotation speeds providing 180° of projection data every 27 and 54 s. Dynamic cardiac SPECT was also performed in a human subject at rest using a hybrid SPECT/CT scanner. Dynamic measurements of 99mTc-tetrofosmin in the myocardium were obtained using an infusion time of 2 min. Blood input, myocardium tissue and liver TACs were estimated using the same spatiotemporal splines. 
The spatiotemporal maximum-likelihood expectation-maximization (4D ML-EM) reconstructions gave more accurate reconstructions than did standard frame-by-frame static 3D ML-EM reconstructions. The SPECT/P results showed that 4D ML-EM reconstruction gave higher and more accurate estimates of K1 than did 3D ML-EM, yielding anywhere from a 44% underestimation to 24% overestimation for the three patients. The SPECT/D results showed that 4D ML-EM reconstruction gave an overestimation of 28% and 3D ML-EM gave an underestimation of 1% for K1. For the patient study the 4D ML-EM reconstruction provided continuous images as a function of time of the concentration in both ventricular cavities and myocardium during the 2 min infusion. It is demonstrated that a 2 min infusion with a two-headed SPECT system rotating 180° every 54 s can produce measurements of blood pool and myocardial TACs, though the SPECT simulation studies showed that one must sample at least every 30 s to capture a 1 min infusion input function.
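The one-compartment wash-in fit referred to above amounts to convolving the blood input with K1*exp(-k2*t) and minimizing misfit to the tissue TAC. A grid-search sketch with a synthetic input function (illustrative shapes and values; the study itself estimated curves with spatiotemporal splines):

```python
import math

DT = 2.0 / 60.0  # 2 s sampling interval, in minutes

def blood_input(t):
    """Synthetic arterial input: fast rise, slower washout (arbitrary units)."""
    return 100.0 * (math.exp(-t / 3.0) - math.exp(-t / 0.3)) if t > 0 else 0.0

def tissue_curve(k1, k2, n):
    """One-compartment model dCt/dt = K1*Ca(t) - k2*Ct(t), Euler-integrated,
    equivalent to Ct(t) = K1 * (Ca conv exp(-k2 t))."""
    ct, out = 0.0, []
    for i in range(n):
        ct += (k1 * blood_input(i * DT) - k2 * ct) * DT
        out.append(ct)
    return out

n = int(10.0 / DT)                   # 10 min acquisition
target = tissue_curve(0.6, 0.15, n)  # "measured" myocardial TAC

# Coarse grid search over (K1, k2) minimizing the sum of squared misfit
best = min(((k1 / 100.0, k2 / 100.0)
            for k1 in range(20, 101, 5) for k2 in range(5, 41, 5)),
           key=lambda p: sum((a - b) ** 2
                             for a, b in zip(tissue_curve(*p, n), target)))
print(best)  # recovers (0.6, 0.15)
```

Errors in the blood input, such as those induced by slow rotation during injection, propagate directly into K1 through this fit, which is why the input-function accuracy is the focus of the study.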
Heidari, M.; Ranjithan, S.R.
1998-01-01
In using non-linear optimization techniques for estimation of parameters in a distributed ground water model, the initial values of the parameters and prior information about them play important roles. In this paper, the genetic algorithm (GA) is combined with the truncated-Newton search technique to estimate groundwater parameters for a confined steady-state ground water model. Use of prior information about the parameters is shown to be important in estimating correct or near-correct values of parameters on a regional scale. The amount of prior information needed for an accurate solution is estimated by evaluation of the sensitivity of the performance function to the parameters. For the example presented here, it is experimentally demonstrated that only one piece of prior information of the least sensitive parameter is sufficient to arrive at the global or near-global optimum solution. For hydraulic head data with measurement errors, the error in the estimation of parameters increases as the standard deviation of the errors increases. Results from our experiments show that, in general, the accuracy of the estimated parameters depends on the level of noise in the hydraulic head data and the initial values used in the truncated-Newton search technique.
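The two-stage strategy above (a global evolutionary search seeding a truncated-Newton refinement) can be sketched with SciPy. Differential evolution stands in for the GA and the Rosenbrock function stands in for the groundwater misfit, so everything here is illustrative rather than the paper's model.

```python
import numpy as np
from scipy.optimize import differential_evolution, minimize

def misfit(p):
    """Stand-in objective (Rosenbrock); a real run would compare simulated and observed heads."""
    x, y = p
    return (1 - x) ** 2 + 100 * (y - x ** 2) ** 2

bounds = [(-2, 2), (-2, 2)]
coarse = differential_evolution(misfit, bounds, seed=1, maxiter=50)  # GA-like global stage
fine = minimize(misfit, coarse.x, method="TNC", bounds=bounds)       # truncated-Newton refinement
print(fine.x)
```

The global stage supplies the "good initial values" the abstract emphasizes; the truncated-Newton stage then converges quickly from that starting point.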
Zhang, Manyun; Wang, Jun; Bai, Shahla Hosseini; Teng, Ying; Xu, Zhihong
2018-06-02
Phytoremediation with biochar addition might alleviate pollutant toxicity to soil microorganisms. It is uncertain to what extent the biochar addition rate could affect the activities of enzymes related to soil nitrogen (N) mineralization and alter the fungal community under phytoremediation. This study aimed to reveal the effects of Medicago sativa L. (alfalfa) phytoremediation, alone or with biochar additions, on soil protease and chitinase activities and the fungal community, and to link the responses of microbial parameters with biochar addition rates. The alfalfa phytoremediation enhanced soil protease activities, and relative to phytoremediation alone, biochar additions had inconsistent impacts on the corresponding functional gene abundances. Compared with the blank control, alfalfa phytoremediation, alone or with biochar additions, increased fungal biomass and community richness estimators. Moreover, relative to phytoremediation alone, the relative abundance of the phylum Zygomycota was also increased by biochar additions. The whole soil fungal community was not significantly changed by alfalfa phytoremediation alone, but was changed by alfalfa phytoremediation with 3.0% (w/w) or 6.0% biochar addition. This study suggested that alfalfa phytoremediation could enhance N mineralization enzyme activities and that biochar addition rates affected the responses of the fungal community to alfalfa phytoremediation.
Analysis of the influence of a Metha-type metaphysial stem on biomechanical parameters.
Pozowski, Andrzej; Ścigała, Krzysztof; Kierzek, Andrzej; Paprocka-Borowicz, Małgorzata; Kuciel-Lewandowska, Jadwiga
2013-01-01
Full postoperative loading of the limb is possible if patients are properly selected and qualified for hip arthroplasty and the requirements as to the proper position of the metaphysial stem are met. A lack of precision, or patient qualification that does not satisfy the fixed criteria, may result in stem placement inconsistent with these assumptions. An analysis based on the finite element method (FEM) enables one to plan the magnitude of operated-joint loading on the basis of the position of the stem in the postoperative radiograph. By analyzing the distribution of bone tissue deformations, one can identify the zones where the spongy bone is overloaded and determine the strain level in comparison with that determined for a model of the bone with the stem in the proper position. On the basis of the results obtained, one can estimate the range of loads for the operated limb which will not result in the loss of the stem's primary stability prior to obtaining secondary stability through osteointegration. Moreover, an analysis of the formation of bone structures around the stem showed that incorrect setting of a Metha-type stem may lead to the initiation of loosening.
Monitoring and evaluation of wire mesh forming life
NASA Astrophysics Data System (ADS)
Enemuoh, Emmanuel U.; Zhao, Ping; Kadlec, Alec
2018-03-01
Forming tables are used with stainless steel wire mesh conveyor belts to produce a variety of products. The forming tables will typically run continuously for several days, with some hours of scheduled downtime for maintenance, cleaning and part replacement after several weeks of operation. The wire mesh conveyor belts show large variation in their remaining life due to associated variations in their nominal thicknesses. Currently the industry depends on seasoned operators to determine the replacement time for the wire mesh formers. The drawbacks of this approach are inconsistent judgements across operators and the lack of recorded data from which a more consistent decision-making system for wire mesh life prediction and replacement could be developed. In this study, diagnostic measurements of the health of a wire mesh former are investigated and developed. The wire mesh quality characteristics considered are thermal measurement, tension, gage thickness, and wear. The results show that real-time thermal and wear measurements provide suitable data for estimating wire mesh failure and can therefore serve as diagnostic parameters for a structural health monitoring (SHM) system for stainless steel wire mesh formers.
Hobbs, Brian P.; Carlin, Bradley P.; Mandrekar, Sumithra J.; Sargent, Daniel J.
2011-01-01
Bayesian clinical trial designs offer the possibility of a substantially reduced sample size, increased statistical power, and reductions in cost and ethical hazard. However, when prior and current information conflict, Bayesian methods can lead to higher than expected Type I error, as well as the possibility of a costlier and lengthier trial. This motivates an investigation of the feasibility of hierarchical Bayesian methods for incorporating historical data that are adaptively robust to prior information that reveals itself to be inconsistent with the accumulating experimental data. In this paper, we present several models that allow the commensurability of the information in the historical and current data to determine how much historical information is used. A primary tool is elaborating the traditional power prior approach based upon a measure of commensurability for Gaussian data. We compare the frequentist performance of several methods using simulations, and close with an example of a colon cancer trial that illustrates a linear models extension of our adaptive borrowing approach. Our proposed methods produce more precise estimates of the model parameters, in particular conferring statistical significance to the observed reduction in tumor size for the experimental regimen as compared to the control regimen. PMID:21361892
Analytical Problems and Suggestions in the Analysis of Behavioral Economic Demand Curves.
Yu, Jihnhee; Liu, Liu; Collins, R Lorraine; Vincent, Paula C; Epstein, Leonard H
2014-01-01
Behavioral economic demand curves (Hursh, Raslear, Shurtleff, Bauman, & Simmons, 1988) are innovative approaches to characterizing the relationship between consumption of a substance and its price. In this article, we investigate common analytical issues in the use of behavioral economic demand curves that can cause inconsistent interpretations, and we provide methodological suggestions to address those issues. We first demonstrate that log transformation with different added values for handling zeros changes model parameter estimates dramatically. Second, demand curves are often analyzed using an overparameterized model, which results in inefficient use of the available data and no assessment of the variability among individuals. To address these issues, we apply a nonlinear mixed-effects model based on multivariate error structures that has not previously been used to analyze behavioral economic demand curves in the literature. We also propose analytical formulas for the standard errors of derived values such as Pmax, Omax, and elasticity. The proposed model stabilizes the derived values regardless of the added increment used and provides substantially smaller standard errors. We illustrate the data analysis procedure using data from a relative reinforcement efficacy study of simulated marijuana purchasing.
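The first issue above, sensitivity to the added increment, is easy to reproduce: regressing log10(consumption + c) on log price with different values of c shifts the fitted slope (a crude stand-in for elasticity). The toy price-consumption series below is invented for illustration, not taken from the article's data.

```python
import numpy as np

price = np.array([0.5, 1, 2, 4, 8, 16, 32], dtype=float)
consumption = np.array([10, 9, 7, 4, 2, 1, 0], dtype=float)  # zero at the highest price

def fitted_slope(c):
    """Slope of log10(consumption + c) vs log10(price); c handles the zero."""
    y = np.log10(consumption + c)
    return np.polyfit(np.log10(price), y, 1)[0]

for c in (0.01, 0.1, 1.0):
    print(c, round(fitted_slope(c), 3))
```

The smaller the increment, the more the single zero observation dominates the fit, so the slope estimate swings with an essentially arbitrary analyst choice, which is the instability the article's mixed-effects approach is designed to remove.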
Nurse dose: what's in a concept?
Manojlovich, Milisa; Sidani, Souraya
2008-08-01
Many researchers have sought to address the relationship between nursing care and patient outcomes, with inconsistent and contradictory findings. We conducted a concept analysis and concept derivation, basing our work on theoretical and empirical literature, to derive nurse dose as a concept that pulls into a coherent whole disparate variables used in staffing studies. We defined nurse dose as the level of nursing reflected in the purity, amount, frequency, and duration of nursing care needed to produce favorable outcomes. All four parameters of nurse dose used together can facilitate our understanding of how nursing contributes to patient outcomes. Ongoing investigation will help to identify the parameters of nurse dose that have the greatest effect on outcomes.
Optimal Tuner Selection for Kalman-Filter-Based Aircraft Engine Performance Estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Garg, Sanjay
2011-01-01
An emerging approach in the field of aircraft engine controls and system health management is the inclusion of real-time, onboard models for the in-flight estimation of engine performance variations. This technology, typically based on Kalman-filter concepts, enables the estimation of unmeasured engine performance parameters that can be directly utilized by controls, prognostics, and health-management applications. A challenge that complicates this practice is the fact that an aircraft engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. Through Kalman-filter-based estimation techniques, the level of engine performance degradation can be estimated, given that there are at least as many sensors as health parameters to be estimated. However, in an aircraft engine, the number of sensors available is typically less than the number of health parameters, presenting an under-determined estimation problem. A common approach to address this shortcoming is to estimate a subset of the health parameters, referred to as model tuning parameters. The objective is to optimally select the model tuning parameters to minimize Kalman-filter-based estimation error. A tuner selection technique has been developed that specifically addresses the under-determined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine that seeks to minimize the theoretical mean-squared estimation error of the Kalman filter.
This approach can significantly reduce the error in onboard aircraft engine parameter estimation applications such as model-based diagnostic, controls, and life usage calculations. The advantage of the innovation is the significant reduction in estimation errors that it can provide relative to the conventional approach of selecting a subset of health parameters to serve as the model tuning parameter vector. Because this technique needs only to be performed during the system design process, it places no additional computation burden on the onboard Kalman filter implementation. The technique has been developed for aircraft engine onboard estimation applications, as this application typically presents an under-determined estimation problem. However, this generic technique could be applied to other industries using gas turbine engine technology.
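A brute-force version of the conventional subset-selection baseline that the technique above improves on can be sketched for a toy linear problem with more health parameters than sensors. The sensitivity matrix, the noise-free measurements, and the random parameter shifts are all assumptions for illustration, not the NASA engine model.

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)
H = rng.normal(size=(3, 5))          # 3 sensors x 5 health parameters (made up)

def mean_sq_error(subset, trials=500):
    """Average squared error in all 5 health parameters when only `subset` is tuned."""
    Hs = H[:, subset]                 # reduced model used by the estimator
    err = 0.0
    for _ in range(trials):
        p = rng.normal(size=5)        # true health-parameter shift
        y = H @ p                     # noise-free measurements, for simplicity
        q_hat, *_ = np.linalg.lstsq(Hs, y, rcond=None)
        p_hat = np.zeros(5)
        p_hat[list(subset)] = q_hat
        err += np.mean((p_hat - p) ** 2)
    return err / trials

best = min(itertools.combinations(range(5), 3), key=mean_sq_error)
print(best)
```

Even the best subset leaves error from the two untuned parameters; the paper's contribution is a transformed tuning vector that is not restricted to a subset, which is why it can do better than any choice this search returns.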
Investigating the Impact of Uncertainty about Item Parameters on Ability Estimation
ERIC Educational Resources Information Center
Zhang, Jinming; Xie, Minge; Song, Xiaolan; Lu, Ting
2011-01-01
Asymptotic expansions of the maximum likelihood estimator (MLE) and weighted likelihood estimator (WLE) of an examinee's ability are derived while item parameter estimators are treated as covariates measured with error. The asymptotic formulae present the amount of bias of the ability estimators due to the uncertainty of item parameter estimators.…
Jang, Cheongjae; Ha, Junhyoung; Dupont, Pierre E.; Park, Frank Chongwoo
2017-01-01
Although existing mechanics-based models of concentric tube robots have been experimentally demonstrated to approximate the actual kinematics, determining accurate estimates of model parameters remains difficult due to the complex relationship between the parameters and available measurements. Further, because the mechanics-based models neglect some phenomena like friction, nonlinear elasticity, and cross section deformation, it is also not clear if model error is due to model simplification or to parameter estimation errors. The parameters of the superelastic materials used in these robots can be slowly time-varying, necessitating periodic re-estimation. This paper proposes a method for estimating the mechanics-based model parameters using an extended Kalman filter as a step toward on-line parameter estimation. Our methodology is validated through both simulation and experiments. PMID:28717554
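The extended-Kalman-filter approach to parameter estimation can be illustrated on a far simpler system than a concentric tube robot: an unknown decay rate k is appended to the state and estimated from noisy observations of x. The dynamics and all numerical values below are invented for the sketch and have nothing to do with the robot model.

```python
import numpy as np

rng = np.random.default_rng(3)
dt, k_true = 0.05, 1.5
x_true = 2.0

z = np.array([2.0, 0.5])             # augmented state [x, k]; poor initial guess of k
P = np.diag([0.1, 1.0])
Q = np.diag([1e-6, 1e-6])            # tiny process noise: k is nearly constant
R = 1e-3                             # measurement noise variance

for _ in range(400):
    x_true += -k_true * x_true * dt
    y = x_true + rng.normal(0, np.sqrt(R))
    # predict: x <- x - k*x*dt, k <- k
    F = np.array([[1 - z[1] * dt, -z[0] * dt],
                  [0.0, 1.0]])       # Jacobian of the transition
    z = np.array([z[0] - z[1] * z[0] * dt, z[1]])
    P = F @ P @ F.T + Q
    # update with scalar measurement y = x
    Hm = np.array([[1.0, 0.0]])
    S = Hm @ P @ Hm.T + R
    K = (P @ Hm.T) / S
    z = z + (K * (y - z[0])).ravel()
    P = (np.eye(2) - K @ Hm) @ P

print(round(z[1], 2))
```

The cross-covariance built up by the Jacobian is what lets measurement residuals in x correct the parameter estimate, the same mechanism the paper exploits for on-line re-estimation of slowly varying material parameters.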
Apalasamy, Y D; Moy, F M; Rampal, S; Bulgiba, A; Mohamed, Z
2014-07-04
A genome-wide association study showed that the tagging single nucleotide polymorphism (SNP) rs7566605 in the insulin-induced gene 2 (INSIG2) was associated with obesity. Attempts to replicate this result in different populations have produced inconsistent findings. We aimed to study the association between the rs7566605 SNP with obesity and other metabolic parameters in Malaysian Malays. Anthropometric and obesity-related metabolic parameters and DNA samples were collected. We genotyped the rs7566605 polymorphism in 672 subjects using real-time polymerase chain reaction. No significant associations were found between the rs7566605 tagging SNP of INSIG2 with obesity or other metabolic parameters in the Malaysian Malay population. The INSIG2 rs7566605 SNP may not play a role in the development of obesity-related metabolic traits in Malaysian Malays.
Bibliography for aircraft parameter estimation
NASA Technical Reports Server (NTRS)
Iliff, Kenneth W.; Maine, Richard E.
1986-01-01
An extensive bibliography in the field of aircraft parameter estimation has been compiled. This list contains definitive works related to most aircraft parameter estimation approaches. Theoretical studies as well as practical applications are included. Many of these publications are pertinent to subjects peripherally related to parameter estimation, such as aircraft maneuver design or instrumentation considerations.
ERIC Educational Resources Information Center
Moss, Philippa; Howlin, Patricia; Savage, Sarah; Bolton, Patrick; Rutter, Michael
2015-01-01
Data on psychiatric problems in adults with autism are inconsistent, with estimated rates ranging from around 25% to over 75%. We assessed difficulties related to mental health in 58 adults with autism (10 females, 48 males; mean age 44?years) whom we have followed over four decades. All were of average non-verbal intelligence quotient when…
Two-dimensional advective transport in ground-water flow parameter estimation
Anderman, E.R.; Hill, M.C.; Poeter, E.P.
1996-01-01
Nonlinear regression is useful in ground-water flow parameter estimation, but problems of parameter insensitivity and correlation often exist given commonly available hydraulic-head and head-dependent flow (for example, stream and lake gain or loss) observations. To address this problem, advective-transport observations are added to the ground-water flow, parameter-estimation model MODFLOWP using particle-tracking methods. The resulting model is used to investigate the importance of advective-transport observations relative to head-dependent flow observations when either or both are used in conjunction with hydraulic-head observations in a simulation of the sewage-discharge plume at Otis Air Force Base, Cape Cod, Massachusetts, USA. The analysis procedure for evaluating the probable effect of new observations on the regression results consists of two steps: (1) parameter sensitivities and correlations calculated at initial parameter values are used to assess the model parameterization and expected relative contributions of different types of observations to the regression; and (2) optimal parameter values are estimated by nonlinear regression and evaluated. In the Cape Cod parameter-estimation model, advective-transport observations did not significantly increase the overall parameter sensitivity; however: (1) inclusion of advective-transport observations decreased parameter correlation enough for more unique parameter values to be estimated by the regression; (2) realistic uncertainties in advective-transport observations had a small effect on parameter estimates relative to the precision with which the parameters were estimated; and (3) the regression results and sensitivity analysis provided insight into the dynamics of the ground-water flow system, especially the importance of accurate boundary conditions. 
In this work, advective-transport observations improved the calibration of the model and the estimation of ground-water flow parameters, and use of regression and related techniques produced significant insight into the physical system.
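Step (1) of the procedure above, checking parameter correlations from sensitivities, can be sketched with a toy Jacobian: two nearly collinear head-sensitivity columns give near-perfect parameter correlation until an extra observation row (standing in for an advective-transport observation) is appended. The numbers are invented for illustration.

```python
import numpy as np

def param_correlation(J):
    """Parameter correlation matrix implied by sensitivity matrix J (unit obs. noise)."""
    C = np.linalg.inv(J.T @ J)               # linearized parameter covariance
    d = np.sqrt(np.diag(C))
    return C / np.outer(d, d)

J_heads = np.array([[1.0, 0.98],
                    [2.0, 1.99],
                    [0.5, 0.51]])            # heads alone: almost collinear columns
J_extra = np.vstack([J_heads, [1.0, -1.0]])  # one added transport-like observation

print(round(param_correlation(J_heads)[0, 1], 3))
print(round(param_correlation(J_extra)[0, 1], 3))
```

A correlation magnitude near 1 means the regression cannot separate the two parameters; a single observation with a different sensitivity pattern breaks the collinearity, which is the mechanism behind finding (1) in the abstract.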
Lyden, Hannah; Gimbel, Sarah I; Del Piero, Larissa; Tsai, A Bryna; Sachs, Matthew E; Kaplan, Jonas T; Margolin, Gayla; Saxbe, Darby
2016-01-01
Associations between brain structure and early adversity have been inconsistent in the literature. These inconsistencies may be partially due to methodological differences. Different methods of brain segmentation may produce different results, obscuring the relationship between early adversity and brain volume. Moreover, adolescence is a time of significant brain growth and certain brain areas have distinct rates of development, which may compromise the accuracy of automated segmentation approaches. In the current study, 23 adolescents participated in two waves of a longitudinal study. Family aggression was measured when the youths were 12 years old, and structural scans were acquired an average of 4 years later. Bilateral amygdalae and hippocampi were segmented using three different methods (manual tracing, FSL, and NeuroQuant). The segmentation estimates were compared, and linear regressions were run to assess the relationship between early family aggression exposure and all three volume segmentation estimates. Manual tracing results showed a positive relationship between family aggression and right amygdala volume, whereas FSL segmentation showed negative relationships between family aggression and both the left and right hippocampi. However, results indicate poor overlap between methods, and different associations were found between early family aggression exposure and brain volume depending on the segmentation method used.
Rudolph, Kara E.; Sánchez, Brisa N.; Stuart, Elizabeth A.; Greenberg, Benjamin; Fujishiro, Kaori; Wand, Gary S.; Shrager, Sandi; Seeman, Teresa; Diez Roux, Ana V.; Golden, Sherita H.
2016-01-01
Evidence of the link between job strain and cortisol levels has been inconsistent. This could be due to failure to account for cortisol variability leading to underestimated standard errors. Our objective was to model the relationship between job strain and the whole cortisol curve, accounting for sources of cortisol variability. Our functional mixed-model approach incorporated all available data—18 samples over 3 days—and uncertainty in estimated relationships. We used employed participants from the Multi-Ethnic Study of Atherosclerosis Stress I Study and data collected between 2002 and 2006. We used propensity score matching on an extensive set of variables to control for sources of confounding. We found that job strain was associated with lower salivary cortisol levels and lower total area under the curve. We found no relationship between job strain and the cortisol awakening response. Our findings differed from those of several previous studies. It is plausible that our results were unique to middle- to older-aged racially, ethnically, and occupationally diverse adults and were therefore not inconsistent with previous research among younger, mostly white samples. However, it is also plausible that previous findings were influenced by residual confounding and failure to propagate uncertainty (i.e., account for the multiple sources of variability) in estimating cortisol features. PMID:26905339
Advances in parameter estimation techniques applied to flexible structures
NASA Technical Reports Server (NTRS)
Maben, Egbert; Zimmerman, David C.
1994-01-01
In this work, various parameter estimation techniques are investigated in the context of structural system identification utilizing distributed parameter models and 'measured' time-domain data. Distributed parameter models are formulated using the PDEMOD software developed by Taylor. Enhancements made to PDEMOD for this work include the following: (1) a Wittrick-Williams based root solving algorithm; (2) a time simulation capability; and (3) various parameter estimation algorithms. The parameter estimations schemes will be contrasted using the NASA Mini-Mast as the focus structure.
Impact of the time scale of model sensitivity response on coupled model parameter estimation
NASA Astrophysics Data System (ADS)
Liu, Chang; Zhang, Shaoqing; Li, Shan; Liu, Zhengyu
2017-11-01
That a model has sensitivity responses to parameter uncertainties is a key concept in implementing model parameter estimation using filtering theory and methodology. Depending on the nature of associated physics and characteristic variability of the fluid in a coupled system, the response time scales of a model to parameters can be different, from hourly to decadal. Unlike state estimation, where the update frequency is usually linked with observational frequency, the update frequency for parameter estimation must be associated with the time scale of the model sensitivity response to the parameter being estimated. Here, with a simple coupled model, the impact of model sensitivity response time scales on coupled model parameter estimation is studied. The model includes characteristic synoptic to decadal scales by coupling a long-term varying deep ocean with a slow-varying upper ocean forced by a chaotic atmosphere. Results show that, using the update frequency determined by the model sensitivity response time scale, both the reliability and quality of parameter estimation can be improved significantly, and thus the estimated parameters make the model more consistent with the observation. These simple model results provide a guideline for when real observations are used to optimize the parameters in a coupled general circulation model for improving climate analysis and prediction initialization.
Enhanced RGB-D Mapping Method for Detailed 3D Indoor and Outdoor Modeling
Tang, Shengjun; Zhu, Qing; Chen, Wu; Darwish, Walid; Wu, Bo; Hu, Han; Chen, Min
2016-01-01
RGB-D sensors (sensors with an RGB camera and a depth camera) are novel sensing systems that capture RGB images along with pixel-wise depth information. Although they are widely used in various applications, RGB-D sensors have significant drawbacks with respect to 3D dense mapping, including limited measurement ranges (e.g., within 3 m) and depth-measurement errors that increase with distance from the sensor. In this paper, we present a novel approach to geometrically integrate the depth scene and the RGB scene to enlarge the measurement distance of RGB-D sensors and enrich the details of the model generated from depth images. First, precise calibration for RGB-D sensors is introduced. In addition to the calibration of internal and external parameters for both the IR camera and the RGB camera, the relative pose between the RGB camera and the IR camera is also calibrated. Second, to ensure the pose accuracy of the RGB images, a refined rejection method for false feature matches is introduced, combining the depth information and initial camera poses between frames of the RGB-D sensor. Then, a global optimization model is used to improve the accuracy of the camera poses, decreasing the inconsistencies between the depth frames in advance. To eliminate the geometric inconsistencies between the RGB scene and the depth scene, the scale ambiguity problem encountered during pose estimation with RGB image sequences is resolved by integrating the depth and visual information, and a robust rigid-transformation recovery method is developed to register the RGB scene to the depth scene. The benefit of the proposed joint optimization method is first evaluated with publicly available benchmark datasets collected with Kinect. Then, the proposed method is examined in tests with two sets of datasets collected in both outdoor and indoor environments. The experimental results demonstrate the feasibility and robustness of the proposed method. PMID:27690028
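The rigid-transformation recovery step mentioned above can be sketched with the standard Kabsch/Procrustes solution: given matched 3D points from the RGB-derived scene and the depth scene, it returns the rotation and translation minimizing the registration residual. The paper's robust variant adds outlier handling; the point clouds below are synthetic.

```python
import numpy as np

def kabsch(P, Q):
    """Return R, t such that R @ P_i + t best matches Q_i (P, Q are N x 3)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                # cross-covariance of centered clouds
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cq - R @ cp

# synthetic check: rotate/translate a point cloud, then recover the motion
rng = np.random.default_rng(4)
P = rng.normal(size=(30, 3))
angle = 0.3
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
t_true = np.array([0.5, -1.0, 2.0])
Q = P @ R_true.T + t_true

R_hat, t_hat = kabsch(P, Q)
print(np.allclose(R_hat, R_true), np.allclose(t_hat, t_true))
```

With noise-free correspondences the recovery is exact; in practice the depth-aided match rejection described in the abstract is what keeps the correspondences clean enough for this closed-form step to work.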
Chin, Calvin W L; Khaw, Hwan J; Luo, Elton; Tan, Shuwei; White, Audrey C; Newby, David E; Dweck, Marc R
2014-09-01
Discordance between small aortic valve area (AVA; < 1.0 cm(2)) and low mean pressure gradient (MPG; < 40 mm Hg) affects a third of patients with moderate or severe aortic stenosis (AS). We hypothesized that this is largely due to inaccurate echocardiographic measurements of the left ventricular outflow tract area (LVOTarea) and stroke volume alongside inconsistencies in recommended thresholds. One hundred thirty-three patients with mild to severe AS and 33 control individuals underwent comprehensive echocardiography and cardiovascular magnetic resonance imaging (MRI). Stroke volume and LVOTarea were calculated using echocardiography and MRI, and the effects on AVA estimation were assessed. The relationship between AVA and MPG measurements was then modelled with nonlinear regression and consistent thresholds for these parameters calculated. Finally the effect of these modified AVA measurements and novel thresholds on the number of patients with small-area low-gradient AS was investigated. Compared with MRI, echocardiography underestimated LVOTarea (n = 40; -0.7 cm(2); 95% confidence interval [CI], -2.6 to 1.3), stroke volumes (-6.5 mL/m(2); 95% CI, -28.9 to 16.0) and consequently, AVA (-0.23 cm(2); 95% CI, -1.01 to 0.59). Moreover, an AVA of 1.0 cm(2) corresponded to MPG of 24 mm Hg based on echocardiographic measurements and 37 mm Hg after correction with MRI-derived stroke volumes. Based on conventional measures, 56 patients had discordant small-area low-gradient AS. Using MRI-derived stroke volumes and the revised thresholds, a 48% reduction in discordance was observed (n = 29). Echocardiography underestimated LVOTarea, stroke volume, and therefore AVA, compared with MRI. The thresholds based on current guidelines were also inconsistent. In combination, these factors explain > 40% of patients with discordant small-area low-gradient AS. Copyright © 2014 Canadian Cardiovascular Society. Published by Elsevier Inc. All rights reserved.
Gutierrez, Eric; Quinn, Daniel B; Chin, Diana D; Lentink, David
2016-12-06
There are three common methods for calculating the lift generated by a flying animal based on the measured airflow in the wake. However, these methods might not be accurate according to computational and robot-based studies of flapping wings. Here we test this hypothesis for the first time for a slowly flying Pacific parrotlet in still air using stereo particle image velocimetry recorded at 1000 Hz. The bird was trained to fly between two perches through a laser sheet wearing laser safety goggles. We found that the wingtip vortices generated during mid-downstroke advected down and broke up quickly, contradicting the frozen turbulence hypothesis typically assumed in animal flight experiments. The quasi-steady lift at mid-downstroke was estimated based on the velocity field by applying the widely used Kutta-Joukowski theorem, vortex ring model, and actuator disk model. The calculated lift was found to be sensitive to the applied model and its different parameters, including vortex span and distance between the bird and laser sheet, rendering these three accepted ways of calculating weight support inconsistent. The three models predict different aerodynamic force values mid-downstroke compared to independent direct measurements with an aerodynamic force platform that we had available for the same species flying over a similar distance. Whereas the lift predictions of the Kutta-Joukowski theorem and the vortex ring model stayed relatively constant despite vortex breakdown, their values were too low. In contrast, the actuator disk model predicted lift reasonably accurately before vortex breakdown, but predicted almost no lift during and after vortex breakdown. Some of these limitations might be better understood, and partially reconciled, if future animal flight studies report lift calculations based on all three quasi-steady lift models instead.
This would also enable much needed meta studies of animal flight to derive bioinspired design principles for quasi-steady lift generation with flapping wings.
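The three quasi-steady lift models being compared reduce to simple formulas; a hedged sketch is below, in their textbook forms (all numbers are illustrative stand-ins, not the parrotlet measurements, and the exact parameterizations used in the study may differ):

```python
# Textbook forms of the three quasi-steady lift models compared in the study.
RHO = 1.225  # air density, kg/m^3

def lift_kutta_joukowski(circulation, speed, span):
    """Kutta-Joukowski: L = rho * U * Gamma * b, with Gamma from the wake."""
    return RHO * speed * circulation * span

def lift_vortex_ring(circulation, ring_area, stroke_period):
    """Vortex ring: impulse I = rho * Gamma * A shed once per stroke period."""
    return RHO * circulation * ring_area / stroke_period

def lift_actuator_disk(induced_velocity, disk_area):
    """Actuator disk (momentum jet): L = 2 * rho * A * v_i^2."""
    return 2.0 * RHO * disk_area * induced_velocity ** 2
```

Each model depends on a different wake quantity (circulation, ring geometry, or induced velocity), which is why errors in vortex span or sheet distance propagate into the three estimates differently.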
Enhanced RGB-D Mapping Method for Detailed 3D Indoor and Outdoor Modeling.
Tang, Shengjun; Zhu, Qing; Chen, Wu; Darwish, Walid; Wu, Bo; Hu, Han; Chen, Min
2016-09-27
RGB-D sensors (sensors with an RGB camera and a depth camera) are novel sensing systems that capture RGB images along with pixel-wise depth information. Although widely used in various applications, RGB-D sensors have significant drawbacks for 3D dense mapping, including limited measurement ranges (e.g., within 3 m) and depth errors that increase with distance from the sensor. In this paper, we present a novel approach that geometrically integrates the depth scene and the RGB scene to enlarge the measurement distance of RGB-D sensors and enrich the details of the model generated from depth images. First, precise calibration of RGB-D sensors is introduced. In addition to the calibration of the internal and external parameters of both the IR camera and the RGB camera, the relative pose between the two cameras is also calibrated. Second, to ensure the pose accuracy of the RGB images, a refined method for rejecting false feature matches is introduced, combining the depth information with the initial camera poses between frames of the RGB-D sensor. A global optimization model is then used to improve the accuracy of the camera poses, decreasing the inconsistencies between the depth frames in advance. To eliminate the geometric inconsistencies between the RGB scene and the depth scene, the scale ambiguity encountered during pose estimation with RGB image sequences is resolved by integrating the depth and visual information, and a robust rigid-transformation recovery method is developed to register the RGB scene to the depth scene. The benefit of the proposed joint optimization method is first evaluated with the publicly available benchmark datasets collected with Kinect. The proposed method is then examined with two sets of datasets collected in outdoor and indoor environments. The experimental results demonstrate the feasibility and robustness of the proposed method.
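The depth-camera back-projection that such mapping pipelines rest on can be sketched in a few lines; the intrinsic values below (fx, fy, cx, cy) are hypothetical placeholders, not the paper's calibration results:

```python
# Pinhole back-projection: pixel (u, v) with metric depth z maps to a 3-D
# point through the calibrated intrinsics of the depth (IR) camera.
def backproject(u, v, z, fx, fy, cx, cy):
    """Return the camera-frame 3-D point for pixel (u, v) at depth z."""
    return ((u - cx) * z / fx, (v - cy) * z / fy, z)

# Hypothetical Kinect-like intrinsics; a pixel near the principal point
# at 2 m depth back-projects almost straight along the optical axis.
pt = backproject(320, 240, 2.0, 525.0, 525.0, 319.5, 239.5)
```

Since the x and y coordinates scale with z, any depth error is amplified with distance, which is the drawback motivating the joint RGB-depth optimization above.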
A Systematic Approach for Model-Based Aircraft Engine Performance Estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Garg, Sanjay
2010-01-01
A requirement for effective aircraft engine performance estimation is the ability to account for engine degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. This paper presents a linear point design methodology for minimizing the degradation-induced error in model-based aircraft engine performance estimation applications. The technique specifically focuses on the underdetermined estimation problem, where there are more unknown health parameters than available sensor measurements. A condition for Kalman filter-based estimation is that the number of health parameters estimated cannot exceed the number of sensed measurements. In this paper, the estimated health parameter vector will be replaced by a reduced order tuner vector whose dimension is equivalent to the sensed measurement vector. The reduced order tuner vector is systematically selected to minimize the theoretical mean squared estimation error of a maximum a posteriori estimator formulation. This paper derives theoretical estimation errors at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the estimation accuracy achieved through conventional maximum a posteriori and Kalman filter estimation approaches. Maximum a posteriori estimation results demonstrate that reduced order tuning parameter vectors can be found that approximate the accuracy of estimating all health parameters directly. Kalman filter estimation results based on the same reduced order tuning parameter vectors demonstrate that significantly improved estimation accuracy can be achieved over the conventional approach of selecting a subset of health parameters to serve as the tuner vector. 
However, additional development is necessary to fully extend the methodology to Kalman filter-based estimation applications.
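The underdetermined estimation problem described above can be illustrated with a scalar-measurement maximum a posteriori (MAP) update; the prior covariance, sensitivities, and residual below are illustrative numbers, not the engine model's actual values:

```python
# MAP estimate with more unknowns (2 health parameters) than measurements (1):
# xhat = P H^T (H P H^T + R)^-1 y, with a zero-mean prior of covariance P.
P = [[1.0, 0.0], [0.0, 1.0]]   # prior covariance of the health parameters
H = [0.8, 0.5]                 # sensitivity of the one measurement to each parameter
R = 0.01                       # measurement noise variance
y = 1.0                       # observed residual

# H P H^T + R is a scalar here because there is a single measurement
s = sum(H[i] * sum(P[i][j] * H[j] for j in range(2)) for i in range(2)) + R
xhat = [sum(P[i][j] * H[j] for j in range(2)) * y / s for i in range(2)]
# The estimate lies along P*H^T: one measurement cannot separate the two
# parameters, which is why a reduced-order tuner vector must be chosen.
```

This is the structure the paper's tuner selection exploits: with fewer sensors than health parameters, only certain linear combinations of parameters are observable, and the tuner vector is picked to minimize the resulting mean squared error.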
Improved Estimates of Thermodynamic Parameters
NASA Technical Reports Server (NTRS)
Lawson, D. D.
1982-01-01
Techniques refined for estimating heat of vaporization and other parameters from molecular structure. Using parabolic equation with three adjustable parameters, heat of vaporization can be used to estimate boiling point, and vice versa. Boiling points and vapor pressures for some nonpolar liquids were estimated by improved method and compared with previously reported values. Technique for estimating thermodynamic parameters should make it easier for engineers to choose among candidate heat-exchange fluids for thermochemical cycles.
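A much cruder, structure-free cousin of the correlation described above is Trouton's rule, which ties heat of vaporization to boiling point with a single constant; the sketch below uses that classic rule, not the paper's parabolic three-parameter equation:

```python
# Trouton's rule: dHvap ~ 88 J/(mol*K) * Tb for many nonpolar liquids.
# Hydrogen-bonded liquids (water, alcohols) deviate substantially.
TROUTON = 88.0  # J/(mol*K), approximate

def dhvap_from_boiling_point(tb_kelvin):
    """Estimate heat of vaporization (J/mol) from the normal boiling point."""
    return TROUTON * tb_kelvin

def boiling_point_from_dhvap(dhvap_j_mol):
    """Invert the rule: estimate the boiling point (K) from dHvap."""
    return dhvap_j_mol / TROUTON

# Benzene boils near 353 K -> predicted dHvap ~ 31 kJ/mol (measured ~30.7)
```

The same two-way use (boiling point from heat of vaporization and vice versa) is what the refined parabolic technique provides with better accuracy.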
Estimating Convection Parameters in the GFDL CM2.1 Model Using Ensemble Data Assimilation
NASA Astrophysics Data System (ADS)
Li, Shan; Zhang, Shaoqing; Liu, Zhengyu; Lu, Lv; Zhu, Jiang; Zhang, Xuefeng; Wu, Xinrong; Zhao, Ming; Vecchi, Gabriel A.; Zhang, Rong-Hua; Lin, Xiaopei
2018-04-01
Parametric uncertainty in convection parameterization is one major source of model errors that cause model climate drift. Convection parameter tuning has been widely studied in atmospheric models to help mitigate the problem. However, in a fully coupled general circulation model (CGCM), convection parameters which impact the ocean as well as the climate simulation may have different optimal values. This study explores the possibility of estimating convection parameters with an ensemble coupled data assimilation method in a CGCM. Impacts of the convection parameter estimation on climate analysis and forecast are analyzed. In a twin experiment framework, five convection parameters in the GFDL coupled model CM2.1 are estimated individually and simultaneously under both perfect and imperfect model regimes. Results show that the ensemble data assimilation method can help reduce the bias in convection parameters. With estimated convection parameters, the analyses and forecasts for both the atmosphere and the ocean are generally improved. It is also found that information in low latitudes is relatively more important for estimating convection parameters. This study further suggests that when important parameters in appropriate physical parameterizations are identified, incorporating their estimation into traditional ensemble data assimilation procedure could improve the final analysis and climate prediction.
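The ensemble parameter-estimation step can be sketched with a toy scalar problem; the linear "model" and all numbers below are illustrative stand-ins, not CM2.1 physics:

```python
import random

# Toy ensemble parameter estimation: a scalar "convection parameter" is
# corrected through the ensemble covariance between the parameter and a
# model-predicted observable, as in ensemble coupled data assimilation.
random.seed(1)
TRUE_PARAM = 2.0

def model(param):
    return 3.0 * param  # stand-in for the observable the model predicts

# Biased prior ensemble, centered 1.0 too high
ens = [TRUE_PARAM + 1.0 + random.gauss(0.0, 0.5) for _ in range(200)]
obs, obs_var = model(TRUE_PARAM), 0.1
prior_mean = sum(ens) / len(ens)

pred = [model(p) for p in ens]
pm, ym = prior_mean, sum(pred) / len(pred)
n = len(ens)
cov_py = sum((p - pm) * (y - ym) for p, y in zip(ens, pred)) / (n - 1)
var_y = sum((y - ym) ** 2 for y in pred) / (n - 1)
gain = cov_py / (var_y + obs_var)            # Kalman gain for the parameter
ens = [p + gain * (obs - y) for p, y in zip(ens, pred)]
post_mean = sum(ens) / len(ens)              # pulled toward TRUE_PARAM
```

The update uses only ensemble statistics, which is why the same machinery that assimilates state variables can also reduce the bias in model parameters.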
Wolfsgruber, Steffen; Kleineidam, Luca; Wagner, Michael; Mösch, Edelgard; Bickel, Horst; Lühmann, Dagmar; Ernst, Annette; Wiese, Birgitt; Steinmann, Susanne; König, Hans-Helmut; Brettschneider, Christian; Luck, Tobias; Stein, Janine; Weyerer, Siegfried; Werle, Jochen; Pentzek, Michael; Fuchs, Angela; Maier, Wolfgang; Scherer, Martin; Riedel-Heller, Steffi G; Jessen, Frank
2016-10-04
It is unknown whether longitudinal stability versus instability in subjective cognitive decline (SCD) is a modifying factor of the association between SCD and risk of incident Alzheimer's disease (AD) dementia. We tested the modifying role of temporal stability of the SCD report on AD dementia risk in cognitively normal elderly individuals. We analyzed data of 1,990 cognitively normal participants from the longitudinal AgeCoDe Study. We assessed SCD with/without associated worries both at baseline and at the first follow-up 18 months later. Participants were then classified as (a) controls (CO, no SCD at either baseline or follow-up 1, n = 613), (b) inconsistent SCD (SCD reported only at baseline or only at follow-up 1, n = 637), (c) consistent SCD without or with inconsistent worries (n = 610), or (d) consistent SCD with worries (n = 130). We estimated incident AD dementia risk over up to 6 years for each group with Cox proportional hazards regression analyses adjusted for age, gender, education, ApoE4 status, and depression. Compared to CO, inconsistent SCD was not associated with increased risk of incident AD dementia. In contrast, risk was doubled in the group of consistent SCD without or with inconsistent worries, and almost 4-fold in the group of consistent SCD with worries. These results were replicated when using follow-up 1 to follow-up 2 response patterns for group definition. These findings suggest that longitudinal stability versus instability is an important modifying factor of the association between SCD and AD dementia risk. Worrisome SCD that is also consistently reported over time is associated with greatly increased risk of AD dementia.
Joint Multi-Fiber NODDI Parameter Estimation and Tractography Using the Unscented Information Filter
Reddy, Chinthala P.; Rathi, Yogesh
2016-01-01
Tracing white matter fiber bundles is an integral part of analyzing brain connectivity. An accurate estimate of the underlying tissue parameters is also paramount in several neuroscience applications. In this work, we propose to use a joint fiber model estimation and tractography algorithm that uses the NODDI (neurite orientation dispersion diffusion imaging) model to estimate fiber orientation dispersion consistently and smoothly along the fiber tracts along with estimating the intracellular and extracellular volume fractions from the diffusion signal. While the NODDI model has been used in earlier works to estimate the microstructural parameters at each voxel independently, for the first time, we propose to integrate it into a tractography framework. We extend this framework to estimate the NODDI parameters for two crossing fibers, which is imperative to trace fiber bundles through crossings as well as to estimate the microstructural parameters for each fiber bundle separately. We propose to use the unscented information filter (UIF) to accurately estimate the model parameters and perform tractography. The proposed approach has significant computational performance improvements as well as numerical robustness over the unscented Kalman filter (UKF). Our method not only estimates the confidence in the estimated parameters via the covariance matrix, but also provides the Fisher-information matrix of the state variables (model parameters), which can be quite useful to measure model complexity. Results from in-vivo human brain data sets demonstrate the ability of our algorithm to trace through crossing fiber regions, while estimating orientation dispersion and other biophysical model parameters in a consistent manner along the tracts. PMID:27147956
Real-Time Parameter Estimation in the Frequency Domain
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
2000-01-01
A method for real-time estimation of parameters in a linear dynamic state-space model was developed and studied. The application is aircraft dynamic model parameter estimation from measured data in flight. Equation error in the frequency domain was used with a recursive Fourier transform for the real-time data analysis. Linear and nonlinear simulation examples and flight test data from the F-18 High Alpha Research Vehicle were used to demonstrate that the technique produces accurate model parameter estimates with appropriate error bounds. Parameter estimates converged in less than one cycle of the dominant dynamic mode, using no a priori information, with control surface inputs measured in flight during ordinary piloted maneuvers. The real-time parameter estimation method has low computational requirements and could be implemented in real time.
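The recursive Fourier transform at the heart of this method updates a running transform at a fixed set of analysis frequencies as each time sample arrives, so no data buffering or full FFT is needed; a minimal sketch, assuming a simple rectangular-rule accumulation:

```python
import cmath
import math

def make_recursive_dft(freqs_hz, dt):
    """Return an update function that accumulates the finite Fourier
    transform of a sampled signal at the given frequencies, one sample
    at a time (rectangular-rule integration)."""
    X = [0j] * len(freqs_hz)
    t = [0.0]  # mutable current time
    def update(x_n):
        for k, f in enumerate(freqs_hz):
            X[k] += x_n * cmath.exp(-2j * math.pi * f * t[0]) * dt
        t[0] += dt
        return X
    return update

# Feed one second of a 2 Hz sinusoid at 100 Hz sampling: energy accumulates
# in the 2 Hz bin only (the finite transform there tends to -0.5j).
dt = 0.01
update = make_recursive_dft([1.0, 2.0, 3.0], dt)
for n in range(100):
    X = update(math.sin(2 * math.pi * 2.0 * n * dt))
```

In the flight application, transforms of the states and inputs maintained this way feed the frequency-domain equation-error least squares at each update, which is what keeps the computational cost low.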
Linear Parameter Varying Control Synthesis for Actuator Failure, Based on Estimated Parameter
NASA Technical Reports Server (NTRS)
Shin, Jong-Yeob; Wu, N. Eva; Belcastro, Christine
2002-01-01
The design of a linear parameter varying (LPV) controller for an aircraft under actuator failure is presented. The controller synthesis for actuator failure cases is formulated as linear matrix inequality (LMI) optimizations based on an estimated failure parameter with pre-defined estimation error bounds. The inherent conservatism of the LPV control synthesis methodology is reduced using a scaling factor on the uncertainty block that represents estimated parameter uncertainties. The fault parameter is estimated using a two-stage Kalman filter. Simulation results of the designed LPV controller for a HiMAT (Highly Maneuverable Aircraft Technology) vehicle with the on-line estimator show that the desired performance and robustness objectives are achieved for actuator failure cases.
Multi-objective optimization in quantum parameter estimation
NASA Astrophysics Data System (ADS)
Gong, BeiLi; Cui, Wei
2018-04-01
We investigate quantum parameter estimation based on linear and Kerr-type nonlinear controls in an open quantum system, treating the dissipation rate as the unknown parameter. We show that while the precision of parameter estimation is improved, the control usually introduces a significant deformation of the system state. We therefore propose a multi-objective model to optimize the two conflicting objectives: (1) maximizing the Fisher information to improve the parameter estimation precision, and (2) minimizing the deformation of the system state to maintain its fidelity. Finally, simulations of a simplified ɛ-constrained model demonstrate the feasibility of Hamiltonian control in improving the precision of quantum parameter estimation.
Cooley, Richard L.
1983-01-01
This paper investigates factors influencing the degree of improvement in estimates of parameters of a nonlinear regression groundwater flow model by incorporating prior information of unknown reliability. Consideration of expected behavior of the regression solutions and results of a hypothetical modeling problem lead to several general conclusions. First, if the parameters are properly scaled, linearized expressions for the mean square error (MSE) in parameter estimates of a nonlinear model will often behave very nearly as if the model were linear. Second, by using prior information, the MSE in properly scaled parameters can be reduced greatly over the MSE of ordinary least squares estimates of parameters. Third, plots of estimated MSE and the estimated standard deviation of MSE versus an auxiliary parameter (the ridge parameter) specifying the degree of influence of the prior information on regression results can help determine the potential for improvement of parameter estimates. Fourth, proposed criteria can be used to make appropriate choices for the ridge parameter and another parameter expressing degree of overall bias in the prior information. Results of a case study of Truckee Meadows, Reno-Sparks area, Washoe County, Nevada, conform closely to the results of the hypothetical problem. In the Truckee Meadows case, incorporation of prior information did not greatly change the parameter estimates from those obtained by ordinary least squares. However, the analysis showed that both sets of estimates are more reliable than suggested by the standard errors from ordinary least squares.
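The ridge trade-off discussed above can be shown in a one-parameter sketch: prior information with weight lambda shrinks the least-squares estimate toward a prior mean, and the MSE can drop below the ordinary-least-squares variance when the prior is not too biased. This is an illustrative scalar case, not the groundwater model:

```python
# Scalar ridge estimate: minimize  sum (y - b*x)^2 + lam*(b - prior_mean)^2
# over the slope b.  lam plays the role of the ridge parameter controlling
# how strongly the prior information influences the regression result.
def ridge_estimate(xs, ys, lam, prior_mean):
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    return (sxy + lam * prior_mean) / (sxx + lam)

xs = [1.0, 2.0, 3.0]
ys = [2.1, 3.9, 6.2]                          # data generated near slope 2
b_ols = ridge_estimate(xs, ys, 0.0, 0.0)      # ordinary least squares
b_ridge = ridge_estimate(xs, ys, 5.0, 2.0)    # shrunk toward the prior mean 2
```

Plotting estimates like `b_ridge` against lambda is the kind of diagnostic the paper proposes for choosing how much influence to give the prior information.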
DOE Office of Scientific and Technical Information (OSTI.GOV)
Metoki, Naoto; Yamauchi, Hiroki; Matsuda, Masaaki
2018-05-17
Neutron polarization analysis has been carried out in order to clarify the magnetic structures of the multiple-order-parameter f-electron system NdB4. We confirmed the noncollinear "all-in all-out" structure (Γ_4) of the in-plane moment, in good agreement with our previous neutron powder diffraction study. We found that the magnetic moment along the c-axis, m_c, shows a diagonally antiferromagnetic structure (Γ_10), inconsistent with the previously reported "vortex" structure (Γ_2). The microscopic mixture of these two structures with q_0 = (0,0,0) appears in phase II and remains stable in phases III and IV, where an incommensurate modulation coexists. The unusual magnetic ordering is phenomenologically understood via Landau theory with the primary order parameter Γ_4 coupled to the higher-order secondary order parameter Γ_10. The magnetic moments were estimated to be 1.8 ± 0.2 μ_B and 0.2 ± 0.05 μ_B at T = 7.5 K for Γ_4 and Γ_10, respectively. We also found a long-period incommensurate modulation of the q_1 = (0,0,1/2) antiferromagnetic structure of m_c with propagation vectors q_s1 = (0.14,0.14,0.1) and q_s2 = (0.2,0,0.1) in phases III and IV, respectively. The amplitude of the sinusoidal modulation was about m_c = 1.0 ± 0.2 μ_B at T = 1.5 K. The local (0,0,1/2) structure consists of in-plane ferromagnetic and out-of-plane antiferromagnetic coupling of m_c, opposite to the coexisting Γ_10. The m_c of Γ_10 is significantly enhanced up to 0.6 μ_B at T = 1.5 K, accompanied by the incommensurate modulations. As a result, the Landau phenomenological approach indicates that higher-order magnetic and/or multipole interactions based on the pseudoquartet f-electron state play important roles.
Timing and climate forcing of volcanic eruptions for the past 2,500 years.
Sigl, M; Winstrup, M; McConnell, J R; Welten, K C; Plunkett, G; Ludlow, F; Büntgen, U; Caffee, M; Chellman, N; Dahl-Jensen, D; Fischer, H; Kipfstuhl, S; Kostick, C; Maselli, O J; Mekhaldi, F; Mulvaney, R; Muscheler, R; Pasteris, D R; Pilcher, J R; Salzer, M; Schüpbach, S; Steffensen, J P; Vinther, B M; Woodruff, T E
2015-07-30
Volcanic eruptions contribute to climate variability, but quantifying these contributions has been limited by inconsistencies in the timing of atmospheric volcanic aerosol loading determined from ice cores and subsequent cooling from climate proxies such as tree rings. Here we resolve these inconsistencies and show that large eruptions in the tropics and high latitudes were primary drivers of interannual-to-decadal temperature variability in the Northern Hemisphere during the past 2,500 years. Our results are based on new records of atmospheric aerosol loading developed from high-resolution, multi-parameter measurements from an array of Greenland and Antarctic ice cores as well as distinctive age markers to constrain chronologies. Overall, cooling was proportional to the magnitude of volcanic forcing and persisted for up to ten years after some of the largest eruptive episodes. Our revised timescale more firmly implicates volcanic eruptions as catalysts in the major sixth-century pandemics, famines, and socioeconomic disruptions in Eurasia and Mesoamerica while allowing multi-millennium quantification of climate response to volcanic forcing.
A new estimate of average dipole field strength for the last five million years
NASA Astrophysics Data System (ADS)
Cromwell, G.; Tauxe, L.; Halldorsson, S. A.
2013-12-01
The Earth's ancient magnetic field can be approximated by a geocentric axial dipole (GAD), for which the average field intensity is twice as strong at the poles as at the equator. The present-day geomagnetic field, and some global paleointensity datasets, support the GAD hypothesis with a virtual axial dipole moment (VADM) of about 80 ZAm². Significant departures from GAD for 0-5 Ma are found in Antarctica and Iceland, where paleointensity experiments on massive flows (Antarctica) (1) and volcanic glasses (Iceland) produce average VADM estimates of 41.4 ZAm² and 59.5 ZAm², respectively. These combined intensities are much closer to a lower estimate for long-term dipole field strength, 50 ZAm² (2), and to some other estimates of average VADM based on paleointensities strictly from volcanic glasses. Proposed explanations for the observed non-GAD behavior, from otherwise high-quality paleointensity results, include incomplete temporal sampling, effects from the tangent cylinder, and hemispheric asymmetry. Differences in estimates of average magnetic field strength likely arise from inconsistent selection protocols and experimental methodologies. We address these possible biases and estimate the average dipole field strength for the last five million years by compiling measurement-level data from IZZI-modified paleointensity experiments on lava flows around the globe (including new results from Iceland and the HSDP-2 Hawaii drill core). We use the Thellier GUI paleointensity interpreter (3) to apply objective criteria to all specimens, ensuring consistency between sites. Specimen-level selection criteria are determined from a recent paleointensity investigation of modern Hawaiian lava flows where the expected magnetic field strength was accurately recovered when following certain selection parameters.
Our new estimate of average dipole field strength for the last five million years incorporates multiple paleointensity studies on lava flows with diverse global and temporal distributions, and objectively constrains site level estimates by applying uniform selection requirements on measurement level data. (1) Lawrence, K.P., L. Tauxe, H. Staudigel, C.G. Constable, A. Koppers, W. McIntosh, C.L. Johnson, Paleomagnetic field properties at high southern latitude, Geochemistry Geophysics Geosystems, 10, 2009. (2) Selkin, P.A., L. Tauxe, Long-term variations in palaeointensity, Phil. Trans. R. Soc. Lond., 358, 1065-1088, 2000. (3) Shaar, R., L. Tauxe, Thellier GUI: An integrated tool for analyzing paleointensity data from Thellier-type experiments, Geochemistry Geophysics Geosystems, 14, 2013
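The VADM conversion underlying these comparisons follows from the standard dipole-field formula; a sketch, assuming the usual geomagnetic-latitude form (illustrative input intensity, not a measured site value):

```python
import math

# Dipole field at geomagnetic latitude lat has intensity
#   B = (mu0 * m / (4*pi*R^3)) * sqrt(1 + 3*sin(lat)^2),
# so the (virtual axial) dipole moment recovered from a paleointensity is
#   m = 4*pi*R^3*B / (mu0 * sqrt(1 + 3*sin(lat)^2)).
MU0 = 4e-7 * math.pi   # vacuum permeability, T*m/A
R_EARTH = 6.371e6      # Earth radius, m

def vadm(intensity_tesla, lat_deg):
    """Virtual axial dipole moment (A*m^2) from field intensity and latitude."""
    s = math.sin(math.radians(lat_deg))
    return 4 * math.pi * R_EARTH ** 3 * intensity_tesla / (MU0 * math.sqrt(1 + 3 * s * s))

# An equatorial intensity near 31 uT corresponds to a moment of ~80 ZAm^2,
# and the same moment doubles the intensity at the poles (the GAD factor of 2).
m_eq = vadm(31e-6, 0.0)
```

Expressed this way, the Antarctica and Iceland averages (41.4 and 59.5 ZAm²) are direct statements that the recovered intensities fall well below the ~80 ZAm² GAD expectation.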
Gradient Phonological Inconsistency Affects Vocabulary Learning
ERIC Educational Resources Information Center
Muench, Kristin L.; Creel, Sarah C.
2013-01-01
Learners frequently experience phonologically inconsistent input, such as exposure to multiple accents. Yet, little is known about the consequences of phonological inconsistency for language learning. The current study examines vocabulary acquisition with different degrees of phonological inconsistency, ranging from no inconsistency (e.g., both…
Waller, Niels G; Feuerstahler, Leah
2017-01-01
In this study, we explored item and person parameter recovery of the four-parameter model (4PM) in over 24,000 real, realistic, and idealized data sets. In the first analyses, we fit the 4PM and three alternative models to data from three Minnesota Multiphasic Personality Inventory-Adolescent form factor scales using Bayesian modal estimation (BME). Our results indicated that the 4PM fits these scales better than simpler Item Response Theory (IRT) models. Next, using the parameter estimates from these real data analyses, we estimated 4PM item parameters in 6,000 realistic data sets to establish minimum sample size requirements for accurate item and person parameter recovery. Using a factorial design that crossed discrete levels of item parameters, sample size, and test length, we also fit the 4PM to an additional 18,000 idealized data sets to extend our parameter recovery findings. Our combined results demonstrated that 4PM item parameters and parameter functions (e.g., item response functions) can be accurately estimated using BME in moderate to large samples (N ⩾ 5,000) and person parameters can be accurately estimated in smaller samples (N ⩾ 1,000). In the supplemental files, we report annotated R code that shows how to estimate 4PM item and person parameters in mirt (Chalmers, 2012).
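The 4PM item response function referenced above has the standard four-parameter logistic form, with a lower asymptote c (guessing) and an upper asymptote d < 1 ("slipping"); a minimal sketch:

```python
import math

def p_4pm(theta, a, b, c, d):
    """Four-parameter logistic IRT model:
    P(correct | theta) = c + (d - c) / (1 + exp(-a * (theta - b))),
    where a is discrimination, b difficulty, c the lower and d the upper
    asymptote.  Setting d = 1 recovers the 3PM; c = 0, d = 1 the 2PM."""
    return c + (d - c) / (1.0 + math.exp(-a * (theta - b)))

# With c = 0.1 and d = 0.9 the curve runs from 0.1 up to 0.9;
# at theta = b the probability is exactly midway, 0.5.
p_mid = p_4pm(0.0, 1.5, 0.0, 0.1, 0.9)
```

Estimating the extra asymptotes is what drives the large-sample requirements reported above: c and d are informed only by the tails of the ability distribution.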
Pradhan, Sudeep; Song, Byungjeong; Lee, Jaeyeon; Chae, Jung-Woo; Kim, Kyung Im; Back, Hyun-Moon; Han, Nayoung; Kwon, Kwang-Il; Yun, Hwi-Yeol
2017-12-01
Exploratory preclinical as well as clinical trials may involve a small number of patients, making it difficult to calculate and analyze the pharmacokinetic (PK) parameters, especially if the PK parameters show very high inter-individual variability (IIV). In this study, the performance of the classical first-order conditional estimation with interaction (FOCE-I) method and expectation maximization (EM)-based Markov chain Monte Carlo Bayesian (BAYES) estimation methods was compared for estimating the population parameters and their distributions from data sets having a low number of subjects. In this study, 100 data sets were simulated with eight sampling points for each subject and with six different levels of IIV (5%, 10%, 20%, 30%, 50%, and 80%) in their PK parameter distribution. A stochastic simulation and estimation (SSE) study was performed to simultaneously simulate data sets and estimate the parameters using four different methods: FOCE-I only, BAYES(C) (FOCE-I and BAYES composite method), BAYES(F) (BAYES with all true initial parameters and fixed ω²), and BAYES only. Relative root mean squared error (rRMSE) and relative estimation error (REE) were used to analyze the differences between true and estimated values. A case study was performed with clinical data of theophylline available in the NONMEM distribution media. NONMEM software assisted by Pirana, PsN, and Xpose was used to estimate population PK parameters, and the R program was used to analyze and plot the results. The rRMSE and REE values of all parameter estimates (fixed effect and random effect) showed that all four methods performed equally well at the lower IIV levels, while the FOCE-I method performed better than the other EM-based methods at higher IIV levels (greater than 30%). In general, estimates of random-effect parameters showed significant bias and imprecision, irrespective of the estimation method used and the level of IIV.
Similar performance of the estimation methods was observed with theophylline dataset. The classical FOCE-I method appeared to estimate the PK parameters more reliably than the BAYES method when using a simple model and data containing only a few subjects. EM-based estimation methods can be considered for adapting to the specific needs of a modeling project at later steps of modeling.
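The two accuracy metrics used in this comparison have simple definitions; the sketch below uses the usual percentage forms (illustrative numbers, not the simulated PK estimates):

```python
# Relative estimation error (REE) for a single run, and relative root mean
# squared error (rRMSE) across replicate runs, both in percent of the true value.
def ree(estimate, true):
    return 100.0 * (estimate - true) / true

def rrmse(estimates, true):
    msq = sum((e - true) ** 2 for e in estimates) / len(estimates)
    return 100.0 * msq ** 0.5 / true

# Four hypothetical replicate estimates of a parameter whose true value is 1.0
est = [1.1, 0.9, 1.2, 0.8]
bias_each = [ree(e, 1.0) for e in est]  # per-run signed errors, percent
overall = rrmse(est, 1.0)               # combines bias and spread, ~15.8%
```

REE keeps the sign (bias direction per run), while rRMSE pools bias and variability into one precision summary, which is why both are reported.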
Doebel, Sabine; Rowell, Shaina F.; Koenig, Melissa A.
2016-01-01
The reported research tested the hypothesis that young children detect logical inconsistency in communicative contexts that support the evaluation of speakers’ epistemic reliability. In two experiments (N = 194), 3- to 5-year-olds were presented with two speakers who expressed logically consistent or inconsistent claims. Three-year-olds failed to detect inconsistencies (Experiment 1), 4-year-olds detected inconsistencies when expressed by human speakers but not when read from books, and 5-year-olds detected inconsistencies in both contexts (Experiment 2). In both experiments, children demonstrated skepticism toward testimony from previously inconsistent sources. Executive function and working memory each predicted inconsistency detection. These findings indicate logical inconsistency understanding emerges in early childhood, is supported by social and domain general cognitive skills, and plays a role in adaptive learning from testimony. PMID:27317511
Control system estimation and design for aerospace vehicles
NASA Technical Reports Server (NTRS)
Stefani, R. T.; Williams, T. L.; Yakowitz, S. J.
1972-01-01
The selection of an estimator which is unbiased when applied to structural parameter estimation is discussed. The mathematical relationships for structural parameter estimation are defined. It is shown that a conventional weighted least squares (CWLS) estimate is biased when applied to structural parameter estimation. Two approaches to bias removal are suggested: (1) change the CWLS estimator or (2) change the objective function. The advantages of each approach are analyzed.
NASA Astrophysics Data System (ADS)
Choi, Hon-Chit; Wen, Lingfeng; Eberl, Stefan; Feng, Dagan
2006-03-01
Dynamic Single Photon Emission Computed Tomography (SPECT) has the potential to quantitatively estimate physiological parameters by fitting compartment models to the tracer kinetics. The generalized linear least squares method (GLLS) is an efficient method to estimate unbiased kinetic parameters and parametric images. However, due to the low sensitivity of SPECT, noisy data can cause voxel-wise parameter estimation by GLLS to fail. Fuzzy C-Means (FCM) clustering and a modified FCM, which also utilizes information from the immediately neighboring voxels, are proposed to improve the voxel-wise parameter estimation of GLLS. Monte Carlo simulations were performed to generate dynamic SPECT data with different noise levels, which were processed by general and modified FCM clustering. Parametric images were estimated by Logan and Yokoi graphical analysis and by GLLS. The influx rate (KI) and volume of distribution (Vd) were estimated for the cerebellum, thalamus and frontal cortex. Our results show that (1) FCM reduces the bias and improves the reliability of parameter estimates for noisy data, (2) GLLS provides estimates of micro parameters (KI-k4) as well as macro parameters, such as the volume of distribution (Vd) and binding potentials (BPI and BPII), and (3) FCM clustering incorporating neighboring voxel information does not improve the parameter estimates, but reduces noise in the parametric images. These findings indicate that pre-segmentation with traditional FCM clustering is desirable for generating voxel-wise parametric images with GLLS from dynamic SPECT data.
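As a hedged illustration of the clustering step, here is a plain Fuzzy C-Means implementation in Python; it is not the authors' modified neighborhood-aware variant, and the toy arrays merely stand in for voxel time-activity curves:

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, seed=0):
    """Plain Fuzzy C-Means: returns cluster centers and membership matrix U."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))        # random fuzzy memberships
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # Distances of every sample to every center (small floor avoids 0/0)
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # Standard membership update: u_ki proportional to d_ki^(-2/(m-1))
        U = 1.0 / (d ** (2 / (m - 1)) * np.sum(d ** (-2 / (m - 1)), axis=1, keepdims=True))
    return centers, U

# Toy "time-activity curves": two groups of noisy 4-point signals
X = np.vstack([np.full((10, 4), 1.0), np.full((10, 4), 5.0)]) \
    + np.random.default_rng(1).normal(0.0, 0.1, (20, 4))
centers, U = fuzzy_c_means(X, c=2)
print(np.sort(centers[:, 0]))   # roughly 1.0 and 5.0
```

In a pre-segmentation workflow, the memberships in U would group voxels with similar kinetics before fitting, which is what stabilizes the subsequent GLLS estimates.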
ERIC Educational Resources Information Center
Finch, Holmes; Edwards, Julianne M.
2016-01-01
Standard approaches for estimating item response theory (IRT) model parameters generally work under the assumption that the latent trait being measured by a set of items follows the normal distribution. Estimation of IRT parameters in the presence of nonnormal latent traits has been shown to generate biased person and item parameter estimates. A…
An Integrated Approach for Aircraft Engine Performance Estimation and Fault Diagnostics
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Armstrong, Jeffrey B.
2012-01-01
A Kalman filter-based approach for integrated on-line aircraft engine performance estimation and gas path fault diagnostics is presented. This technique is specifically designed for underdetermined estimation problems where there are more unknown system parameters representing deterioration and faults than available sensor measurements. A previously developed methodology is applied to optimally design a Kalman filter to estimate a vector of tuning parameters, appropriately sized to enable estimation. The estimated tuning parameters can then be transformed into a larger vector of health parameters representing system performance deterioration and fault effects. The results of this study show that basing fault isolation decisions solely on the estimated health parameter vector does not provide ideal results. Furthermore, expanding the number of health parameters to address additional gas path faults causes a decrease in the estimation accuracy of those health parameters representative of turbomachinery performance deterioration. However, improved fault isolation performance is demonstrated through direct analysis of the estimated tuning parameters produced by the Kalman filter. This was found to provide equivalent or superior accuracy compared to the conventional fault isolation approach based on the analysis of sensed engine outputs, while simplifying online implementation requirements. Results from the application of these techniques to an aircraft engine simulation are presented and discussed.
NASA Astrophysics Data System (ADS)
Sedaghat, A.; Bayat, H.; Safari Sinegani, A. A.
2016-03-01
The saturated hydraulic conductivity (Ks) of the soil is one of the main soil physical properties. Indirect estimation of this parameter using pedo-transfer functions (PTFs) has received considerable attention. The purpose of this study was to improve the estimation of Ks using fractal parameters of particle and micro-aggregate size distributions in smectitic soils. In this study, 260 disturbed and undisturbed soil samples were collected from Guilan province in northern Iran. The fractal model of Bird and Perrier was used to compute the fractal parameters of the particle and micro-aggregate size distributions. The PTFs were developed by an artificial neural network (ANN) ensemble to estimate Ks from available soil data and the fractal parameters. Significant correlations were found between Ks and the fractal parameters of particles and micro-aggregates. Estimation of Ks was improved significantly by using fractal parameters of soil micro-aggregates as predictors, but using the geometric mean and geometric standard deviation of particle diameter did not significantly improve Ks estimates. Using the fractal parameters of particles and micro-aggregates simultaneously had the greatest effect on the estimation of Ks. Generally, fractal parameters can be successfully used as input parameters to improve the estimation of Ks in PTFs for smectitic soils. As a result, the ANN ensemble successfully related the fractal parameters of particles and micro-aggregates to Ks.
NASA Astrophysics Data System (ADS)
Shi, Zongyang; Liu, Lihua; Xiao, Pan; Geng, Zhi; Liu, Fubo; Fang, Guangyou
2018-02-01
An ungrounded loop in shallow-subsurface transient electromagnetic surveys can be modeled as a transmission line during the early turn-off stage, which accurately explains the inconsistency of the early turn-off current waveform along the loop. In this paper, the Gauss-Legendre numerical integration method is proposed for the first time to simulate and analyze the transient electromagnetic (TEM) response while accounting for the different early turn-off current waveforms along the loop. During the simulation, the integration node positions along the loop are first determined as the zeros of the Legendre polynomial; the turn-off current at each node position is then simulated using the transfer function of the transmission line. Finally, the total TEM response is calculated using the Gauss-Legendre integration formula. In addition, a comparison between the results obtained with distributed parameters and those generated with lumped parameters is presented. It is found that the TEM responses agree well with each other after the current is completely switched off, while the transient responses during the turn-off stage are completely different. This means that the position dependence of the early turn-off current should be introduced into the forward model when interpreting early response data from shallow TEM detection with an ungrounded loop. Furthermore, TEM response simulations at four geometrically symmetric points show that the early responses at these points are also inconsistent. The influence of the position dependence of the turn-off current on the early response at geometrically symmetric points is of great significance for guiding the layout of survey lines and the transmitter location.
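The quadrature scheme described above can be sketched in a few lines of Python; the per-position response function below is a hypothetical stand-in, since in the paper the turn-off currents at the nodes come from the transmission-line transfer function:

```python
import numpy as np

def gauss_legendre_integral(f, a, b, n=8):
    """n-point Gauss-Legendre quadrature of f over [a, b].

    The nodes are the zeros of the degree-n Legendre polynomial on [-1, 1];
    numpy supplies both the nodes and the weights.
    """
    x, w = np.polynomial.legendre.leggauss(n)
    t = 0.5 * (b - a) * x + 0.5 * (b + a)      # map [-1, 1] -> [a, b]
    return 0.5 * (b - a) * float(np.sum(w * f(t)))

# Hypothetical per-position contribution along a loop of length L (a stand-in
# for the position-dependent early turn-off response):
L = 100.0
total = gauss_legendre_integral(lambda s: np.exp(-s / L), 0.0, L)
print(total)   # close to L * (1 - exp(-1)), about 63.212
```

Because the nodes are Legendre-polynomial zeros, an n-point rule integrates polynomials up to degree 2n−1 exactly, which is why few node positions suffice along the loop.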
Self-reported cognitive inconsistency in older adults.
Vanderhill, Susan; Hultsch, David F; Hunter, Michael A; Strauss, Esther
2010-01-01
Insight into one's own cognitive abilities, or metacognition, has been widely studied in developmental psychology. Relevance to the clinician is high, as memory complaints in older adults show an association with impending dementia, even after controlling for likely confounds. Another candidate marker of impending dementia under study is inconsistency in cognitive performance over short time intervals. Although there has been a recent proliferation of studies of cognitive inconsistency in older adults, to date, no one has examined adults' self-perceptions of cognitive inconsistency. Ninety-four community-dwelling older adults (aged 70-91) were randomly selected from a parent longitudinal study of short-term inconsistency and long-term cognitive change in aging. Participants completed a novel 40-item self-report measure of everyday cognitive inconsistency, including parallel scales indexing perceived inconsistency 5 years ago and at present, yielding measures of past, present, and 5-year change in inconsistency. The questionnaire showed acceptable psychometric characteristics. The sample reported an increase in perceived inconsistency over time. Higher reported present inconsistency and greater 5-year increase in inconsistency were associated with noncognitive (e.g., older age, poorer ADLs, poorer health, higher depression), metacognitive (e.g., poorer self-rated memory) and neuropsychological (e.g., poorer performance and greater 5-year decline in global cognitive status, vocabulary, and memory) measures. Correlations between self-reported inconsistency and neuropsychological performance were attenuated, but largely persisted when self-rated memory and age were controlled. Observed relationships between self-reported inconsistency and measures of neuropsychological (including memory) status and decline suggest that self-perceived inconsistency may be an area of relevance in evaluating older adults for memory disorders.
Estimates of Self, Parental and Partner Multiple Intelligences in Iran: A replication and extension
Furnham, Adrian; Kosari, Afrooz; Swami, Viren
2012-01-01
Two hundred and fifty-eight Iranian university students estimated their own, parents’, and partners’ overall (general) intelligence, and also estimated 13 ‘multiple intelligences’ on a simple, two-page questionnaire which was previously used in many similar studies. In accordance with previous research, men rated themselves higher than women on logical-mathematical, spatial and musical intelligence. There were, however, no sex differences in ratings of parental and partner multiple intelligences, which is inconsistent with the extant literature. Participants also believed that they were more intelligent than their parents and partners, and that their fathers were more intelligent than their mothers. Multiple regressions indicated that participants’ Big Five personality typologies and test experience were significant predictors of self-estimated intelligence. These results are discussed in terms of the cross-cultural literature in the field. Implications of the results are also considered. PMID:22952548
Adaptive Modal Identification for Flutter Suppression Control
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.; Drew, Michael; Swei, Sean S.
2016-01-01
In this paper, we will develop an adaptive modal identification method for identifying the frequencies and damping of a flutter mode based on model-reference adaptive control (MRAC) and least-squares methods. The least-squares parameter estimation will achieve parameter convergence in the presence of persistent excitation whereas the MRAC parameter estimation does not guarantee parameter convergence. Two adaptive flutter suppression control approaches are developed: one based on MRAC and the other based on the least-squares method. The MRAC flutter suppression control is designed as an integral part of the parameter estimation where the feedback signal is used to estimate the modal information. On the other hand, the separation principle of control and estimation is applied to the least-squares method. The least-squares modal identification is used to perform parameter estimation.
Extremes in ecology: Avoiding the misleading effects of sampling variation in summary analyses
Link, W.A.; Sauer, J.R.
1996-01-01
Surveys such as the North American Breeding Bird Survey (BBS) produce large collections of parameter estimates. One's natural inclination when confronted with lists of parameter estimates is to look for the extreme values: in the BBS, these correspond to the species that appear to have the greatest changes in population size through time. Unfortunately, extreme estimates are liable to correspond to the most poorly estimated parameters. Consequently, the most extreme parameters may not match up with the most extreme parameter estimates. Ranking parameter values on the basis of their estimates is a difficult statistical problem. We use data from the BBS and simulations to illustrate the potentially misleading effects of sampling variation on rankings of parameters. We describe empirical Bayes and constrained empirical Bayes procedures which provide partial solutions to the problem of ranking in the presence of sampling variation.
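A minimal sketch of the shrinkage idea behind empirical Bayes ranking follows; the trend estimates and variances are invented, and the method-of-moments between-parameter variance is one simple choice, not necessarily the procedure the authors use:

```python
import numpy as np

def eb_shrink(estimates, variances):
    """Shrink each estimate toward the grand mean in proportion to its variance.

    Imprecisely estimated parameters are pulled in strongly, so they are less
    likely to dominate the extremes of a ranked list.
    """
    estimates = np.asarray(estimates, dtype=float)
    variances = np.asarray(variances, dtype=float)
    grand_mean = np.average(estimates, weights=1.0 / variances)
    # Method-of-moments between-parameter variance, floored at zero
    tau2 = max(np.var(estimates) - np.mean(variances), 0.0)
    weight = tau2 / (tau2 + variances)       # reliability of each raw estimate
    return grand_mean + weight * (estimates - grand_mean)

trends = np.array([7.0, 5.0, 0.0, 0.5])      # hypothetical trend estimates
ses2 = np.array([16.0, 0.25, 0.25, 0.25])    # the largest estimate is the noisiest
shrunk = eb_shrink(trends, ses2)
print(np.argmax(trends), np.argmax(shrunk))  # "most extreme" species changes: 0 -> 1
```

The noisy extreme is pulled below the precisely estimated runner-up, illustrating how ranking on shrunken rather than raw estimates guards against sampling variation.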
NASA Astrophysics Data System (ADS)
Wu, Fang-Xiang; Mu, Lei; Shi, Zhong-Ke
2010-01-01
The models of gene regulatory networks are often derived from the principles of statistical thermodynamics or from Michaelis-Menten kinetics. As a result, the models contain rational reaction rates which are nonlinear in both parameters and states. Estimating parameters that enter a model nonlinearly is challenging, even though many traditional nonlinear parameter estimation methods exist, such as the Gauss-Newton iteration method and its variants. In this article, we develop a two-step method to estimate the parameters in the rational reaction rates of gene regulatory networks via weighted linear least squares. This method takes the special structure of rational reaction rates into consideration: in a rational reaction rate, the numerator and the denominator are each linear in the parameters. By designing a special weight matrix for the linear least squares, parameters in the numerator and the denominator can be estimated by solving two linear least squares problems. The main advantage of the developed method is that it produces analytical solutions for the estimation of parameters in rational reaction rates, which is originally a nonlinear parameter estimation problem. The developed method is applied to a couple of gene regulatory networks. The simulation results show superior performance over the Gauss-Newton method.
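The linearization at the heart of such two-step schemes can be illustrated as follows; this sketch uses ordinary (unweighted) least squares on a single Michaelis-Menten-like rate, not the authors' special weight matrix, and all parameter values are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
a0, a1, b1 = 0.2, 3.0, 0.5                  # "true" parameters for the toy data
x = np.linspace(0.1, 10.0, 50)
v = (a0 + a1 * x) / (1.0 + b1 * x) + rng.normal(0.0, 0.01, x.size)

# Multiplying v = (a0 + a1*x)/(1 + b1*x) through by the denominator gives
#   v = a0 + a1*x - b1*(x*v),
# which is linear in (a0, a1, b1), so a single linear solve estimates all three.
A = np.column_stack([np.ones_like(x), x, -x * v])
theta, *_ = np.linalg.lstsq(A, v, rcond=None)
print(theta)   # close to (0.2, 3.0, 0.5)
```

Because the measured v appears in the regressor x·v, the naive linearized fit is biased at high noise; the weight matrix in the two-step method is designed to compensate for exactly this.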
A new Bayesian recursive technique for parameter estimation
NASA Astrophysics Data System (ADS)
Kaheil, Yasir H.; Gill, M. Kashif; McKee, Mac; Bastidas, Luis
2006-08-01
The performance of any model depends on how well its associated parameters are estimated. In the current application, a localized Bayesian recursive estimation (LOBARE) approach is devised for parameter estimation. The LOBARE methodology is an extension of the Bayesian recursive estimation (BARE) method. It is applied in this paper to two different types of models: an artificial intelligence (AI) model in the form of a support vector machine (SVM) application for forecasting soil moisture and a conceptual rainfall-runoff (CRR) model represented by the Sacramento soil moisture accounting (SAC-SMA) model. Support vector machines, based on statistical learning theory (SLT), represent the modeling task as a quadratic optimization problem and have already been used in various applications in hydrology. They require estimation of three parameters. SAC-SMA is a very well-known model that estimates runoff. It has a 13-dimensional parameter space. In the LOBARE approach presented here, Bayesian inference is used in an iterative fashion to estimate the parameter space that will most likely enclose a best parameter set. This is done by narrowing the sampling space through updating the "parent" bounds based on their fitness. These bounds are actually the parameter sets that were selected by BARE runs on subspaces of the initial parameter space. The new approach results in faster convergence toward the optimal parameter set using minimum training/calibration data and fewer sets of parameter values. The efficacy of the localized methodology is also compared with the previously used BARE algorithm.
Spits, Christine; Wallace, Luke; Reinke, Karin
2017-01-01
Visual assessment, following guides such as the Overall Fuel Hazard Assessment Guide (OFHAG), is a common approach for assessing the structure and hazard of varying bushfire fuel layers. Visual assessments can be vulnerable to imprecision due to subjectivity between assessors, while emerging techniques such as image-based point clouds can offer land managers potentially more repeatable descriptions of fuel structure. This study compared the variability of estimates of surface and near-surface fuel attributes generated by eight assessment teams using the OFHAG and Fuels3D, a smartphone method utilising image-based point clouds, within three assessment plots in an Australian lowland forest. Surface fuel hazard scores derived from underpinning attributes were also assessed. Overall, this study found considerable variability between teams on most visually assessed variables, resulting in inconsistent hazard scores. Variability was observed within point cloud estimates but was, however, on average two to eight times less than that seen in visual estimates, indicating greater consistency and repeatability of this method. It is proposed that while variability within the Fuels3D method may be overcome through improved methods and equipment, inconsistencies in the OFHAG are likely due to the inherent subjectivity between assessors, which may be more difficult to overcome. This study demonstrates the capability of the Fuels3D method to efficiently and consistently collect data on fuel hazard and structure, and, as such, this method shows potential for use in fire management practices where accurate and reliable data is essential. PMID:28425957
Sample Size and Item Parameter Estimation Precision When Utilizing the One-Parameter "Rasch" Model
ERIC Educational Resources Information Center
Custer, Michael
2015-01-01
This study examines the relationship between sample size and item parameter estimation precision when utilizing the one-parameter model. Item parameter estimates are examined relative to "true" values by evaluating the decline in root mean squared deviation (RMSD) and the number of outliers as sample size increases. This occurs across…
A Comparative Study of Distribution System Parameter Estimation Methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Yannan; Williams, Tess L.; Gourisetti, Sri Nikhil Gup
2016-07-17
In this paper, we compare two parameter estimation methods for distribution systems: residual sensitivity analysis and state-vector augmentation with a Kalman filter. These two methods were originally proposed for transmission systems, and are still the most commonly used methods for parameter estimation. Distribution systems have much lower measurement redundancy than transmission systems; therefore, estimating parameters is much more difficult. To increase the robustness of parameter estimation, the two methods are applied with combined measurement snapshots (measurement sets taken at different points in time), so that the redundancy for computing the parameter values is increased. The advantages and disadvantages of both methods are discussed. The results of this paper show that state-vector augmentation is a better approach for parameter estimation in distribution systems. Simulation studies are done on a modified version of the IEEE 13-Node Test Feeder with varying levels of measurement noise and non-zero error in the other system model parameters.
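A scalar toy version of state-vector augmentation can make the idea concrete: a single unknown parameter is modeled as a near-constant random walk and refined as measurement snapshots arrive. All symbols and values below are illustrative, not from the paper:

```python
import numpy as np

g_true = 2.5                         # "true" parameter generating the data
rng = np.random.default_rng(1)

g_hat, P = 1.0, 4.0                  # initial parameter estimate and its variance
Q, R = 1e-6, 0.05 ** 2               # random-walk process noise, measurement noise

for _ in range(200):                 # combined snapshots increase redundancy
    u = rng.uniform(0.9, 1.1)                 # known input for this snapshot
    z = g_true * u + rng.normal(0.0, 0.05)    # noisy measurement
    P += Q                                    # predict: parameter ~ random walk
    H = u                                     # measurement sensitivity to g
    K = P * H / (H * P * H + R)               # Kalman gain
    g_hat += K * (z - H * g_hat)              # update estimate
    P *= (1.0 - K * H)                        # update variance

print(g_hat)   # settles near 2.5
```

In the full method the parameter is appended to the network state vector and the same predict/update cycle runs jointly over states and parameters; pooling many snapshots is what compensates for the low per-snapshot redundancy.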
Price, Malcolm J; Ades, A E; Soldan, Kate; Welton, Nicky J; Macleod, John; Simms, Ian; DeAngelis, Daniela; Turner, Katherine Me; Horner, Paddy J
2016-03-01
The evidence base supporting the National Chlamydia Screening Programme, initiated in 2003, has been questioned repeatedly, with little consensus on modelling assumptions, parameter values or evidence sources to be used in cost-effectiveness analyses. The purpose of this project was to assemble all available evidence on the prevalence and incidence of Chlamydia trachomatis (CT) in the UK and its sequelae, pelvic inflammatory disease (PID), ectopic pregnancy (EP) and tubal factor infertility (TFI) to review the evidence base in its entirety, assess its consistency and, if possible, arrive at a coherent set of estimates consistent with all the evidence. Evidence was identified using 'high-yield' strategies. Bayesian Multi-Parameter Evidence Synthesis models were constructed for separate subparts of the clinical and population epidemiology of CT. Where possible, different types of data sources were statistically combined to derive coherent estimates. Where evidence was inconsistent, evidence sources were re-interpreted and new estimates derived on a post-hoc basis. An internally coherent set of estimates was generated, consistent with a multifaceted evidence base, fertility surveys and routine UK statistics on PID and EP. Among the key findings were that the risk of PID (symptomatic or asymptomatic) following an untreated CT infection is 17.1% [95% credible interval (CrI) 6% to 29%] and the risk of salpingitis is 7.3% (95% CrI 2.2% to 14.0%). In women aged 16-24 years, screened at annual intervals, at best, 61% (95% CrI 55% to 67%) of CT-related PID and 22% (95% CrI 7% to 43%) of all PID could be directly prevented. For women aged 16-44 years, the proportions of PID, EP and TFI that are attributable to CT are estimated to be 20% (95% CrI 6% to 38%), 4.9% (95% CrI 1.2% to 12%) and 29% (95% CrI 9% to 56%), respectively. 
The prevalence of TFI in the UK in women at the end of their reproductive lives is 1.1%: this is consistent with all PID carrying a relatively high risk of reproductive damage, whether diagnosed or not. Every 1000 CT infections in women aged 16-44 years, on average, gives rise to approximately 171 episodes of PID and 73 of salpingitis, 2.0 EPs and 5.1 women with TFI at age 44 years. The study establishes a set of interpretations of the major studies and study designs, under which a coherent set of estimates can be generated. CT is a significant cause of PID and TFI. CT screening is of benefit to the individual, but detection and treatment of incident infection may be more beneficial. Women with lower abdominal pain need better advice on when to seek early medical attention to avoid risk of reproductive damage. The study provides new insights into the reproductive risks of PID and the role of CT. Further research is required on the proportions of PID, EP and TFI attributable to CT to confirm predictions made in this report, and to improve the precision of key estimates. The cost-effectiveness of screening should be re-evaluated using the findings of this report. The Medical Research Council grant G0801947.
Reliability and validity of the Parenting Scale of Inconsistency.
Yoshizumi, Takahiro; Murase, Satomi; Murakami, Takashi; Takai, Jiro
2006-08-01
The purposes of the present study were to develop a Parenting Scale of Inconsistency and to evaluate its initial reliability and validity. The 12 items assess the inconsistency among parents' moods, behaviors, and attitudes toward children. In the primary study, 517 participants completed three measures: the new Parenting Scale of Inconsistency, the Parental Bonding Instrument, and the Depression Scale of the General Health Questionnaire. The Parenting Scale of Inconsistency had good test-retest reliability of .85 and internal consistency of .88 (Cronbach coefficient alpha). Construct validity was good as Inconsistency scores were significantly correlated with the Care and Overprotection scores of the Parental Bonding Instrument and with the Depression scores. Moreover, Inconsistency scores' relation with a dimension of parenting style distinct from Care and Overprotection suggested that the Parenting Scale of Inconsistency had factorial validity. This scale seems a potential measure for examining the relationships between inconsistent parenting and the mental health of children.
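Cronbach's coefficient alpha, the internal-consistency measure reported above, can be computed directly from an item-score matrix. The scores below are invented, not the study's data:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's coefficient alpha for an (n_respondents, k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)         # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the summed scale
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Toy data: 6 respondents on a hypothetical 4-item inconsistency scale
scores = np.array([[3, 4, 3, 4],
                   [2, 2, 3, 2],
                   [4, 4, 4, 5],
                   [1, 2, 1, 2],
                   [3, 3, 4, 3],
                   [5, 4, 5, 5]])
print(round(cronbach_alpha(scores), 2))   # about 0.95 for this toy scale
```

When items covary strongly, the variance of the summed scale far exceeds the sum of the item variances, and alpha approaches 1, as in the .88 reported for the 12-item scale.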
An Exploration of Changes in the Measurement of Mammography in the National Health Interview Survey.
Gonzales, Felisa A; Willis, Gordon B; Breen, Nancy; Yan, Ting; Cronin, Kathy A; Taplin, Stephen H; Yu, Mandi
2017-11-01
Background: Using the National Health Interview Survey (NHIS), we examined the effect of question wording on estimates of past-year mammography among racially/ethnically diverse women ages 40-49 and 50-74 without a history of breast cancer. Methods: Data from one-part ("Have you had a mammogram during the past 12 months?") and two-part ("Have you ever had a mammogram"; "When did you have your most recent mammogram?") mammography history questions administered in the 2008, 2011, and 2013 NHIS were analyzed. χ2 tests provided estimates of changes in mammography when question wording was either the same (two-part question) or differed (two-part question followed by one-part question) in the two survey years compared. Crosstabulations and regression models assessed the type, extent, and correlates of inconsistent responses to the two questions in 2013. Results: Reports of past-year mammography were slightly higher in years when the one-part question was asked than when the two-part question was asked. Nearly 10% of women provided inconsistent responses to the two questions asked in 2013. Black women ages 50 to 74 [adjusted OR (aOR), 1.50; 95% confidence interval (CI), 1.16-1.93] and women ages 40-49 in poor health (aOR, 2.22; 95% CI, 1.09-4.52) had higher odds of inconsistent responses; women without a usual source of care had lower odds (40-49: aOR, 0.42; 95% CI, 0.21-0.85; 50-74: aOR, 0.42; 95% CI, 0.24-0.74). Conclusions: Self-reports of mammography are sensitive to question wording. Researchers should use equivalent questions that have been designed to minimize response biases such as telescoping and social desirability. Impact: Trend analyses relying on differently worded questions may be misleading and conceal disparities. Cancer Epidemiol Biomarkers Prev; 26(11); 1611-8. ©2017 American Association for Cancer Research.
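The adjusted odds ratios above come from regression models, but the underlying crude calculation from a 2x2 table is easy to sketch; the counts below are hypothetical, not the NHIS data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio and 95% CI from a 2x2 table.

    a = exposed & inconsistent, b = exposed & consistent,
    c = unexposed & inconsistent, d = unexposed & consistent.
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)       # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, (lo, hi)

# Hypothetical counts: inconsistency by some exposure of interest
print(odds_ratio_ci(30, 170, 45, 455))
```

An OR above 1 with a CI excluding 1, as for the aOR of 1.50 reported above, indicates higher odds of inconsistent responding in the exposed group.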
Deng, Qingqiong; Zhou, Mingquan; Wu, Zhongke; Shui, Wuyang; Ji, Yuan; Wang, Xingce; Liu, Ching Yiu Jessica; Huang, Youliang; Jiang, Haiyan
2016-02-01
Craniofacial reconstruction recreates a facial outlook from the cranium, based on the relationship between the face and the skull, to assist identification. However, craniofacial structures are very complex, and this relationship is not the same in different craniofacial regions. Several regional methods have recently been proposed: these methods segment the face and skull into regions, learn the relationship of each region independently, estimate the facial regions for a given skull, and finally glue them together to generate a face. Most of these regional methods use vertex coordinates to represent the regions, and they define a uniform coordinate system for all of the regions. Consequently, the inconsistency in the positions of regions between different individuals is not eliminated before learning the relationships between the face and skull regions, and this reduces the accuracy of the craniofacial reconstruction. To solve this problem, an improved regional method involving two types of coordinate adjustment is proposed in this paper. One is a global coordinate adjustment performed on the skulls and faces to eliminate inconsistency in the position and pose of the heads; the other is a local coordinate adjustment performed on the skull and face regions to eliminate inconsistency in the positions of these regions. After these two coordinate adjustments, partial least squares regression (PLSR) is used to estimate the relationship between each face region and the corresponding skull region. To obtain a more accurate reconstruction, a new fusion strategy is also proposed to preserve the reconstructed feature regions when gluing the facial regions together. This is based on the observation that the feature regions usually have smaller reconstruction errors than the rest of the face.
The results demonstrate that the coordinate adjustments and the new fusion strategy can significantly improve the craniofacial reconstructions. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
2011-01-01
In systems biology, experimentally measured parameters are not always available, necessitating the use of computationally based parameter estimation. In order to rely on estimated parameters, it is critical to first determine which parameters can be estimated for a given model and measurement set. This is done with parameter identifiability analysis. A kinetic model of the sucrose accumulation in the sugar cane culm tissue developed by Rohwer et al. was taken as a test case model. What differentiates this approach is the integration of an orthogonal-based local identifiability method into the unscented Kalman filter (UKF), rather than using the more common observability-based method which has inherent limitations. It also introduces a variable step size based on the system uncertainty of the UKF during the sensitivity calculation. This method identified 10 out of 12 parameters as identifiable. These ten parameters were estimated using the UKF, which was run 97 times. Throughout the repetitions the UKF proved to be more consistent than the estimation algorithms used for comparison. PMID:21989173
Dual Extended Kalman Filter for the Identification of Time-Varying Human Manual Control Behavior
NASA Technical Reports Server (NTRS)
Popovici, Alexandru; Zaal, Peter M. T.; Pool, Daan M.
2017-01-01
A Dual Extended Kalman Filter was implemented for the identification of time-varying human manual control behavior. Two filters that run concurrently were used, a state filter that estimates the equalization dynamics, and a parameter filter that estimates the neuromuscular parameters and time delay. Time-varying parameters were modeled as a random walk. The filter successfully estimated time-varying human control behavior in both simulated and experimental data. Simple guidelines are proposed for the tuning of the process and measurement covariance matrices and the initial parameter estimates. The tuning was performed on simulation data, and when applied on experimental data, only an increase in measurement process noise power was required in order for the filter to converge and estimate all parameters. A sensitivity analysis to initial parameter estimates showed that the filter is more sensitive to poor initial choices of neuromuscular parameters than equalization parameters, and bad choices for initial parameters can result in divergence, slow convergence, or parameter estimates that do not have a real physical interpretation. The promising results when applied to experimental data, together with its simple tuning and low dimension of the state-space, make the use of the Dual Extended Kalman Filter a viable option for identifying time-varying human control parameters in manual tracking tasks, which could be used in real-time human state monitoring and adaptive human-vehicle haptic interfaces.
Hill, Mary Catherine
1992-01-01
This report documents a new version of the U.S. Geological Survey modular, three-dimensional, finite-difference, ground-water flow model (MODFLOW) which, with the new Parameter-Estimation Package that also is documented in this report, can be used to estimate parameters by nonlinear regression. The new version of MODFLOW is called MODFLOWP (pronounced MOD-FLOW*P), and functions nearly identically to MODFLOW when the Parameter-Estimation Package is not used. Parameters are estimated by minimizing a weighted least-squares objective function by the modified Gauss-Newton method or by a conjugate-direction method. Parameters used to calculate the following MODFLOW model inputs can be estimated: Transmissivity and storage coefficient of confined layers; hydraulic conductivity and specific yield of unconfined layers; vertical leakance; vertical anisotropy (used to calculate vertical leakance); horizontal anisotropy; hydraulic conductance of the River, Streamflow-Routing, General-Head Boundary, and Drain Packages; areal recharge rates; maximum evapotranspiration; pumpage rates; and the hydraulic head at constant-head boundaries. Any spatial variation in parameters can be defined by the user. Data used to estimate parameters can include existing independent estimates of parameter values, observed hydraulic heads or temporal changes in hydraulic heads, and observed gains and losses along head-dependent boundaries (such as streams). Model output includes statistics for analyzing the parameter estimates and the model; these statistics can be used to quantify the reliability of the resulting model, to suggest changes in model construction, and to compare results of models constructed in different ways.
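The Gauss-Newton minimization of a weighted least-squares objective mentioned above can be sketched in a few lines. This is a generic, undamped illustration on a toy exponential "drawdown" model, not the modified Gauss-Newton algorithm implemented in MODFLOWP; the model, weights, and starting values are hypothetical.

```python
import numpy as np

def gauss_newton(f, jac, y, w, p0, n_iter=20):
    """Minimize sum_i w_i * (y_i - f(p)_i)**2 by Gauss-Newton iteration."""
    p = np.asarray(p0, dtype=float)
    W = np.diag(w)
    for _ in range(n_iter):
        r = y - f(p)              # residuals of the current fit
        J = jac(p)                # sensitivity (Jacobian) matrix
        # Normal equations: (J^T W J) dp = J^T W r
        dp = np.linalg.solve(J.T @ W @ J, J.T @ W @ r)
        p = p + dp
    return p

# Toy model: head h(t) = a * exp(-b * t); estimate a and b.
t = np.linspace(0.1, 5.0, 30)
model = lambda p: p[0] * np.exp(-p[1] * t)
jacobian = lambda p: np.column_stack([np.exp(-p[1] * t),
                                      -p[0] * t * np.exp(-p[1] * t)])
y = model(np.array([10.0, 0.7]))       # noise-free synthetic observations
w = np.ones_like(y)                    # equal weights
p_hat = gauss_newton(model, jacobian, y, w, p0=[9.0, 0.75])
```

In practice (as in MODFLOWP) the weights encode observation reliability, and the iteration is damped to keep steps well-behaved far from the optimum.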
Model-independent analysis of quark mass matrices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Choudhury, D.; Sarkar, U.
1989-06-01
In view of the apparent inconsistency of the Stech, Fritzsch-Stech, and Fritzsch-Shin models and only marginal agreement of the Fritzsch and modified Fritzsch-Stech models with recent data on B_d^0–B̄_d^0 mixing, we analyze the general quark mass matrices for three generations. Phenomenological considerations restrict the range of parameters involved to different sectors. In the present framework, the constraints corresponding to various Ansätze have been discussed.
Nonlinear adaptive control system design with asymptotically stable parameter estimation error
NASA Astrophysics Data System (ADS)
Mishkov, Rumen; Darmonski, Stanislav
2018-01-01
The paper presents a new general method for nonlinear adaptive system design with asymptotic stability of the parameter estimation error. The advantages of the approach include asymptotic unknown-parameter estimation without persistent excitation and the capability to directly control the transient response time of the estimates. The proposed method modifies the basic parameter estimation dynamics designed via a known nonlinear adaptive control approach. The modification is based on the generalised prediction error, a priori constraints with a hierarchical parameter projection algorithm, and the stable data accumulation concept. The data accumulation principle is the main tool for achieving asymptotic unknown-parameter estimation; it relies on the parametric identifiability property introduced for the system. Necessary and sufficient conditions for exponential stability of the data accumulation dynamics are derived. The approach is applied to nonlinear adaptive speed-tracking vector control of a three-phase induction motor.
Data-Adaptive Bias-Reduced Doubly Robust Estimation.
Vermeulen, Karel; Vansteelandt, Stijn
2016-05-01
Doubly robust estimators have now been proposed for a variety of target parameters in the causal inference and missing data literature. These consistently estimate the parameter of interest under a semiparametric model when one of two nuisance working models is correctly specified, regardless of which. The recently proposed bias-reduced doubly robust estimation procedure aims to partially retain this robustness in more realistic settings where both working models are misspecified. These so-called bias-reduced doubly robust estimators make use of special (finite-dimensional) nuisance parameter estimators that are designed to locally minimize the squared asymptotic bias of the doubly robust estimator in certain directions of these finite-dimensional nuisance parameters under misspecification of both parametric working models. In this article, we extend this idea to incorporate the use of data-adaptive estimators (infinite-dimensional nuisance parameters), by exploiting the bias reduction estimation principle in the direction of only one nuisance parameter. We additionally provide an asymptotic linearity theorem which gives the influence function of the proposed doubly robust estimator under correct specification of a parametric nuisance working model for the missingness mechanism/propensity score but a possibly misspecified (finite- or infinite-dimensional) outcome working model. Simulation studies confirm the desirable finite-sample performance of the proposed estimators relative to a variety of other doubly robust estimators.
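The doubly robust construction discussed above can be illustrated with the standard augmented inverse-probability-weighting (AIPW) form for a mean with outcomes missing at random. This is a toy sketch, not the bias-reduced estimator of the article: the simulated data-generating process is invented, the propensity is taken as known, and the outcome working model is a simple OLS fit on the observed cases.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 5000
x = rng.normal(size=n)
pi_true = 1.0 / (1.0 + np.exp(-(0.5 + x)))    # missingness propensity
r = rng.uniform(size=n) < pi_true             # r = 1: outcome observed
y = 2.0 + x + rng.normal(size=n)              # true population mean E[Y] = 2
# (In real data y would be unobserved where r = 0; here it is simulated
#  in full only so the truth is known.)

# Outcome working model m(x): OLS on the observed cases.
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X[r], y[r], rcond=None)
m_hat = X @ beta

# AIPW / doubly robust estimate of E[Y]: outcome regression plus an
# inverse-probability-weighted correction on the observed residuals.
aipw = np.mean(m_hat + r / pi_true * (y - m_hat))
```

The estimator stays consistent if either `m_hat` or `pi_true` is replaced by a misspecified working model, which is the robustness property the bias-reduced variants aim to preserve under joint misspecification.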
Chin, Wen Cheong; Lee, Min Cherng; Yap, Grace Lee Ching
2016-01-01
High-frequency financial data modelling has become one of the important research areas in financial econometrics. However, possible structural breaks in volatile financial time series often trigger inconsistency issues in volatility estimation. In this study, we propose a structural-break heavy-tailed heterogeneous autoregressive (HAR) volatility econometric model enhanced with jump-robust estimators. The breakpoints in the volatility are captured by dummy variables after detection by the Bai-Perron sequential multiple-breakpoint procedure. In order to further deal with possible abrupt jumps in the volatility, the jump-robust volatility estimators are composed using the nearest-neighbor truncation approach, namely the minimum and median realized volatility. With the structural-break improvements in both the models and the volatility estimators, the empirical findings show that the modified HAR model provides the best in-sample and out-of-sample forecast performance compared with the standard HAR models. Accurate volatility forecasts have a direct influence on risk management and investment portfolio analysis.
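The baseline HAR regression underlying the model above is simple to write down: next-day realized volatility is regressed on daily, weekly, and monthly averages of past realized volatility. The sketch below fits that plain HAR by OLS on simulated data; the coefficients, horizons (5 and 22 days), and data-generating process are illustrative, and the structural-break dummies and jump-robust estimators of the study are omitted.

```python
import numpy as np

def har_design(rv):
    """Build HAR regressors: daily, weekly (5-day), monthly (22-day) averages."""
    n = len(rv)
    rows, target = [], []
    for t in range(22, n - 1):
        daily = rv[t]
        weekly = rv[t - 4:t + 1].mean()
        monthly = rv[t - 21:t + 1].mean()
        rows.append([1.0, daily, weekly, monthly])
        target.append(rv[t + 1])
    return np.array(rows), np.array(target)

# Simulate realized volatility from a known HAR data-generating process.
rng = np.random.default_rng(1)
n = 2000
rv = np.empty(n)
rv[:22] = 1.0
for t in range(21, n - 1):
    rv[t + 1] = (0.1 + 0.3 * rv[t] + 0.3 * rv[t - 4:t + 1].mean()
                 + 0.2 * rv[t - 21:t + 1].mean()
                 + 0.05 * rng.standard_normal())

X, y = har_design(rv)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # OLS estimates of HAR coefficients
```

A structural-break variant would append dummy-variable columns to `X` for the regimes detected by the Bai-Perron procedure.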
da Silveira, Christian L; Mazutti, Marcio A; Salau, Nina P G
2016-07-08
Process modeling can lead to advantages such as helping in process control, reducing process costs and improving product quality. This work proposes a solid-state fermentation distributed parameter model composed of seven differential equations with seventeen parameters to represent the process. Parameter estimation with a parameter identifiability analysis (PIA) is also performed to build an accurate model with optimum parameters. Statistical tests were made to verify the model accuracy with the estimated parameters under different assumptions. The results show that the model assuming substrate inhibition better represents the process. It was also shown that eight of the seventeen original model parameters were nonidentifiable, and better results were obtained when these parameters were removed from the estimation procedure. Therefore, PIA can be useful to the estimation procedure, since it may reduce the number of parameters to be evaluated. Further, PIA improved the model results, showing itself to be an important procedure. © 2016 American Institute of Chemical Engineers Biotechnol. Prog., 32:905-917, 2016. © 2016 American Institute of Chemical Engineers.
Estimation of the Parameters in a Two-State System Coupled to a Squeezed Bath
NASA Astrophysics Data System (ADS)
Hu, Yao-Hua; Yang, Hai-Feng; Tan, Yong-Gang; Tao, Ya-Ping
2018-04-01
Estimation of the phase and weight parameters of a two-state system in a squeezed bath, by calculating the quantum Fisher information, is investigated. The results show that, both for the phase estimation and for the weight estimation, the quantum Fisher information always decays with time and changes periodically with the phases. The estimation precision can be enhanced by choosing proper values of the phases and the squeezing parameter. These results can serve as a reference for the practical application of parameter estimation in a squeezed bath.
Experimental design and efficient parameter estimation in preclinical pharmacokinetic studies.
Ette, E I; Howie, C A; Kelman, A W; Whiting, B
1995-05-01
A Monte Carlo simulation technique used to evaluate the effect of the arrangement of concentrations on the efficiency of estimating population pharmacokinetic parameters in the preclinical setting is described. Although the simulations were restricted to the one-compartment model with intravenous bolus input, they provide a basis for discussing some structural aspects involved in designing a destructive ("quantic") preclinical population pharmacokinetic study with a fixed sample size, as is usually the case in such studies. The efficiency of parameter estimation obtained with sampling strategies based on three- and four-time-point designs was evaluated in terms of percent prediction error, design number, individual and joint confidence interval coverage for parameter estimates, and correlation analysis. The data sets contained random terms for both inter-animal and residual intra-animal variability. The results showed that the typical population parameter estimates for clearance and volume were efficiently (accurately and precisely) estimated for both designs, while interanimal variability (the only random-effect parameter that could be estimated) was inefficiently (inaccurately and imprecisely) estimated with most sampling schedules of the two designs. The exact location of the third and fourth time points for the three- and four-time-point designs, respectively, was not critical to the efficiency of the overall estimation of all population parameters of the model. However, some individual population pharmacokinetic parameters were sensitive to the location of these times.
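The simulated setting above (one-compartment IV bolus, destructive sampling, inter-animal and residual variability) can be sketched as a small Monte Carlo data generator. All numerical values, the log-normal variability model, and the way animals are spread over the design times are assumptions for illustration, not the study's actual design.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_study(n_animals, times, dose, cl_pop, v_pop, omega, sigma):
    """Simulate destructive sampling: each animal yields one concentration.

    One-compartment IV bolus: C(t) = (dose / V) * exp(-(CL / V) * t),
    with log-normal interanimal variability (omega) and proportional
    residual error (sigma).
    """
    conc = []
    for i in range(n_animals):
        cl = cl_pop * np.exp(omega * rng.standard_normal())
        v = v_pop * np.exp(omega * rng.standard_normal())
        t = times[i % len(times)]       # animals spread evenly over design times
        c = (dose / v) * np.exp(-(cl / v) * t)
        conc.append(c * (1.0 + sigma * rng.standard_normal()))
    return np.array(conc)

# Hypothetical three-time-point design: 12 animals, 4 per sampling time.
times = np.array([0.5, 2.0, 6.0])
c = simulate_study(12, times, dose=100.0, cl_pop=5.0, v_pop=20.0,
                   omega=0.2, sigma=0.1)
```

Repeating such simulations and refitting the population model at each replicate is how the percent prediction error and coverage statistics described in the abstract are accumulated.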
Robust gaze-steering of an active vision system against errors in the estimated parameters
NASA Astrophysics Data System (ADS)
Han, Youngmo
2015-01-01
Gaze-steering is often used to broaden the viewing range of an active vision system. Gaze-steering procedures are usually based on estimated parameters such as image position, image velocity, depth and camera calibration parameters. However, there may be uncertainties in these estimated parameters because of measurement noise and estimation errors. In this case, robust gaze-steering cannot be guaranteed. To compensate for such problems, this paper proposes a gaze-steering method based on a linear matrix inequality (LMI). In this method, we first propose a proportional derivative (PD) control scheme on the unit sphere that does not use depth parameters. This proposed PD control scheme can avoid uncertainties in the estimated depth and camera calibration parameters, as well as inconveniences in their estimation process, including the use of auxiliary feature points and highly non-linear computation. Furthermore, the control gain of the proposed PD control scheme on the unit sphere is designed using LMI such that the designed control is robust in the presence of uncertainties in the other estimated parameters, such as image position and velocity. Simulation results demonstrate that the proposed method provides a better compensation for uncertainties in the estimated parameters than the contemporary linear method and steers the gaze of the camera more steadily over time than the contemporary non-linear method.
An Evaluation of Hierarchical Bayes Estimation for the Two- Parameter Logistic Model.
ERIC Educational Resources Information Center
Kim, Seock-Ho
Hierarchical Bayes procedures for the two-parameter logistic item response model were compared for estimating item parameters. Simulated data sets were analyzed using two different Bayes estimation procedures, the two-stage hierarchical Bayes estimation (HB2) and the marginal Bayesian with known hyperparameters (MB), and marginal maximum…
Estimation Methods for One-Parameter Testlet Models
ERIC Educational Resources Information Center
Jiao, Hong; Wang, Shudong; He, Wei
2013-01-01
This study demonstrated the equivalence between the Rasch testlet model and the three-level one-parameter testlet model and explored the Markov Chain Monte Carlo (MCMC) method for model parameter estimation in WINBUGS. The estimation accuracy from the MCMC method was compared with those from the marginalized maximum likelihood estimation (MMLE)…
Estimation of pharmacokinetic parameters from non-compartmental variables using Microsoft Excel.
Dansirikul, Chantaratsamon; Choi, Malcolm; Duffull, Stephen B
2005-06-01
This study was conducted to develop a method, termed 'back analysis (BA)', for converting non-compartmental variables to compartment model dependent pharmacokinetic parameters for both one- and two-compartment models. A Microsoft Excel spreadsheet was implemented with the use of Solver and visual basic functions. The performance of the BA method in estimating pharmacokinetic parameter values was evaluated by comparing the parameter values obtained to a standard modelling software program, NONMEM, using simulated data. The results show that the BA method was reasonably precise and provided low bias in estimating fixed and random effect parameters for both one- and two-compartment models. The pharmacokinetic parameters estimated from the BA method were similar to those of NONMEM estimation.
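For the one-compartment case, the core of such a "back analysis" is a closed-form conversion from non-compartmental variables (AUC and terminal half-life) to compartmental parameters. The sketch below shows that conversion with a round-trip check; it is a minimal illustration of the idea, not the spreadsheet method of the study.

```python
import math

def one_compartment_from_nca(dose, auc, t_half):
    """Convert non-compartmental AUC and terminal half-life to CL, V, k
    for a one-compartment IV bolus model."""
    cl = dose / auc                 # clearance from dose and exposure
    k = math.log(2.0) / t_half      # elimination rate constant
    v = cl / k                      # volume of distribution
    return cl, v, k

# Round-trip check: parameters -> NCA variables -> parameters.
cl_true, v_true = 5.0, 20.0
k_true = cl_true / v_true
auc = 100.0 / cl_true               # AUC = dose / CL for an IV bolus
t_half = math.log(2.0) / k_true
cl, v, k = one_compartment_from_nca(100.0, auc, t_half)
```

The two-compartment case handled by the study has no such simple closed form, which is why a numerical solver (Excel's Solver in the original work) is needed there.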
SBML-PET-MPI: a parallel parameter estimation tool for Systems Biology Markup Language based models.
Zi, Zhike
2011-04-01
Parameter estimation is crucial for the modeling and dynamic analysis of biological systems. However, implementing parameter estimation is time consuming and computationally demanding. Here, we introduced a parallel parameter estimation tool for Systems Biology Markup Language (SBML)-based models (SBML-PET-MPI). SBML-PET-MPI allows the user to perform parameter estimation and parameter uncertainty analysis by collectively fitting multiple experimental datasets. The tool is developed and parallelized using the message passing interface (MPI) protocol, which provides good scalability with the number of processors. SBML-PET-MPI is freely available for non-commercial use at http://www.bioss.uni-freiburg.de/cms/sbml-pet-mpi.html or http://sites.google.com/site/sbmlpetmpi/.
Bootstrap Standard Errors for Maximum Likelihood Ability Estimates When Item Parameters Are Unknown
ERIC Educational Resources Information Center
Patton, Jeffrey M.; Cheng, Ying; Yuan, Ke-Hai; Diao, Qi
2014-01-01
When item parameter estimates are used to estimate the ability parameter in item response models, the standard error (SE) of the ability estimate must be corrected to reflect the error carried over from item calibration. For maximum likelihood (ML) ability estimates, a corrected asymptotic SE is available, but it requires a long test and the…
NASA Technical Reports Server (NTRS)
Suit, W. T.; Cannaday, R. L.
1979-01-01
The longitudinal and lateral stability and control parameters for a high-wing general aviation airplane are examined. Estimates using flight data obtained at various flight conditions within the normal range of the aircraft are presented. The estimation techniques, an output error technique (maximum likelihood) and an equation error technique (linear regression), are presented. The longitudinal static parameters are estimated from climbing, descending, and quasi-steady-state flight data. The lateral excitations involve a combination of rudder and ailerons. The sensitivity of the aircraft modes of motion to variations in the parameter estimates is discussed.
NASA Technical Reports Server (NTRS)
Klein, V.
1979-01-01
Two identification methods, the equation error method and the output error method, are used to estimate stability and control parameter values from flight data for a low-wing, single-engine, general aviation airplane. The estimated parameters from both methods are in very good agreement, primarily because of the sufficient accuracy of the measured data. The estimated static parameters also agree with the results from steady flights. The effects of power and of different input forms are demonstrated. Examination of all available results gives the best values of the estimated parameters and specifies their accuracies.
Ensemble-Based Parameter Estimation in a Coupled GCM Using the Adaptive Spatial Average Method
Liu, Y.; Liu, Z.; Zhang, S.; ...
2014-05-29
Ensemble-based parameter estimation for a climate model is emerging as an important topic in climate research. For a complex system such as a coupled ocean–atmosphere general circulation model, the sensitivity and response of a model variable to a model parameter can vary spatially and temporally. An adaptive spatial average (ASA) algorithm is proposed to increase the efficiency of parameter estimation. Refined from a previous spatial average method, the ASA uses the ensemble spread as the criterion for selecting "good" values from the spatially varying posterior estimated parameter values; these good values are then averaged to give the final global uniform posterior parameter. In comparison with existing methods, the ASA parameter estimation has a superior performance: faster convergence and an enhanced signal-to-noise ratio.
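The selection-then-average step at the heart of the ASA idea can be sketched in a few lines. This is a schematic reading of the abstract, not the published algorithm: it assumes the "good" grid points are those with the smallest ensemble spread, and the retained fraction and simulated error model are invented for illustration.

```python
import numpy as np

def adaptive_spatial_average(theta, spread, frac=0.5):
    """Average the spatially varying posterior parameter over the grid
    points with the smallest ensemble spread (the 'good' estimates).

    theta  : posterior parameter estimate at each grid point
    spread : ensemble spread at each grid point (selection criterion)
    frac   : fraction of grid points retained
    """
    n_keep = max(1, int(frac * len(theta)))
    good = np.argsort(spread)[:n_keep]     # smallest-spread points
    return theta[good].mean()              # single global posterior value

rng = np.random.default_rng(3)
true_value = 0.7
spread = rng.uniform(0.05, 0.5, 500)
# Local estimates whose error scales with the local ensemble spread.
theta = true_value + spread * rng.standard_normal(500)
asa = adaptive_spatial_average(theta, spread, frac=0.3)
plain = theta.mean()
```

Filtering by spread before averaging discards the noisiest local estimates, which is the mechanism behind the faster convergence and improved signal-to-noise ratio claimed above.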
A Systematic Approach to Sensor Selection for Aircraft Engine Health Estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Garg, Sanjay
2009-01-01
A systematic approach for selecting an optimal suite of sensors for on-board aircraft gas turbine engine health estimation is presented. The methodology optimally chooses the engine sensor suite and the model tuning parameter vector to minimize the Kalman filter mean squared estimation error in the engine's health parameters or other unmeasured engine outputs. This technique specifically addresses the underdetermined estimation problem where there are more unknown system health parameters representing degradation than available sensor measurements. This paper presents the theoretical estimation error equations, and describes the optimization approach that is applied to select the sensors and model tuning parameters to minimize these errors. Two different model tuning parameter vector selection approaches are evaluated: the conventional approach of selecting a subset of health parameters to serve as the tuning parameters, and an alternative approach that selects tuning parameters as a linear combination of all health parameters. Results from the application of the technique to an aircraft engine simulation are presented, and compared to those from an alternative sensor selection strategy.
Estimating Soil Hydraulic Parameters using Gradient Based Approach
NASA Astrophysics Data System (ADS)
Rai, P. K.; Tripathi, S.
2017-12-01
The conventional way of estimating parameters of a differential equation is to minimize the error between the observations and their estimates. The estimates are produced from a forward solution (numerical or analytical) of the differential equation assuming a set of parameters. Parameter estimation using the conventional approach requires high computational cost, setting up of initial and boundary conditions, and formation of difference equations in case the forward solution is obtained numerically. Gaussian-process-based approaches like Gaussian Process Ordinary Differential Equation (GPODE) and Adaptive Gradient Matching (AGM) have been developed to estimate the parameters of ordinary differential equations without explicitly solving them. Claims have been made that these approaches can straightforwardly be extended to partial differential equations; however, this has never been demonstrated. This study extends the AGM approach to PDEs and applies it to estimating parameters of the Richards equation. Unlike the conventional approach, the AGM approach does not require explicitly setting up initial and boundary conditions, which is often difficult in real-world applications of the Richards equation. The developed methodology was applied to synthetic soil moisture data. It was seen that the proposed methodology can estimate the soil hydraulic parameters correctly and can be a potential alternative to the conventional method.
A variational approach to parameter estimation in ordinary differential equations.
Kaschek, Daniel; Timmer, Jens
2012-08-14
Ordinary differential equations are widely used in the fields of systems biology and chemical engineering to model chemical reaction networks. Numerous techniques have been developed to estimate parameters like rate constants, initial conditions or steady-state concentrations from time-resolved data. In contrast to this countable set of parameters, the estimation of entire courses of network components corresponds to an innumerable set of parameters. The approach presented in this work is able to deal with course estimation for extrinsic system inputs or intrinsic reactants, both not being constrained by the reaction network itself. Our method is based on variational calculus, which is carried out analytically to derive an augmented system of differential equations including the unconstrained components as ordinary state variables. Finally, conventional parameter estimation is applied to the augmented system, resulting in a combined estimation of courses and parameters. The combined estimation approach takes the uncertainty in input courses correctly into account. This leads to precise parameter estimates and correct confidence intervals. In particular, this implies that small motifs of large reaction networks can be analysed independently of the rest. Through the use of variational methods, elements from control theory and statistics are combined, allowing for future transfer of methods between the two fields.
Helsel, Dennis R.; Gilliom, Robert J.
1986-01-01
Estimates of distributional parameters (mean, standard deviation, median, interquartile range) are often desired for data sets containing censored observations. Eight methods for estimating these parameters have been evaluated by R. J. Gilliom and D. R. Helsel (this issue) using Monte Carlo simulations. To verify those findings, the same methods are now applied to actual water quality data. The best method (lowest root-mean-squared error (rmse)) over all parameters, sample sizes, and censoring levels is log probability regression (LR), the method found best in the Monte Carlo simulations. Best methods for estimating moment or percentile parameters separately are also identical to the simulations. Reliability of these estimates can be expressed as confidence intervals using rmse and bias values taken from the simulation results. Finally, a new simulation study shows that best methods for estimating uncensored sample statistics from censored data sets are identical to those for estimating population parameters. Thus this study and the companion study by Gilliom and Helsel form the basis for making the best possible estimates of either population parameters or sample statistics from censored water quality data, and for assessments of their reliability.
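The log probability regression method found best above can be sketched as a simplified regression-on-order-statistics (ROS) procedure for left-censored, lognormally distributed data: regress the logs of the detected values on normal quantiles of their plotting positions, impute the censored ranks from the fitted line, and compute statistics on the completed sample. This single-detection-limit sketch with Weibull plotting positions is one simplified variant, not necessarily the exact implementation evaluated in the study.

```python
import numpy as np
from statistics import NormalDist

def ros_lognormal(detects, n_censored):
    """Simplified regression-on-order-statistics for left-censored data.

    Fits log(detected values) against normal quantiles of their plotting
    positions, imputes the censored observations from the fitted line,
    and returns the mean and standard deviation of the completed sample.
    """
    nd = NormalDist()
    n = len(detects) + n_censored
    # Weibull plotting positions i/(n+1) for the full ordered sample.
    q = np.array([nd.inv_cdf((i + 1) / (n + 1)) for i in range(n)])
    logs = np.log(np.sort(np.asarray(detects, dtype=float)))
    # Under left-censoring the detected values occupy the upper ranks.
    slope, intercept = np.polyfit(q[n_censored:], logs, 1)
    imputed = np.exp(intercept + slope * q[:n_censored])
    full = np.concatenate([imputed, np.exp(logs)])
    return full.mean(), full.std(ddof=1)

# Synthetic water-quality data: lognormal sample censored below a limit.
rng = np.random.default_rng(4)
x = rng.lognormal(mean=1.0, sigma=0.5, size=200)
dl = 2.0                                     # detection limit
detects = x[x >= dl]
mean_hat, sd_hat = ros_lognormal(detects, int((x < dl).sum()))
```

Because the censored values are imputed from the fitted distribution rather than substituted with the detection limit or zero, the resulting mean and standard deviation avoid the systematic bias of simple substitution methods.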
Parameter estimation of qubit states with unknown phase parameter
NASA Astrophysics Data System (ADS)
Suzuki, Jun
2015-02-01
We discuss the problem of parameter estimation for a quantum two-level (qubit) system in the presence of an unknown phase parameter. We analyze trade-off relations for the mean square errors (MSEs) when estimating the relevant parameters with separable measurements, based on known precision bounds: the symmetric logarithmic derivative (SLD) Cramér-Rao (CR) bound and the Hayashi-Gill-Massar (HGM) bound. We investigate the optimal measurement that attains the HGM bound and discuss its properties. We show that the HGM bound for the relevant parameters can be attained asymptotically by using some fraction of the given n quantum states to estimate the phase parameter. We also discuss the Holevo bound, which can be attained asymptotically by a collective measurement.
Image informative maps for component-wise estimating parameters of signal-dependent noise
NASA Astrophysics Data System (ADS)
Uss, Mykhail L.; Vozel, Benoit; Lukin, Vladimir V.; Chehdi, Kacem
2013-01-01
We deal with the problem of blind parameter estimation of signal-dependent noise from mono-component image data. Multispectral or color images can be processed in a component-wise manner. The main results obtained rest on the assumption that the image texture and noise parameters estimation problems are interdependent. A two-dimensional fractal Brownian motion (fBm) model is used for locally describing image texture. A polynomial model is assumed for the purpose of describing the signal-dependent noise variance dependence on image intensity. Using the maximum likelihood approach, estimates of both fBm-model and noise parameters are obtained. It is demonstrated that Fisher information (FI) on noise parameters contained in an image is distributed nonuniformly over intensity coordinates (an image intensity range). It is also shown how to find the most informative intensities and the corresponding image areas for a given noisy image. The proposed estimator benefits from these detected areas to improve the estimation accuracy of signal-dependent noise parameters. Finally, the potential estimation accuracy (Cramér-Rao Lower Bound, or CRLB) of noise parameters is derived, providing confidence intervals of these estimates for a given image. In the experiment, the proposed and existing state-of-the-art noise variance estimators are compared for a large image database using CRLB-based statistical efficiency criteria.
Wagener, T.; Hogue, T.; Schaake, J.; Duan, Q.; Gupta, H.; Andreassian, V.; Hall, A.; Leavesley, G.
2006-01-01
The Model Parameter Estimation Experiment (MOPEX) is an international project aimed at developing enhanced techniques for the a priori estimation of parameters in hydrological models and in land surface parameterization schemes connected to atmospheric models. The MOPEX science strategy involves: database creation, a priori parameter estimation methodology development, parameter refinement or calibration, and the demonstration of parameter transferability. A comprehensive MOPEX database has been developed that contains historical hydrometeorological data and land surface characteristics data for many hydrological basins in the United States (US) and in other countries. This database is being continuously expanded to include basins from various hydroclimatic regimes throughout the world. MOPEX research has largely been driven by a series of international workshops that have brought interested hydrologists and land surface modellers together to exchange knowledge and experience in developing and applying parameter estimation techniques. With its focus on parameter estimation, MOPEX plays an important role in the international context of other initiatives such as GEWEX, HEPEX, PUB and PILPS. This paper outlines the MOPEX initiative, discusses its role in the scientific community, and briefly states future directions.
Transit Project Planning Guidance : Estimation of Transit Supply Parameters
DOT National Transportation Integrated Search
1984-04-01
This report discusses techniques applicable to the estimation of transit vehicle fleet requirements, vehicle-hours and vehicle-miles, and other related transit supply parameters. These parameters are used for estimating operating costs and certain ca...
Time Domain Estimation of Arterial Parameters using the Windkessel Model and the Monte Carlo Method
NASA Astrophysics Data System (ADS)
Gostuski, Vladimir; Pastore, Ignacio; Rodriguez Palacios, Gaspar; Vaca Diez, Gustavo; Moscoso-Vasquez, H. Marcela; Risk, Marcelo
2016-04-01
Numerous parameter estimation techniques exist for characterizing the arterial system using electrical circuit analogs. However, they are often limited by their requirements and usually high computational burden. Therefore, a new method for estimating arterial parameters based on Monte Carlo simulation is proposed. A three-element Windkessel model was used to represent the arterial system. The approach was to reduce the error between the calculated and physiological aortic pressure by randomly generating arterial parameter values, while keeping the arterial resistance constant. This last value was obtained for each subject using the arterial flow, and was a necessary consideration in order to obtain a unique set of values for the arterial compliance and peripheral resistance. The estimation technique was applied to in vivo data containing steady beats in mongrel dogs, and it reliably estimated the Windkessel arterial parameters. Furthermore, this method appears to be computationally efficient for on-line time-domain estimation of these parameters.
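The approach above, random generation of candidate parameters with one resistance held fixed, can be sketched with a forward-Euler three-element Windkessel and a synthetic inflow waveform. All numerical values (waveform, parameter ranges, fixed characteristic resistance) are invented for illustration and are not the study's experimental data.

```python
import numpy as np

def windkessel_pressure(q, dt, r_c, r_p, c, p0=80.0):
    """Three-element Windkessel: characteristic resistance r_c in series
    with a parallel compliance c and peripheral resistance r_p.
    Forward-Euler integration of C dPw/dt = q - Pw / r_p; P = Pw + r_c * q."""
    pw = np.empty(len(q))
    pw[0] = p0
    for i in range(len(q) - 1):
        pw[i + 1] = pw[i] + dt * (q[i] - pw[i] / r_p) / c
    return pw + r_c * q

rng = np.random.default_rng(5)
dt = 0.01
t = np.arange(0.0, 5.0, dt)
q = np.maximum(np.sin(2 * np.pi * t), 0.0) * 400.0    # pulsatile inflow (mL/s)
target = windkessel_pressure(q, dt, r_c=0.05, r_p=1.0, c=1.5)  # "measured" pressure

# Monte Carlo search: fix r_c, draw random (r_p, c) candidates, keep the best.
best_err, best = np.inf, None
for _ in range(2000):
    r_p = rng.uniform(0.5, 2.0)
    c = rng.uniform(0.5, 3.0)
    p = windkessel_pressure(q, dt, 0.05, r_p, c)
    err = np.mean((p - target) ** 2)
    if err < best_err:
        best_err, best = err, (r_p, c)
```

Fixing one resistance, as the abstract notes, is what makes the remaining (compliance, peripheral resistance) pair identifiable from a single pressure trace.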
SBML-PET: a Systems Biology Markup Language-based parameter estimation tool.
Zi, Zhike; Klipp, Edda
2006-11-01
The estimation of model parameters from experimental data remains a bottleneck for a major breakthrough in systems biology. We present a Systems Biology Markup Language (SBML) based Parameter Estimation Tool (SBML-PET). The tool is designed to enable parameter estimation for biological models including signaling pathways, gene regulation networks and metabolic pathways. SBML-PET supports import and export of the models in the SBML format. It can estimate the parameters by fitting a variety of experimental data from different experimental conditions. SBML-PET has a unique feature of supporting event definition in the SBML model. SBML models can also be simulated in SBML-PET. Stochastic Ranking Evolution Strategy (SRES) is incorporated in SBML-PET for parameter estimation jobs. A classic ODE solver called ODEPACK is used to solve the Ordinary Differential Equation (ODE) system. http://sysbio.molgen.mpg.de/SBML-PET/. The website also contains detailed documentation for SBML-PET.
Chaste, Pauline; Klei, Lambertus; Sanders, Stephan J; Murtha, Michael T; Hus, Vanessa; Lowe, Jennifer K; Willsey, A Jeremy; Moreno-De-Luca, Daniel; Yu, Timothy W; Fombonne, Eric; Geschwind, Daniel; Grice, Dorothy E; Ledbetter, David H; Lord, Catherine; Mane, Shrikant M; Lese Martin, Christa; Martin, Donna M; Morrow, Eric M; Walsh, Christopher A; Sutcliffe, James S; State, Matthew W; Devlin, Bernie; Cook, Edwin H; Kim, Soo-Jeong
2013-10-15
Brain development follows a different trajectory in children with autism spectrum disorders (ASD) than in typically developing children. A proxy for neurodevelopment could be head circumference (HC), but studies assessing HC and its clinical correlates in ASD have been inconsistent. This study investigates HC and clinical correlates in the Simons Simplex Collection cohort. We used a mixed linear model to estimate effects of covariates and the deviation from the expected HC given parental HC (genetic deviation). After excluding individuals with incomplete data, 7225 individuals in 1891 families remained for analysis. We examined the relationship between HC/genetic deviation of HC and clinical parameters. Gender, age, height, weight, genetic ancestry, and ASD status were significant predictors of HC (estimate of the ASD effect = .2 cm). HC was approximately normally distributed in probands and unaffected relatives, with only a few outliers. Genetic deviation of HC was also normally distributed, consistent with a random sampling of parental genes. Whereas larger HC than expected was associated with ASD symptom severity and regression, IQ decreased with the absolute value of the genetic deviation of HC. Measured against expected values derived from covariates of ASD subjects, statistical outliers for HC were uncommon. HC is a strongly heritable trait, and population norms for HC would be far more accurate if covariates including genetic ancestry, height, and age were taken into account. The association of diminishing IQ with absolute deviation from predicted HC values suggests HC could reflect subtle underlying brain development and warrants further investigation. © 2013 Society of Biological Psychiatry.
Hens, Koen; Berth, Mario; Armbruster, Dave; Westgard, Sten
2014-07-01
Six Sigma metrics were used to assess the analytical quality of automated clinical chemistry and immunoassay tests in a large Belgian clinical laboratory and to explore the importance of the source used for estimation of the allowable total error. Clinical laboratories are continually challenged to maintain analytical quality. However, it is difficult to measure assay quality objectively and quantitatively. The Sigma metric is a single number that estimates quality based on the traditional parameters used in the clinical laboratory: allowable total error (TEa), precision and bias. In this study, Sigma metrics were calculated for 41 clinical chemistry assays for serum and urine on five ARCHITECT c16000 chemistry analyzers. Controls at two analyte concentrations were tested and Sigma metrics were calculated using three different TEa targets (Ricos biological variability, CLIA, and RiliBÄK). Sigma metrics varied with analyte concentration, the TEa target, and among analyzers. Sigma values identified those assays that are analytically robust and require minimal quality control rules and those that exhibit more variability and require more complex rules. The analyzer-to-analyzer variability was assessed on the basis of Sigma metrics. Six Sigma is a more efficient way to control quality, but the lack of TEa targets for many analytes and the sometimes inconsistent TEa targets from different sources are important variables for the interpretation and the application of Sigma metrics in a routine clinical laboratory. Sigma metrics are a valuable means of comparing the analytical quality of two or more analyzers to ensure the comparability of patient test results.
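The Sigma metric combines the three quantities named in the abstract into one number; a minimal sketch of the conventional formula (the function name and example values are ours, not from the study):

```python
def sigma_metric(tea, bias, cv):
    """Sigma metric from allowable total error (TEa), bias, and imprecision (CV),
    all expressed as percentages at the control concentration."""
    return (tea - abs(bias)) / cv

# Illustrative values only: TEa = 10%, bias = 1%, CV = 1.5%
print(sigma_metric(10.0, 1.0, 1.5))  # -> 6.0
```

Assays around six sigma need only simple QC rules, while low-sigma assays call for more complex multirule schemes, which is the triage the abstract describes.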
Infrared Spectra and Band Strengths of CH3SH, an Interstellar Molecule
NASA Technical Reports Server (NTRS)
Hudson, R. L.
2016-01-01
Three solid phases of CH3SH (methanethiol or methyl mercaptan) have been prepared and their mid-infrared spectra recorded at 10-110 degrees Kelvin, with an emphasis on the 17-100 degrees Kelvin region. Refractive indices have been measured at two temperatures and used to estimate ice densities and infrared band strengths. Vapor pressures for the two crystalline phases of CH3SH at 110 degrees Kelvin are estimated. The behavior of amorphous CH3SH on warming is presented and discussed in terms of Ostwald's step rule. Comparisons to CH3OH under similar conditions are made, and some inconsistencies and ambiguities in the CH3SH literature are examined and corrected.
USDA-ARS?s Scientific Manuscript database
We proposed a method to estimate the error variance among non-replicated genotypes, thus to estimate the genetic parameters by using replicated controls. We derived formulas to estimate sampling variances of the genetic parameters. Computer simulation indicated that the proposed methods of estimatin...
Space Shuttle propulsion parameter estimation using optimal estimation techniques, volume 1
NASA Technical Reports Server (NTRS)
1983-01-01
The mathematical developments and their computer program implementation for the Space Shuttle propulsion parameter estimation project are summarized. The estimation approach chosen is extended Kalman filtering with a modified Bryson-Frazier smoother. Its use here is motivated by the objective of obtaining better estimates than those available from filtering alone and of eliminating the lag associated with filtering. The estimation technique uses as the dynamical process the six-degree-of-freedom equations of motion, resulting in twelve state vector elements; in addition, mass and solid propellant burn depth are included as "system" state elements. The "parameter" state elements can include deviations from reference values of the aerodynamic coefficients, inertia, center of gravity, atmospheric wind, etc. Propulsion parameter state elements have been included not merely as options but as the main parameter states to be estimated. The mathematical developments were completed for all these parameters. Since the system dynamics and measurement processes are nonlinear functions of the states, the mathematical developments are taken up almost entirely by the linearization of these equations as required by the estimation algorithms.
[School absenteeism: Preliminary developments and maintaining persisting challenges].
Lenzen, Christoph; Brunner, Romuald; Resch, Franz
2016-01-01
A first step when considering school absenteeism is to understand the meaning and definition of the term. School absenteeism encompasses several terms such as school refusal, truancy and school phobia, all of which have been used inconsistently and confusingly in the past. Furthermore, the question of how many days of absence can be seen as problematic remains unclear. Due to these definitional problems, the available data are inconsistent; therefore, the prevalence of school absenteeism can only be estimated (about 5 % of all students). School absenteeism affects not only individual students, but also family, school and societal structures. In order to establish appropriate support and intervention programs, a multimodal as well as an individual approach should be considered to address this interdependency. The primary goal, however, should be the student's resumption of regular school attendance, which requires strong cooperation between parents, schools, youth welfare services and psychotherapeutic services. If therapeutic interventions are required, it is highly recommended to start with outpatient treatment. If school attendance still remains irregular, inpatient treatment should follow.
NASA Astrophysics Data System (ADS)
Baatz, D.; Kurtz, W.; Hendricks Franssen, H. J.; Vereecken, H.; Kollet, S. J.
2017-12-01
Parameter estimation for physically based, distributed hydrological models becomes increasingly challenging with increasing model complexity. The number of parameters is usually large and the number of observations relatively small, which results in large uncertainties. Catchment tomography presents a moving transmitter-receiver concept for estimating spatially distributed hydrological parameters. In this concept, precipitation, highly variable in time and space, serves as a moving transmitter. As the response to precipitation, runoff and stream discharge are generated along different paths and time scales, depending on surface and subsurface flow properties. Stream water levels are thus an integrated signal of upstream parameters, measured by stream gauges, which serve as the receivers. These stream water level observations are assimilated into a distributed hydrological model, which is forced with high-resolution, radar-based precipitation estimates. Applying a joint state-parameter update with the Ensemble Kalman Filter, the spatially distributed Manning's roughness coefficient and saturated hydraulic conductivity are estimated jointly. The sequential data assimilation continuously integrates new information into the parameter estimation problem, especially during precipitation events; every precipitation event constrains the possible parameter space. In this approach, forward simulations are performed with ParFlow, a variably saturated subsurface and overland flow model. ParFlow is coupled to the Parallel Data Assimilation Framework for the data assimilation and the joint state-parameter update. In synthetic, 3-dimensional experiments including surface and subsurface flow, hydraulic conductivity and the Manning's coefficient are efficiently estimated with the catchment tomography approach.
A joint update of the Manning's coefficient and hydraulic conductivity tends to improve the parameter estimation compared to a single parameter update, especially in cases of biased initial parameter ensembles. The computational experiments additionally show to which degree of spatial heterogeneity and to which degree of uncertainty of subsurface flow parameters the Manning's coefficient and hydraulic conductivity can be estimated efficiently.
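The joint state-parameter update can be sketched on a toy scalar problem, assuming a perfectly linear forward model (all numbers are invented; the study itself couples ParFlow to the Parallel Data Assimilation Framework on a full catchment):

```python
import numpy as np

# Toy ensemble Kalman filter update of one parameter from one observation.
rng = np.random.default_rng(0)
N = 500                                  # ensemble size
r = 0.01                                 # observation error variance
h_obs = 2.0                              # observed stream water level

theta = rng.normal(1.0, 0.5, N)          # biased prior parameter ensemble
h = theta + rng.normal(0.0, 0.1, N)      # simple forward model: level depends on theta

# Kalman gain for the parameter from ensemble cross-covariances
gain = np.cov(theta, h)[0, 1] / (np.var(h, ddof=1) + r)
theta_post = theta + gain * (h_obs + rng.normal(0.0, np.sqrt(r), N) - h)
```

Each assimilated observation pulls the biased parameter ensemble toward values consistent with the data, which is the mechanism by which successive precipitation events progressively constrain the parameter space.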
Consistency problems associated to the improvement of precession-nutation theories
NASA Astrophysics Data System (ADS)
Ferrandiz, J. M.; Escapa, A.; Baenas, T.; Getino, J.; Navarro, J. F.; Belda, S.
2014-12-01
The complexity of modelling the rotational motion of the Earth in space has meant that no single theory has been adopted to describe it in full. Hence, it is customary to use at least one theory for precession and another for nutation. The classic approach proceeds by deriving some of the fundamental parameters from the precession theory at hand, e.g. the dynamical ellipticity H, and then using those values in the nutation theory. The former IAU1976 precession and IAU1980 nutation theories followed that scheme. Along with the improvement of the accuracy of the determination of EOP (Earth orientation parameters), IAU1980 was superseded by IAU2000, based on the application of the MHB2000 (Mathews et al. 2002) transfer function to the previous rigid-Earth analytical theory REN2000 (Souchay et al. 1999). The latter was derived while the precession model IAU1976 was still in force; it therefore used the corresponding values for some of the fundamental parameters, such as the precession rate, associated with the dynamical ellipticity, and the obliquity of the ecliptic at the reference epoch. The new precession model P03 was adopted as IAU2006. That change introduced some inconsistency, since P03 used different values for some of the fundamental parameters that MHB2000 inherited from REN2000. Besides, the derivation of the basic Earth parameters of MHB2000 itself comprised a fitted variation of the dynamical ellipticity adopted in the background rigid theory. Due to the strict accuracy requirements of the present and coming times, the magnitude of the inconsistencies originating from this two-fold approach is no longer negligible. Some corrections have been proposed by Capitaine et al. (2005) and Escapa et al. (2014) in order to reach a better level of consistency between precession and nutation theories and parameters.
In this presentation we revisit the problem taking into account some of the advances in precession theory not accounted for yet, stemming from the non-rigid nature of the Earth. Special attention is paid to the assessment of the level of consistency between the current IAU precession and nutation models and its impact on the adopted reference values. We suggest potential corrections and possibilities to incorporate theoretical advances and improve accuracy while being compliant with IAU resolutions.
Inconsistency of residents' communication performance in challenging consultations.
Wouda, Jan C; van de Wiel, Harry B M
2013-12-01
Communication performance inconsistency between consultations is usually regarded as a measurement error that jeopardizes the reliability of assessments. However, inconsistency is an important phenomenon, since it indicates that physicians' communication may be below standard in some consultations. Fifty residents performed two challenging consultations. Residents' communication competency was assessed with the CELI instrument. Residents' background in communication skills training (CST) was also established. We used multilevel analysis to explore communication performance inconsistency between the two consultations. We also established the relationships between inconsistency and average performance quality, the type of consultation, and CST background. Inconsistency accounted for 45.5% of variance in residents' communication performance. Inconsistency was dependent on the type of consultation. The effect of CST background training on performance quality was case specific. Inconsistency and average performance quality were related for those consultation combinations dissimilar in goals, structure, and required skills. CST background had no effect on inconsistency. Physician communication performance should be of high quality, but also consistent regardless of the type and complexity of the consultation. In order to improve performance quality and reduce performance inconsistency, communication education should offer ample opportunities to practice a wide variety of challenging consultations. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Quantifying Transmission Heterogeneity Using Both Pathogen Phylogenies and Incidence Time Series
Li, Lucy M.; Grassly, Nicholas C.; Fraser, Christophe
2017-01-01
Abstract Heterogeneity in individual-level transmissibility can be quantified by the dispersion parameter k of the offspring distribution. Quantifying heterogeneity is important as it affects other parameter estimates, it modulates the degree of unpredictability of an epidemic, and it needs to be accounted for in models of infection control. Aggregated data such as incidence time series are often not sufficiently informative to estimate k. Incorporating phylogenetic analysis can help to estimate k concurrently with other epidemiological parameters. We have developed an inference framework that uses particle Markov Chain Monte Carlo to estimate k and other epidemiological parameters using both incidence time series and the pathogen phylogeny. Using the framework to fit a modified compartmental transmission model that includes the parameter k to simulated data, we found that more accurate and less biased estimates of the reproductive number were obtained by combining epidemiological and phylogenetic analyses. However, k was most accurately estimated using pathogen phylogeny alone. Accurately estimating k was necessary for unbiased estimates of the reproductive number, but it did not affect the accuracy of reporting probability and epidemic start date estimates. We further demonstrated that inference was possible in the presence of phylogenetic uncertainty by sampling from the posterior distribution of phylogenies. Finally, we used the inference framework to estimate transmission parameters from epidemiological and genetic data collected during a poliovirus outbreak. Despite the large degree of phylogenetic uncertainty, we demonstrated that incorporating phylogenetic data in parameter inference improved the accuracy and precision of estimates. PMID:28981709
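The dispersion parameter k is conventionally defined through a negative binomial offspring distribution with mean R0 and variance R0 + R0^2/k; a small simulation sketch recovers k by the method of moments (values are illustrative, not from the poliovirus data, and the moment estimator stands in for the paper's particle MCMC):

```python
import numpy as np

# Negative binomial offspring distribution with mean R0 and dispersion k
# (variance R0 + R0^2 / k); small k indicates superspreading.
rng = np.random.default_rng(1)
R0, k = 2.0, 0.5

# NumPy's parameterization: n = k, p = k / (k + R0), so the mean is R0
offspring = rng.negative_binomial(n=k, p=k / (k + R0), size=100_000)

# Method-of-moments recovery: var = R0 + R0^2/k  =>  k_hat = m^2 / (v - m)
m, v = offspring.mean(), offspring.var(ddof=1)
k_hat = m * m / (v - m)
```

In practice k is estimated by maximum likelihood or, as in the paper, jointly with the other epidemiological parameters; the moment estimate above only illustrates why count data carry information about k.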
Psychology of developing and designing expert systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tonn, B.; MacGregor, D.
This paper discusses psychological problems relevant to developing and designing expert systems. With respect to the former, the psychological literature suggests that several cognitive biases may affect the elicitation of a valid knowledge base from the expert. The literature also suggests that common expert system inference engines may be quite inconsistent with reasoning heuristics employed by experts. With respect to expert system user interfaces, care should be taken when eliciting uncertainty estimates from users, presenting system conclusions, and ordering questions.
Effects of control inputs on the estimation of stability and control parameters of a light airplane
NASA Technical Reports Server (NTRS)
Cannaday, R. L.; Suit, W. T.
1977-01-01
The maximum likelihood parameter estimation technique was used to determine the values of stability and control derivatives from flight test data for a low-wing, single-engine, light airplane. Several input forms were used during the tests to investigate the consistency of parameter estimates as it relates to inputs. These consistencies were compared by using the ensemble variance and estimated Cramer-Rao lower bound. In addition, the relationship between inputs and parameter correlations was investigated. Results from the stabilator inputs are inconclusive but the sequence of rudder input followed by aileron input or aileron followed by rudder gave more consistent estimates than did rudder or ailerons individually. Also, square-wave inputs appeared to provide slightly improved consistency in the parameter estimates when compared to sine-wave inputs.
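The connection between input form and estimate consistency can be made concrete with a one-parameter toy model for which the Cramer-Rao lower bound has a closed form (a hypothetical sketch, not the aircraft derivative model or the maximum likelihood machinery of the paper):

```python
import numpy as np

# For y_t = theta * u_t + e_t with white noise of variance sigma^2, the
# Cramer-Rao lower bound on var(theta_hat) is sigma^2 / sum(u_t^2), so a
# square-wave input (larger sum of squares) permits tighter estimates
# than a sine of the same amplitude.
rng = np.random.default_rng(2)
theta, sigma, T = 1.5, 0.2, 200
t = np.linspace(0.0, 4.0 * np.pi, T)
inputs = {"sine": np.sin(t), "square": np.sign(np.sin(t))}

results = {}
for name, u in inputs.items():
    crlb = sigma**2 / (u @ u)
    # Least-squares estimate of theta over a Monte Carlo ensemble of noise draws
    est = [(u @ (theta * u + rng.normal(0.0, sigma, T))) / (u @ u) for _ in range(2000)]
    results[name] = (crlb, float(np.var(est)))
```

The ensemble variance of the least-squares estimate tracks the bound for each input, mirroring the paper's comparison of ensemble variance against the estimated Cramer-Rao lower bound.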
NASA Technical Reports Server (NTRS)
Morris, A. Terry
1999-01-01
This paper examines various sources of error in MIT's improved top oil temperature rise over ambient temperature model and estimation process. The sources of error are the current parameter estimation technique, quantization noise, and post-processing of the transformer data. Results from this paper will show that an output error parameter estimation technique should be selected to replace the current least squares estimation technique. The output error technique obtained accurate predictions of transformer behavior, revealed the best error covariance, obtained consistent parameter estimates, and provided for valid and sensible parameters. This paper will also show that the output error technique should be used to minimize errors attributed to post-processing (decimation) of the transformer data. Models used in this paper are validated using data from a large transformer in service.
Coley, Rebecca Yates; Brown, Elizabeth R.
2016-01-01
Inconsistent results in recent HIV prevention trials of pre-exposure prophylactic interventions may be due to heterogeneity in risk among study participants. Intervention effectiveness is most commonly estimated with the Cox model, which compares event times between populations. When heterogeneity is present, this population-level measure underestimates intervention effectiveness for individuals who are at risk. We propose a likelihood-based Bayesian hierarchical model that estimates the individual-level effectiveness of candidate interventions by accounting for heterogeneity in risk with a compound Poisson-distributed frailty term. This model reflects the mechanisms of HIV risk and allows that some participants are not exposed to HIV and, therefore, have no risk of seroconversion during the study. We assess model performance via simulation and apply the model to data from an HIV prevention trial. PMID:26869051
Robust Mosaicking of Stereo Digital Elevation Models from the Ames Stereo Pipeline
NASA Technical Reports Server (NTRS)
Kim, Tae Min; Moratto, Zachary M.; Nefian, Ara Victor
2010-01-01
A robust estimation method is proposed to combine multiple observations and create consistent, accurate, dense Digital Elevation Models (DEMs) from lunar orbital imagery. The NASA Ames Intelligent Robotics Group (IRG) aims to produce higher-quality terrain reconstructions of the Moon from Apollo Metric Camera (AMC) data than is currently possible. In particular, IRG makes use of a stereo vision process, the Ames Stereo Pipeline (ASP), to automatically generate DEMs from consecutive AMC image pairs. However, the DEMs currently produced by the ASP often contain errors and inconsistencies due to image noise, shadows, etc. The proposed method addresses this problem by making use of multiple observations and by considering their goodness of fit to improve both the accuracy and robustness of the estimate. The stepwise regression method is applied to estimate the relaxed weight of each observation.
Generalized sensitivity analysis of the minimal model of the intravenous glucose tolerance test.
Munir, Mohammad
2018-06-01
Generalized sensitivity functions characterize the sensitivity of the parameter estimates with respect to the nominal parameters. We observe from the generalized sensitivity analysis of the minimal model of the intravenous glucose tolerance test that the measurements of insulin, 62 min after the administration of the glucose bolus into the experimental subject's body, possess no information about the parameter estimates. The glucose measurements possess the information about the parameter estimates up to three hours. These observations have been verified by the parameter estimation of the minimal model. The standard errors of the estimates and crude Monte Carlo process also confirm this observation. Copyright © 2018 Elsevier Inc. All rights reserved.
Liang, Yuzhen; Torralba-Sanchez, Tifany L; Di Toro, Dominic M
2018-04-18
Polyparameter Linear Free Energy Relationships (pp-LFERs) using Abraham system parameters have many useful applications. However, developing the Abraham system parameters depends on the availability and quality of the Abraham solute parameters. Using Quantum Chemically estimated Abraham solute Parameters (QCAP) is shown to produce pp-LFERs that have lower root mean square errors (RMSEs) of predictions for solvent-water partition coefficients than parameters that are estimated using other presently available methods. pp-LFERs system parameters are estimated for solvent-water, plant cuticle-water systems, and for novel compounds using QCAP solute parameters and experimental partition coefficients. Refitting the system parameter improves the calculation accuracy and eliminates the bias. Refitted models for solvent-water partition coefficients using QCAP solute parameters give better results (RMSE = 0.278 to 0.506 log units for 24 systems) than those based on ABSOLV (0.326 to 0.618) and QSPR (0.294 to 0.700) solute parameters. For munition constituents and munition-like compounds not included in the calibration of the refitted model, QCAP solute parameters produce pp-LFER models with much lower RMSEs for solvent-water partition coefficients (RMSE = 0.734 and 0.664 for original and refitted model, respectively) than ABSOLV (4.46 and 5.98) and QSPR (2.838 and 2.723). Refitting plant cuticle-water pp-LFER including munition constituents using QCAP solute parameters also results in lower RMSE (RMSE = 0.386) than that using ABSOLV (0.778) and QSPR (0.512) solute parameters. Therefore, for fitting a model in situations for which experimental data exist and system parameters can be re-estimated, or for which system parameters do not exist and need to be developed, QCAP is the quantum chemical method of choice.
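The Abraham pp-LFER underlying the abstract has the standard form log K = c + eE + sS + aA + bB + vV; a minimal sketch with approximate literature-style octanol-water values for illustration (none of these numbers are the paper's QCAP or refitted parameters):

```python
import numpy as np

# Abraham pp-LFER: log K = c + e*E + s*S + a*A + b*B + v*V, with solute
# descriptors (E, S, A, B, V) and fitted system parameters (c, e, s, a, b, v).
def pp_lfer(system, solute):
    c, e, s, a, b, v = system
    E, S, A, B, V = solute
    return c + e * E + s * S + a * A + b * B + v * V

def rmse(pred, obs):
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    return float(np.sqrt(np.mean((pred - obs) ** 2)))

# Approximate literature-style values (illustrative only):
octanol_water = (0.09, 0.56, -1.05, 0.03, -3.46, 3.81)  # system parameters
benzene = (0.61, 0.52, 0.00, 0.14, 0.716)               # solute descriptors
log_kow = pp_lfer(octanol_water, benzene)               # roughly 2.1
```

Refitting a system means re-estimating (c, e, s, a, b, v) against experimental partition coefficients while holding the solute descriptors fixed, which is how the paper reduces RMSE and removes bias.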
NASA Technical Reports Server (NTRS)
Easterbrook, Steve
1996-01-01
This position paper argues that inconsistencies that occur during the development of a software specification offer an excellent way of learning more about the development process. We base this argument on our work on inconsistency management. Much attention has been devoted recently to the need to allow inconsistencies to occur during software development, to facilitate flexible development strategies, especially for collaborative work. Recent work has concentrated on reasoning in the presence of inconsistency, tracing inconsistencies with 'pollution markers' and supporting resolution. We argue here that one of the most important aspects of inconsistency is the learning opportunity it provides. We are therefore concerned with how to capture this learning outcome so that its significance is not lost. We present a small example of how apprentice software engineers learn from their mistakes, and outline how an inconsistency management tool could support this learning. We then argue that the approach can be used more generally as part of continuous process improvement.
Uncertainties in the Item Parameter Estimates and Robust Automated Test Assembly
ERIC Educational Resources Information Center
Veldkamp, Bernard P.; Matteucci, Mariagiulia; de Jong, Martijn G.
2013-01-01
Item response theory parameters have to be estimated, and because of the estimation process, they do have uncertainty in them. In most large-scale testing programs, the parameters are stored in item banks, and automated test assembly algorithms are applied to assemble operational test forms. These algorithms treat item parameters as fixed values,…
Karr, Jonathan R; Williams, Alex H; Zucker, Jeremy D; Raue, Andreas; Steiert, Bernhard; Timmer, Jens; Kreutz, Clemens; Wilkinson, Simon; Allgood, Brandon A; Bot, Brian M; Hoff, Bruce R; Kellen, Michael R; Covert, Markus W; Stolovitzky, Gustavo A; Meyer, Pablo
2015-05-01
Whole-cell models that explicitly represent all cellular components at the molecular level have the potential to predict phenotype from genotype. However, even for simple bacteria, whole-cell models will contain thousands of parameters, many of which are poorly characterized or unknown. New algorithms are needed to estimate these parameters and enable researchers to build increasingly comprehensive models. We organized the Dialogue for Reverse Engineering Assessments and Methods (DREAM) 8 Whole-Cell Parameter Estimation Challenge to develop new parameter estimation algorithms for whole-cell models. We asked participants to identify a subset of parameters of a whole-cell model given the model's structure and in silico "experimental" data. Here we describe the challenge, the best performing methods, and new insights into the identifiability of whole-cell models. We also describe several valuable lessons we learned toward improving future challenges. Going forward, we believe that collaborative efforts supported by inexpensive cloud computing have the potential to solve whole-cell model parameter estimation.
Adaptive Parameter Estimation of Person Recognition Model in a Stochastic Human Tracking Process
NASA Astrophysics Data System (ADS)
Nakanishi, W.; Fuse, T.; Ishikawa, T.
2015-05-01
This paper aims at estimating the parameters of person recognition models using a sequential Bayesian filtering method. In many human tracking methods, the parameters of the models used to recognize the same person in successive frames are set in advance of the tracking process. In real situations these parameters may change according to the observation conditions and the difficulty of predicting a person's position. Thus, in this paper we formulate an adaptive parameter estimation using a general state space model. First we explain how to formulate human tracking in a general state space model and describe its components. Then, referring to previous research, we use the Bhattacharyya coefficient to formulate the observation model of the general state space model, which corresponds to the person recognition model. The observation model in this paper is a function of the Bhattacharyya coefficient with one unknown parameter. Finally, we sequentially estimate this parameter on a real dataset under several settings. Results showed that the sequential parameter estimation succeeded and that the estimates were consistent with observation situations such as occlusions.
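The Bhattacharyya coefficient used in the observation model measures overlap between two normalized histograms; a generic sketch (the paper wraps this coefficient in a likelihood with one unknown parameter, which is not reproduced here):

```python
import numpy as np

# Bhattacharyya coefficient between two histograms: BC = sum_i sqrt(p_i * q_i)
# after normalization; 1 for identical distributions, 0 for disjoint support.
def bhattacharyya(p, q):
    p = np.asarray(p, float)
    q = np.asarray(q, float)
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(np.sqrt(p * q)))
```

In appearance-based tracking the coefficient is typically computed between the color histogram of a candidate region and a reference template, so the same person in successive frames yields values near 1.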
Tube-Load Model Parameter Estimation for Monitoring Arterial Hemodynamics
Zhang, Guanqun; Hahn, Jin-Oh; Mukkamala, Ramakrishna
2011-01-01
A useful model of the arterial system is the uniform, lossless tube with parametric load. This tube-load model is able to account for wave propagation and reflection (unlike lumped-parameter models such as the Windkessel) while being defined by only a few parameters (unlike comprehensive distributed-parameter models). As a result, the parameters may be readily estimated by accurate fitting of the model to available arterial pressure and flow waveforms so as to permit improved monitoring of arterial hemodynamics. In this paper, we review tube-load model parameter estimation techniques that have appeared in the literature for monitoring wave reflection, large artery compliance, pulse transit time, and central aortic pressure. We begin by motivating the use of the tube-load model for parameter estimation. We then describe the tube-load model, its assumptions and validity, and approaches for estimating its parameters. We next summarize the various techniques and their experimental results while highlighting their advantages over conventional techniques. We conclude the review by suggesting future research directions and describing potential applications. PMID:22053157
Optimized tuner selection for engine performance estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L. (Inventor); Garg, Sanjay (Inventor)
2013-01-01
A methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. Theoretical Kalman filter estimation error bias and variance values are derived at steady-state operating conditions, and the tuner selection routine is applied to minimize these values. The new methodology yields an improvement in on-line engine performance estimation accuracy.
Revised budget for the oceanic uptake of anthropogenic carbon dioxide
Sarmiento, J.L.; Sundquist, E.T.
1992-01-01
TRACER-CALIBRATED models of the total uptake of anthropogenic CO2 by the world's oceans give estimates of about 2 gigatonnes of carbon per year, significantly larger than a recent estimate of 0.3-0.8 Gt C yr⁻¹ for the synoptic air-to-sea CO2 influx. Although both estimates require that the global CO2 budget be balanced by a large unknown terrestrial sink, the latter estimate implies a much larger terrestrial sink, and challenges the ocean model calculations on which previous CO2 budgets were based. The discrepancy is due in part to the net flux of carbon to the ocean by rivers and rain, which must be added to the synoptic air-to-sea CO2 flux to obtain the total oceanic uptake of anthropogenic CO2. Here we estimate the magnitude of this correction and of several other recently proposed adjustments to the synoptic air-sea CO2 exchange. These combined adjustments minimize the apparent inconsistency, and restore estimates of the terrestrial sink to values implied by the modelled oceanic uptake.
Standard Errors of Estimated Latent Variable Scores with Estimated Structural Parameters
ERIC Educational Resources Information Center
Hoshino, Takahiro; Shigemasu, Kazuo
2008-01-01
The authors propose a concise formula to evaluate the standard error of the estimated latent variable score when the true values of the structural parameters are not known and must be estimated. The formula can be applied to factor scores in factor analysis or ability parameters in item response theory, without bootstrap or Markov chain Monte…
NASA Astrophysics Data System (ADS)
Mushtak, V. C.; Williams, E. R.
2011-12-01
Among the palette of methods (satellite, VLF, ELF) for monitoring global lightning activity, observations of the background Schumann resonances (SR) provide a unique prospect for estimating the integrated global lightning activity in absolute units (C² km²/s). This prospect is ensured by the low attenuation of SR waves, whose wavelengths are commensurate with the dimensions of the dominant regional lightning "chimneys", and by the accumulating methodology for background SR techniques. Another benefit is the reduction of SR measurements into a compact set of resonance characteristics (modal frequencies, intensities, and quality factors). Suggested and tested in numerical simulations by T.R. Madden in the 1960s, the idea of inverting the SR characteristics for the global lightning source has been further developed, statistically substantiated, and practically realized here on the basis of computing power and a quantity of experimental material far beyond what the SR pioneers had at their disposal. The critical issue of the quality of the input SR parameters is addressed by implementing a statistically substantiated sanitizing procedure to discard fragments of the observed time series containing unrepresentative elements - local interference of various origins and strong ELF transients originating outside the major "chimneys" represented in the source model. As a result of preliminary research, a universal empirical sanitizing criterion has been established. Because the actual observations have been collected from a set of individually organized ELF stations with various equipment sets and calibration techniques, relative parameters in both input (the intensities) and output (the "chimney" activities) are used as far as possible in the inversion process to avoid instabilities caused by calibration inconsistencies.
The absolute regional activities - and hence the sought-for global activity in absolute units - are determined in the final stage from the estimated positions and relative activities of the modeled "chimneys", using SR power spectra at the stations with the most reliable calibrations. Additional stabilization in the procedure has been achieved by exploiting the Le Come/Goltzman inversion algorithm, which uses the empirically estimated statistical characteristics of the input parameters. When applied to electric and/or magnetic observations collected simultaneously in January 2009 from six ELF stations in Poland (Belsk), Japan (Moshiri), Hungary (Nagycenk), USA (Rhode Island), India (Shillong), and Antarctica (Syowa), the inversion procedure reveals a general repeatability of diurnal lightning scenarios, with variations of "chimney" centroid locations of a few megameters, while the estimated regional activity has been found to vary from day to day by up to several tens of percent. A combined empirical-theoretical analysis of the collected data, aimed at selecting the most reliably calibrated ELF stations, is presently in progress. Every effort is being made to transform the relative lightning activity into absolute units by the time of this meeting. The authors are grateful to all the experimentalists who generously provided their observations and related information for this study.
Foster, A.L.; Munk, L.; Koski, R.A.; Shanks, Wayne C.; Stillings, L.L.
2008-01-01
The relations among geochemical parameters and sediment microbial communities were examined at three shoreline sites in the Prince William Sound, Alaska, which display varying degrees of impact by acid-rock drainage (ARD) associated with historic mining of volcanogenic massive sulfide deposits. Microbial communities were examined using total fatty acid methyl esters (FAMEs), a class of compounds derived from lipids produced by eukaryotes and prokaryotes (bacteria and Archaea); standard extraction techniques detect FAMEs from both living (viable) and dead (non-viable) biomass, but do not detect Archaeal FAMEs. Biomass and diversity (as estimated by FAMEs) varied strongly as a function of position in the tidal zone, not by study site; subtidal muds, Fe oxyhydroxide undergoing biogenic reductive dissolution, and peat-rich intertidal sediment had the highest values. These estimates were lowest in acid-generating, intertidal zone sediment; if valid, the estimates suggest that only one or two bacterial species predominate in these communities, and/or that Archaeal species are important members of the microbial community in this sediment. All samples were dominated by bacterial FAMEs (median value >90%). Samples with the highest absolute abundance of eukaryotic FAMEs were biogenic Fe oxyhydroxides from shallow freshwater pools (fungi) and subtidal muds (diatoms). Eukaryotic FAMEs were practically absent from low-pH, sulfide-rich intertidal zone sediments. The relative abundance of general microbial functional groups such as aerobes/anaerobes and gram(+)/gram(-) was not estimated due to severe inconsistency among the results obtained using several metrics reported in the literature. Principal component analyses (PCAs) were performed to investigate the relationship among samples as separate functions of water, sediment, and FAMEs data.
PCAs based on water chemistry and FAMEs data resulted in similar relations among samples, whereas the PCA based on sediment chemistry produced a very different sample arrangement. Specifically, the sediment parameter PCA grouped samples with high bulk trace metal concentrations regardless of whether the metals were incorporated into secondary precipitates or primary sulfides. The water chemistry PCA and FAMEs PCA appear to be less prone to this type of artifact. Signature lipids in sulfide-rich sediments could indicate the presence of acid-tolerant and/or acidophilic members of the genus Thiobacillus, or they could indicate the presence of SO4-reducing bacteria (SRB). The microbial community documented in subtidal and offshore sediments is rich in SRB and/or facultative anaerobes of the Cytophaga-Flavobacterium group; both could reasonably be expected in PWS coastal environments. The results of this study provide evidence for substantial feedback between local (meter- to centimeter-scale) geochemical variations and sediment microbial community composition, and show that microbial community signatures in the intertidal zone are significantly altered at sites where ARD is present relative to sites where it is not, even if the sediment geochemistry indicates net accumulation of ARD-generated trace metals in the intertidal zone. © 2007 Elsevier Ltd. All rights reserved.
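Principal component analysis of the kind used above can be sketched with a plain SVD. The data table below is entirely hypothetical: it stands in for a samples-by-parameters water-chemistry matrix, with an artificial shift in half the rows playing the role of ARD-impacted sites.

```python
import numpy as np

rng = np.random.default_rng(8)

# Hypothetical water-chemistry table: 12 samples x 5 measured parameters.
samples = rng.normal(size=(12, 5))
samples[:6] += np.array([3.0, 3.0, 0.0, 0.0, 0.0])  # e.g. ARD-shifted sites

# PCA via SVD: center each parameter, then decompose.
X = samples - samples.mean(axis=0)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
scores = X @ Vt[:2].T                 # sample positions on the first two PCs
explained = s ** 2 / np.sum(s ** 2)   # fraction of variance per component
print("variance explained by PC1, PC2:", np.round(explained[:2], 2))
```

Running separate PCAs on water, sediment, and FAMEs tables, as the study does, amounts to repeating this decomposition on each data type and comparing the resulting sample arrangements.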
Impacts of different types of measurements on estimating unsaturated flow parameters
NASA Astrophysics Data System (ADS)
Shi, Liangsheng; Song, Xuehang; Tong, Juxiu; Zhu, Yan; Zhang, Qiuru
2015-05-01
This paper assesses the value of different types of measurements for estimating soil hydraulic parameters. A numerical method based on the ensemble Kalman filter (EnKF) is presented to solely or jointly assimilate point-scale soil water head data, point-scale soil water content data, surface soil water content data and groundwater level data. This study investigates the performance of the EnKF under different types of data, the potential worth contained in these data, and the factors that may affect estimation accuracy. Results show that for all types of data, smaller measurement errors lead to faster convergence to the true values. Higher accuracy measurements are required to improve the parameter estimation if a large number of unknown parameters need to be identified simultaneously. The data worth implied by the surface soil water content data and groundwater level data is prone to corruption by a deviated initial guess. Surface soil moisture data are capable of identifying soil hydraulic parameters for the top layers, but exert less or no influence on deeper layers, especially when estimating multiple parameters simultaneously. Groundwater level is one type of valuable information for inferring the soil hydraulic parameters. However, based on the approach used in this study, the estimates from groundwater level data may suffer severe degradation if a large number of parameters must be identified. Combined use of two or more types of data is helpful to improve the parameter estimation.
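A minimal sketch of EnKF parameter estimation in the spirit of the study, using an invented one-variable surrogate in place of a real unsaturated-flow model: "soil moisture" relaxes toward a value set by an unknown hydraulic parameter, and an augmented ensemble of (state, parameter) pairs is updated with perturbed observations. The dynamics, noise levels, ensemble size, and the deliberately deviated initial guess are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented surrogate: moisture theta relaxes toward the unknown parameter k.
k_true, theta0, dt = 0.35, 0.10, 0.5
def step(theta, k):
    return theta + dt * (k - theta)    # stand-in for the flow physics

n_ens, R = 200, 1e-4                   # ensemble size, observation error variance
ens_k = rng.normal(0.2, 0.1, n_ens)    # deviated initial parameter guess
ens_th = np.full(n_ens, theta0)
theta = theta0

for _ in range(50):
    theta = step(theta, k_true)                      # "truth" run
    y = theta + rng.normal(0.0, np.sqrt(R))          # noisy point-scale observation
    ens_th = step(ens_th, ens_k)                     # forecast each member
    # Analysis: Kalman-like update from ensemble covariances.
    C_thth = np.var(ens_th, ddof=1)
    C_kth = np.cov(ens_k, ens_th, ddof=1)[0, 1]
    innov = y + rng.normal(0.0, np.sqrt(R), n_ens) - ens_th  # perturbed observations
    ens_k += C_kth / (C_thth + R) * innov
    ens_th += C_thth / (C_thth + R) * innov

print("estimated k:", round(float(ens_k.mean()), 3), " true k:", k_true)
```

The parameter is corrected purely through its ensemble covariance with the observed state, which is the mechanism that lets the study assimilate heads, water contents, and groundwater levels jointly.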
Accuracy Estimation and Parameter Advising for Protein Multiple Sequence Alignment
DeBlasio, Dan
2013-01-01
We develop a novel and general approach to estimating the accuracy of multiple sequence alignments without knowledge of a reference alignment, and use our approach to address a new task that we call parameter advising: the problem of choosing values for alignment scoring function parameters from a given set of choices to maximize the accuracy of a computed alignment. For protein alignments, we consider twelve independent features that contribute to a quality alignment. An accuracy estimator is learned that is a polynomial function of these features; its coefficients are determined by minimizing its error with respect to true accuracy using mathematical optimization. Compared to prior approaches for estimating accuracy, our new approach (a) introduces novel feature functions that measure nonlocal properties of an alignment yet are fast to evaluate, (b) considers more general classes of estimators beyond linear combinations of features, and (c) develops new regression formulations for learning an estimator from examples; in addition, for parameter advising, we (d) determine the optimal parameter set of a given cardinality, which specifies the best parameter values from which to choose. Our estimator, which we call Facet (for “feature-based accuracy estimator”), yields a parameter advisor that on the hardest benchmarks provides more than a 27% improvement in accuracy over the best default parameter choice, and for parameter advising significantly outperforms the best prior approaches to assessing alignment quality. PMID:23489379
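The feature-based estimator and the advising step can be illustrated schematically. Everything below is a synthetic stand-in: four made-up feature values per alignment instead of Facet's twelve, a fabricated "true accuracy" to fit against, and hypothetical candidate parameter settings.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic training data: each "alignment" is summarized by feature values
# in [0, 1]; true accuracy is an unknown function of the features.
n_align, n_feat = 300, 4
X = rng.uniform(size=(n_align, n_feat))
true_acc = (0.5 * X[:, 0] + 0.3 * X[:, 1] ** 2 + 0.2 * X[:, 2]
            + rng.normal(0, 0.02, n_align))

# Degree-2 polynomial estimator (intercept + linear + squared terms),
# fit by least squares against the true accuracies.
Phi = np.hstack([np.ones((n_align, 1)), X, X ** 2])
coef, *_ = np.linalg.lstsq(Phi, true_acc, rcond=None)

def estimate_accuracy(features):
    f = np.asarray(features, dtype=float)
    return float(np.concatenate([[1.0], f, f ** 2]) @ coef)

# Parameter advising: among candidate settings, keep the one whose resulting
# alignment the estimator scores highest (feature vectors are hypothetical).
candidates = {"params_A": [0.9, 0.8, 0.7, 0.1],
              "params_B": [0.2, 0.3, 0.9, 0.9],
              "params_C": [0.6, 0.6, 0.5, 0.5]}
best = max(candidates, key=lambda k: estimate_accuracy(candidates[k]))
print("advised parameter choice:", best)
```

The real Facet work additionally optimizes which parameter sets to include in the candidate list (point (d) above); this sketch only shows the scoring-and-picking step.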
Knopman, Debra S.; Voss, Clifford I.
1988-01-01
Sensitivities of solute concentration to parameters associated with first-order chemical decay, boundary conditions, initial conditions, and multilayer transport are examined in one-dimensional analytical models of transient solute transport in porous media. A sensitivity is a change in solute concentration resulting from a change in a model parameter. Sensitivity analysis is important because the minimum information required for estimating model parameters by regression on chemical data is expressed in terms of sensitivities. Nonlinear regression models of solute transport were tested on sets of noiseless observations from known models that exceeded the minimum sensitivity information requirements. Results demonstrate that the regression models consistently converged to the correct parameters even when the initial sets of parameter values substantially deviated from the correct parameters. On the basis of the sensitivity analysis, several statements may be made about the design of sampling for parameter estimation for the models examined: (1) estimation of parameters associated with solute transport in the individual layers of a multilayer system is possible even when solute concentrations in the individual layers are mixed in an observation well; (2) when estimating parameters in a decaying upstream boundary condition, observations are best made late in the passage of the front, near a time chosen by adding the inverse of a hypothesized value of the source decay parameter to the estimated mean travel time at a given downstream location; (3) estimation of a first-order chemical decay parameter requires observations to be made late in the passage of the front, preferably near a location corresponding to a travel time of √2 times the half-life of the solute; and (4) estimation of a parameter relating to spatial variability in an initial condition requires observations to be made early in time relative to passage of the solute front.
Wittayachamnankul, Borwon; Chentanakij, Boriboon; Sruamsiri, Kamphee; Chattipakorn, Nipon
2016-12-01
The current practice in treatment of severe sepsis and septic shock is to ensure adequate oxygenation and perfusion in patients, along with prompt administration of antibiotics, within 6 hours from diagnosis, which is considered the "golden hour" for the patients. One of the goals of treatment is to restore normal tissue perfusion. With this goal in mind, some parameters have been used to determine the success of treatment and mortality rate; however, none has been proven to be the best predictor of mortality rate in sepsis patients. Despite growing evidence regarding the prognostic indicators for mortality in sepsis patients, inconsistent reports exist. This review comprehensively summarizes the reports regarding the frequently used parameters in sepsis, including central venous oxygen saturation, blood lactate, and central venous-to-arterial carbon dioxide partial pressure difference, as prognostic indicators for clinical outcomes in sepsis patients. Moreover, consistent findings and inconsistent reports for their pathophysiology and the potential mechanisms for their use, as well as their limitations in sepsis patients, are presented and discussed. Finally, a schematic strategy for potential management and benefits in sepsis patients is proposed based upon these current available data. There is currently no ideal biomarker that can indicate prognosis, predict progression of the disease, and guide treatment in sepsis. Further studies are needed to identify the ideal biomarker that has all the desired properties. Copyright © 2016 Elsevier Inc. All rights reserved.
Parameter Estimation of Partial Differential Equation Models.
Xun, Xiaolei; Cao, Jiguo; Mallick, Bani; Carroll, Raymond J; Maity, Arnab
2013-01-01
Partial differential equation (PDE) models are commonly used to model complex dynamic systems in applied sciences such as biology and finance. The forms of these PDE models are usually proposed by experts based on their prior knowledge and understanding of the dynamic system. Parameters in PDE models often have interesting scientific interpretations, but their values are often unknown and need to be estimated from measurements of the dynamic system in the presence of measurement errors. Most PDEs used in practice have no analytic solutions, and can only be solved with numerical methods. Currently, methods for estimating PDE parameters require repeatedly solving PDEs numerically under thousands of candidate parameter values, and thus the computational load is high. In this article, we propose two methods to estimate parameters in PDE models: a parameter cascading method and a Bayesian approach. In both methods, the underlying dynamic process modeled with the PDE model is represented via basis function expansion. For the parameter cascading method, we develop two nested levels of optimization to estimate the PDE parameters. For the Bayesian method, we develop a joint model for data and the PDE, and develop a novel hierarchical model allowing us to employ Markov chain Monte Carlo (MCMC) techniques to make posterior inference. Simulation studies show that the Bayesian method and parameter cascading method are comparable, and both outperform other available methods in terms of estimation accuracy. The two methods are demonstrated by estimating parameters in a PDE model from LIDAR data.
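The costly baseline the abstract describes (re-solving the PDE numerically for many candidate parameter values) can be shown on a toy problem. This is a sketch of that baseline, not of the paper's parameter cascading or Bayesian methods; the heat equation, grid, noise level, and true diffusion coefficient are all invented.

```python
import numpy as np

rng = np.random.default_rng(3)

# Heat equation u_t = D * u_xx on [0, 1] with zero boundary values,
# solved by an explicit finite-difference scheme; D is to be estimated.
nx, dx, dt, nt = 21, 0.05, 0.0005, 200   # D*dt/dx^2 <= 0.3 here: stable
x = np.linspace(0.0, 1.0, nx)

def solve(D):
    u = np.sin(np.pi * x)
    for _ in range(nt):
        u[1:-1] = u[1:-1] + D * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
    return u

# Synthetic "measurements": the solution at the true D plus noise.
D_true = 0.8
data = solve(D_true) + rng.normal(0.0, 0.01, nx)

# Naive estimation: re-solve the PDE over a grid of candidate D values
# and keep the least-squares minimizer.
grid = np.linspace(0.1, 1.5, 141)
sse = [np.sum((solve(D) - data) ** 2) for D in grid]
D_hat = grid[int(np.argmin(sse))]
print("estimated D:", round(float(D_hat), 2))
```

Each candidate value costs a full PDE solve, which is exactly the computational burden the basis-expansion methods in the paper are designed to avoid.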
Reconstructing the hidden states in time course data of stochastic models.
Zimmer, Christoph
2015-11-01
Parameter estimation is central to analyzing models in systems biology. The relevance of stochastic modeling in the field is increasing, and so is the need for tailored parameter estimation techniques. Challenges for parameter estimation are partial observability, measurement noise, and the computational complexity arising from the dimension of the parameter space. This article extends the multiple-shooting method for stochastic systems, developed for inference in intrinsically stochastic systems. The treatment of extrinsic noise and the estimation of the unobserved states are improved by taking into account the correlation between unobserved and observed species. This article demonstrates the power of the method on different scenarios of a Lotka-Volterra model, including cases in which the prey population dies out or explodes, and on a calcium oscillation system. Besides showing how the new extension improves the accuracy of the parameter estimates, this article analyzes the accuracy of the state estimates. In contrast to previous approaches, the new approach is well able to estimate states and parameters for all the scenarios. As it does not need stochastic simulations, it is of the same order of speed as conventional least-squares parameter estimation methods with respect to computational time. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
Estimation of correlation functions by stochastic approximation.
NASA Technical Reports Server (NTRS)
Habibi, A.; Wintz, P. A.
1972-01-01
Estimation of the autocorrelation function of a zero-mean stationary random process is considered. The techniques are applicable to processes with nonzero mean provided the mean is estimated first and subtracted. Two recursive techniques are proposed, both of which are based on the method of stochastic approximation and assume a functional form for the correlation function that depends on a number of parameters that are recursively estimated from successive records. One technique uses a standard point estimator of the correlation function to provide estimates of the parameters that minimize the mean-square error between the point estimates and the parametric function. The other technique provides estimates of the parameters that maximize a likelihood function relating the parameters of the function to the random process. Examples are presented.
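The first of the two techniques can be sketched directly. Assume (as an illustration, not the paper's specific choice) the parametric form R(tau) = exp(-b*|tau|), and update b by a Robbins-Monro stochastic-approximation step from point estimates of the correlation computed on successive records; the point estimates are simulated here rather than computed from real records.

```python
import numpy as np

rng = np.random.default_rng(4)

# Assumed parametric form R(tau) = exp(-b * |tau|); b is estimated
# recursively from noisy point estimates on successive records.
b_true, n_lags = 0.5, 5
taus = np.arange(1, n_lags + 1)

def point_estimates():
    """Standard point estimates of R(tau) from one record (simulated)."""
    return np.exp(-b_true * taus) + rng.normal(0.0, 0.02, n_lags)

b = 2.0                                    # deliberately poor starting value
for k in range(1, 2001):
    resid = point_estimates() - np.exp(-b * taus)
    # Half the gradient of the squared error with respect to b.
    grad = np.sum(resid * taus * np.exp(-b * taus))
    b -= (5.0 / k) * grad                  # Robbins-Monro step, decaying gain
print("estimated b:", round(float(b), 2))
```

The decaying 1/k gain is what makes the recursion a stochastic-approximation scheme rather than ordinary gradient descent: each record contributes once, and the influence of record-to-record noise shrinks over time.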
Wagner, Brian J.; Gorelick, Steven M.
1986-01-01
A simulation nonlinear multiple-regression methodology for estimating parameters that characterize the transport of contaminants is developed and demonstrated. Finite difference contaminant transport simulation is combined with a nonlinear weighted least squares multiple-regression procedure. The technique provides optimal parameter estimates and gives statistics for assessing the reliability of these estimates under certain general assumptions about the distributions of the random measurement errors. Monte Carlo analysis is used to estimate parameter reliability for a hypothetical homogeneous soil column for which concentration data contain large random measurement errors. The value of data collected spatially versus data collected temporally was investigated for estimation of velocity, dispersion coefficient, effective porosity, first-order decay rate, and zero-order production. The use of spatial data gave estimates that were 2–3 times more reliable than estimates based on temporal data for all parameters except velocity. Comparison of estimated linear and nonlinear confidence intervals based upon Monte Carlo analysis showed that the linear approximation is poor for dispersion coefficient and zero-order production coefficient when data are collected over time. In addition, examples demonstrate transport parameter estimation for two real one-dimensional systems. First, the longitudinal dispersivity and effective porosity of an unsaturated soil are estimated using laboratory column data. We compare the reliability of estimates based upon data from individual laboratory experiments versus estimates based upon pooled data from several experiments. Second, the simulation nonlinear regression procedure is extended to include an additional governing equation that describes delayed storage during contaminant transport. The model is applied to analyze the trends, variability, and interrelationship of parameters in a mountain stream in northern California.
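Nonlinear least-squares estimation of transport parameters can be sketched by fitting an analytic advection-dispersion solution to noisy spatial concentration data, echoing the paper's finding that spatial sampling is informative. This is a simplified illustration: a single erfc term of the Ogata-Banks form stands in for the finite-difference simulation, and the velocity, dispersion coefficient, sampling time, and noise level are all invented.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erfc

rng = np.random.default_rng(5)

# Simplified 1-D advection-dispersion profile (continuous injection,
# leading erfc term only); parameters: velocity v, dispersion D.
def model(x, v, D, t=10.0):
    return 0.5 * erfc((x - v * t) / (2.0 * np.sqrt(D * t)))

x = np.linspace(0.0, 30.0, 40)               # spatial sampling locations
v_true, D_true = 1.5, 0.8
data = model(x, v_true, D_true) + rng.normal(0.0, 0.01, x.size)

# Nonlinear least squares; pcov gives linearized reliability statistics
# analogous to the paper's confidence intervals.
popt, pcov = curve_fit(model, x, data, p0=[1.0, 0.5])
stderr = np.sqrt(np.diag(pcov))
print("v, D estimates:", np.round(popt, 2), "std errors:", np.round(stderr, 3))
```

The linearized standard errors from `pcov` are exactly the kind of approximation the paper checks against Monte Carlo analysis, and they can be poor for parameters (like the dispersion coefficient) to which the data respond nonlinearly.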
The Inverse Problem for Confined Aquifer Flow: Identification and Estimation With Extensions
NASA Astrophysics Data System (ADS)
Loaiciga, Hugo A.; Mariño, Miguel A.
1987-01-01
The contributions of this work are twofold. First, a methodology for estimating the elements of parameter matrices in the governing equation of flow in a confined aquifer is developed. The estimation techniques for the distributed-parameter inverse problem pertain to linear least squares and generalized least squares methods. The linear relationship among the known heads and unknown parameters of the flow equation provides the background for developing criteria for determining the identifiability status of unknown parameters. Under conditions of exact or overidentification it is possible to develop statistically consistent parameter estimators and their asymptotic distributions. The estimation techniques, namely, two-stage least squares and three-stage least squares, are applied to a specific groundwater inverse problem and compared with one another and with an ordinary least squares estimator. The three-stage estimator provides the closest approximation to the actual parameter values, but it also shows relatively large standard errors as compared to the ordinary and two-stage estimators. The estimation techniques provide the parameter matrices required to simulate the unsteady groundwater flow equation. Second, a nonlinear maximum likelihood estimation approach to the inverse problem is presented. The statistical properties of maximum likelihood estimators are derived, and a procedure to construct confidence intervals and do hypothesis testing is given. The relative merits of the linear and maximum likelihood estimators are analyzed. Other topics relevant to the identification and estimation methodologies, i.e., a continuous-time solution to the flow equation, coping with noise-corrupted head measurements, and extension of the developed theory to nonlinear cases, are also discussed. A simulation study is used to evaluate the methods developed in this study.
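The advantage of instrumental-variable estimators like two-stage least squares can be shown in a generic single-equation sketch (not the aquifer flow equations): when a regressor is correlated with the equation error, ordinary least squares is inconsistent, while projecting the regressor onto an instrument restores consistency. The data-generating process below is entirely invented.

```python
import numpy as np

rng = np.random.default_rng(6)

# y depends on a regressor x that shares the structural error u
# (endogeneity), so OLS is biased; instrument z is correlated with x
# but independent of u.
n, beta_true = 5000, 2.0
z = rng.standard_normal(n)                         # instrument
u = rng.standard_normal(n)                         # structural error
x = 0.8 * z + 0.5 * u + rng.standard_normal(n)     # endogenous regressor
y = beta_true * x + u

def ols(a, b):
    """Slope of the no-intercept least-squares regression of b on a."""
    return float(np.sum(a * b) / np.sum(a * a))

beta_ols = ols(x, y)          # biased upward by cov(x, u)
x_hat = ols(z, x) * z         # stage 1: project x onto the instrument
beta_2sls = ols(x_hat, y)     # stage 2: regress y on the projection
print("OLS:", round(beta_ols, 2), " 2SLS:", round(beta_2sls, 2))
```

In the aquifer setting, the known heads play the role of regressors correlated with the equation errors, which is why the paper turns to two- and three-stage least squares for consistency; the three-stage variant additionally exploits cross-equation error covariances.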
Estimation of teleported and gained parameters in a non-inertial frame
NASA Astrophysics Data System (ADS)
Metwally, N.
2017-04-01
Quantum Fisher information is introduced as a measure of estimating the teleported information between two users, one of whom is uniformly accelerated. We show that the final teleported state depends on the initial parameters, in addition to the parameters gained during the teleportation process. The estimation degree of these parameters depends on the value of the acceleration, the single-mode approximation used (within or beyond it), the type of encoded information (classical or quantum) in the teleported state, and the entanglement of the initial communication channel. The estimation degree of the parameters can be maximized if the partners teleport classical information.
Sequential Feedback Scheme Outperforms the Parallel Scheme for Hamiltonian Parameter Estimation.
Yuan, Haidong
2016-10-14
Measurement and estimation of parameters are essential for science and engineering, where the main quest is to find the highest achievable precision with the given resources and design schemes to attain it. Two schemes, the sequential feedback scheme and the parallel scheme, are usually studied in quantum parameter estimation. While the sequential feedback scheme represents the most general scheme, it remains unknown whether it can outperform the parallel scheme for any quantum estimation tasks. In this Letter, we show that the sequential feedback scheme has a threefold improvement over the parallel scheme for Hamiltonian parameter estimation on two-dimensional systems, and an O(d+1) improvement for Hamiltonian parameter estimation on d-dimensional systems. We also show that, contrary to the conventional belief, it is possible to simultaneously achieve the highest precision for estimating all three components of a magnetic field, which sets a benchmark on the local precision limit for the estimation of a magnetic field.
Sun, Xiaodian; Jin, Li; Xiong, Momiao
2008-01-01
It is system dynamics that determines the function of cells, tissues and organisms. Developing mathematical models and estimating their parameters are essential for studying the dynamic behavior of biological systems, which include metabolic networks, genetic regulatory networks and signal transduction pathways, under perturbation by external stimuli. In general, biological dynamic systems are partially observed. Therefore, a natural way to model dynamic biological systems is to employ nonlinear state-space equations. Although statistical methods for parameter estimation of linear models in biological dynamic systems have been developed intensively in recent years, the estimation of both states and parameters of nonlinear dynamic systems remains a challenging task. In this report, we apply the extended Kalman filter (EKF) to the estimation of both states and parameters of nonlinear state-space models. To evaluate the performance of the EKF for parameter estimation, we apply the EKF to a simulation dataset and two real datasets: JAK-STAT signal transduction pathway and Ras/Raf/MEK/ERK signaling transduction pathway datasets. The preliminary results show that the EKF can accurately estimate the parameters and predict states in nonlinear state-space equations for modeling dynamic biochemical networks. PMID:19018286
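A joint state-parameter EKF can be sketched on a toy nonlinear system. Here logistic growth with an unknown rate stands in for a signaling-pathway model (the real application uses JAK-STAT and Ras/Raf/MEK/ERK networks); the state vector is augmented with the parameter, and the Jacobian of the augmented dynamics drives the covariance propagation. All dynamics and noise levels are invented.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy nonlinear dynamics: logistic growth with unknown rate r.
dt, r_true = 0.1, 1.5
def f(x, r):
    return x + dt * r * x * (1.0 - x)

Q = np.diag([1e-6, 1e-6])        # small process noise keeps the filter adaptive
R = 0.01 ** 2                    # measurement noise variance
z = np.array([0.1, 0.5])         # augmented state [x, r]; poor initial r guess
P = np.diag([0.01, 1.0])
H = np.array([[1.0, 0.0]])       # only x is observed

x = 0.1
for _ in range(200):
    x = f(x, r_true)
    y = x + rng.normal(0.0, 0.01)
    # Predict: Jacobian of the augmented dynamics at the current estimate.
    F = np.array([[1 + dt * z[1] * (1 - 2 * z[0]), dt * z[0] * (1 - z[0])],
                  [0.0, 1.0]])
    z = np.array([f(z[0], z[1]), z[1]])
    P = F @ P @ F.T + Q
    # Update with the scalar measurement.
    S = (H @ P @ H.T).item() + R
    K = (P @ H.T / S).ravel()
    z = z + K * (y - z[0])
    P = (np.eye(2) - np.outer(K, H.ravel())) @ P

print("estimated r:", round(float(z[1]), 2), " true r:", r_true)
```

The parameter row of the Jacobian is the identity (parameters are modeled as constants), so all information about r flows through its cross-covariance with the observed state, just as in the augmented-state formulation used for biochemical networks.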
Robust guaranteed-cost adaptive quantum phase estimation
NASA Astrophysics Data System (ADS)
Roy, Shibdas; Berry, Dominic W.; Petersen, Ian R.; Huntington, Elanor H.
2017-05-01
Quantum parameter estimation plays a key role in many fields like quantum computation, communication, and metrology. Optimal estimation allows one to achieve the most precise parameter estimates, but requires accurate knowledge of the model. Any inevitable uncertainty in the model parameters may heavily degrade the quality of the estimate. It is therefore desired to make the estimation process robust to such uncertainties. Robust estimation was previously studied for a varying phase, where the goal was to estimate the phase at some time in the past, using the measurement results from both before and after that time within a fixed time interval up to current time. Here, we consider a robust guaranteed-cost filter yielding robust estimates of a varying phase in real time, where the current phase is estimated using only past measurements. Our filter minimizes the largest (worst-case) variance in the allowable range of the uncertain model parameter(s) and this determines its guaranteed cost. It outperforms in the worst case the optimal Kalman filter designed for the model with no uncertainty, which corresponds to the center of the possible range of the uncertain parameter(s). Moreover, unlike the Kalman filter, our filter in the worst case always performs better than the best achievable variance for heterodyne measurements, which we consider as the tolerable threshold for our system. Furthermore, we consider effective quantum efficiency and effective noise power, and show that our filter provides the best results by these measures in the worst case.
Wardell, Jeffrey D.; Rogers, Michelle L.; Simms, Leonard J.; Jackson, Kristina M.; Read, Jennifer P.
2014-01-01
This study investigated inconsistent responding to survey items by participants involved in longitudinal, web-based substance use research. We also examined cross-sectional and prospective predictors of inconsistent responding. Middle school (N = 1,023) and college students (N = 995) from multiple sites in the United States responded to online surveys assessing substance use and related variables in three waves of data collection. We applied a procedure for creating an index of inconsistent responding at each wave that involved identifying pairs of items with considerable redundancy and calculating discrepancies in responses to these items. Inconsistent responding was generally low in the Middle School sample and moderate in the College sample, with individuals showing only modest stability in inconsistent responding over time. Multiple regression analyses identified several baseline variables—including demographic, personality, and behavioral variables—that were uniquely associated with inconsistent responding both cross-sectionally and prospectively. Alcohol and substance involvement showed some bivariate associations with inconsistent responding, but these associations largely were accounted for by other factors. The results suggest that high levels of carelessness or inconsistency do not appear to characterize participants’ responses to longitudinal web-based surveys of substance use and support the use of inconsistency indices as a tool for identifying potentially problematic responders. PMID:24092819
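An inconsistency index of the kind described, built from discrepancies across redundant item pairs, is easy to sketch. The items, pairs, and responses below are hypothetical, not the study's instruments.

```python
import numpy as np

# Hypothetical survey response matrix (respondents x items); each pair of
# columns listed in `pairs` asks nearly the same question, so large
# within-pair discrepancies flag careless or inconsistent responding.
responses = np.array([
    [4, 4, 2, 2, 5, 5],     # consistent respondent
    [4, 1, 2, 5, 5, 2],     # inconsistent respondent
    [3, 3, 3, 4, 1, 1],     # mostly consistent
])
pairs = [(0, 1), (2, 3), (4, 5)]   # indices of redundant item pairs

def inconsistency_index(row):
    """Mean absolute discrepancy across the redundant item pairs."""
    return float(np.mean([abs(int(row[i]) - int(row[j])) for i, j in pairs]))

scores = [inconsistency_index(r) for r in responses]
print("inconsistency scores:", scores)
```

In practice a cutoff would be applied to the scores to flag potentially problematic responders, as the study recommends.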
Renbaum-Wolff, Lindsay; Song, Mijung; Marcolli, Claudia; ...
2016-07-01
Particles consisting of secondary organic material (SOM) are abundant in the atmosphere. In order to predict the role of these particles in climate, visibility and atmospheric chemistry, information on particle phase state (i.e., single liquid, two liquids and solid) is needed. Our paper focuses on the phase state of SOM particles free of inorganic salts produced by the ozonolysis of α-pinene. Phase transitions were investigated in the laboratory using optical microscopy and theoretically using a thermodynamic model at 290 K and for relative humidities ranging from < 0.5 to 100%. In the laboratory studies, a single phase was observed from 0 to 95% relative humidity (RH) while two liquid phases were observed above 95% RH. For increasing RH, the mechanism of liquid–liquid phase separation (LLPS) was spinodal decomposition. The RH range over which two liquid phases were observed did not depend on the direction of RH change. In the modeling studies, the SOM took up very little water and was a single organic-rich phase at low RH values. At high RH, the SOM underwent LLPS to form an organic-rich phase and a water-rich phase, consistent with the laboratory studies. The presence of LLPS at high RH values can have consequences for the cloud condensation nuclei (CCN) activity of SOM particles. In the simulated Köhler curves for SOM particles, two local maxima were observed. Depending on the composition of the SOM, the first or second maximum can determine the critical supersaturation for activation. Recently researchers have observed inconsistencies between measured CCN properties of SOM particles and hygroscopic growth measured below water saturation (i.e., hygroscopic parameters measured below water saturation were inconsistent with hygroscopic parameters measured above water saturation).
Furthermore, the work presented here illustrates that such inconsistencies are expected for systems with LLPS when the water uptake at subsaturated conditions represents the hygroscopicity of an organic-rich phase while the barrier for CCN activation can be determined by the second maximum in the Köhler curve when the particles are water rich.
ERIC Educational Resources Information Center
Xu, Xueli; Jia, Yue
2011-01-01
Estimation of item response model parameters and ability distribution parameters has been, and will remain, an important topic in the educational testing field. Much research has been dedicated to addressing this task. Some studies have focused on item parameter estimation when the latent ability was assumed to follow a normal distribution,…
ERIC Educational Resources Information Center
Gugel, John F.
A new method for estimating the parameters of the normal ogive three-parameter model for multiple-choice test items--the normalized direct (NDIR) procedure--is examined. The procedure is compared to a more commonly used estimation procedure, Lord's LOGIST, using computer simulations. The NDIR procedure uses the normalized (mid-percentile)…
Luo, Rutao; Piovoso, Michael J.; Martinez-Picado, Javier; Zurakowski, Ryan
2012-01-01
Mathematical models based on ordinary differential equations (ODE) have had significant impact on understanding HIV disease dynamics and optimizing patient treatment. A model that characterizes the essential disease dynamics can be used for prediction only if the model parameters are identifiable from clinical data. Most previous parameter identification studies for HIV have used sparsely sampled data from the decay phase following the introduction of therapy. In this paper, model parameters are identified from frequently sampled viral-load data taken from ten patients enrolled in the previously published AutoVac HAART interruption study, providing between 69 and 114 viral load measurements from 3–5 phases of viral decay and rebound for each patient. This dataset is considerably larger than those used in previously published parameter estimation studies. Furthermore, the measurements come from two separate experimental conditions, which allows for the direct estimation of drug efficacy and reservoir contribution rates, two parameters that cannot be identified from decay-phase data alone. A Markov-Chain Monte-Carlo method is used to estimate the model parameter values, with initial estimates obtained using nonlinear least-squares methods. The posterior distributions of the parameter estimates are reported and compared for all patients. PMID:22815727
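The estimation pipeline described above (nonlinear least-squares initial estimates refined by Markov-Chain Monte-Carlo) can be sketched with a random-walk Metropolis sampler on a one-parameter exponential decay. The synthetic "viral load" data, the true decay rate 0.5, and the starting value 0.4 (standing in for a least-squares initial estimate) are all illustrative assumptions, not values from the AutoVac study.

```python
import math, random

def metropolis(loglik, theta0, step, n_iter, seed=1):
    """Random-walk Metropolis sampler; returns the chain of states."""
    rng = random.Random(seed)
    chain, theta, ll = [theta0], theta0, loglik(theta0)
    for _ in range(n_iter):
        prop = theta + rng.gauss(0.0, step)
        ll_prop = loglik(prop)
        if math.log(rng.random()) < ll_prop - ll:   # accept/reject
            theta, ll = prop, ll_prop
        chain.append(theta)
    return chain

# Synthetic decay data: v(t) = v0 * exp(-d * t), true d = 0.5 (assumed)
rng = random.Random(0)
times = [0.5 * i for i in range(20)]
data = [10.0 * math.exp(-0.5 * t) + rng.gauss(0, 0.05) for t in times]

def loglik(d):
    # Gaussian likelihood with known noise standard deviation 0.05
    sse = sum((y - 10.0 * math.exp(-d * t)) ** 2 for t, y in zip(times, data))
    return -sse / (2 * 0.05 ** 2)

chain = metropolis(loglik, theta0=0.4, step=0.02, n_iter=2000)
posterior_mean = sum(chain[500:]) / len(chain[500:])   # discard burn-in
```

The posterior spread of `chain` after burn-in plays the role of the reported posterior distributions of the parameter estimates.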
Exploratory Study for Continuous-time Parameter Estimation of Ankle Dynamics
NASA Technical Reports Server (NTRS)
Kukreja, Sunil L.; Boyle, Richard D.
2014-01-01
Recently, a parallel pathway model to describe ankle dynamics was proposed. This model provides a relationship between ankle angle and net ankle torque as the sum of a linear and nonlinear contribution. A technique to identify parameters of this model in discrete-time has been developed. However, these parameters are a nonlinear combination of the continuous-time physiology, making insight into the underlying physiology impossible. The stable and accurate estimation of continuous-time parameters is critical for accurate disease modeling, clinical diagnosis, robotic control strategies, development of optimal exercise protocols for long-term space exploration, sports medicine, etc. This paper explores the development of a system identification technique to estimate the continuous-time parameters of ankle dynamics. The effectiveness of this approach is assessed via simulation of a continuous-time model of ankle dynamics with typical parameters found in clinical studies. The results show that although this technique improves estimates, it does not provide robust estimates of continuous-time parameters of ankle dynamics. We therefore conclude that alternative modeling strategies and more advanced estimation techniques should be considered in future work.
Gamble, John F; Nicolich, Mark J; Boffetta, Paolo
2012-08-01
A recent review concluded that the evidence from epidemiology studies was indeterminate and that additional studies were required to support the diesel exhaust-lung cancer hypothesis. This updated review includes seven recent studies. Two population-based studies concluded that significant exposure-response (E-R) trends between cumulative diesel exhaust and lung cancer were unlikely to be entirely explained by bias or confounding. Those studies have quality data on life-style risk factors, but do not allow definitive conclusions because of inconsistent E-R trends, qualitative exposure estimates and exposure misclassification (insufficient latency based on job title), and selection bias from low participation rates. Non-definitive results are consistent with the larger body of population studies. An NCI/NIOSH cohort mortality and nested case-control study of non-metal miners has some surrogate-based quantitative diesel exposure estimates (including highest exposure measured as respirable elemental carbon (REC) in the workplace) and smoking histories. The authors concluded that diesel exhaust may cause lung cancer. Nonetheless, the results are non-definitive because the conclusions are based on E-R patterns where high exposures were deleted to achieve significant results, where a posteriori adjustments were made to augment results, and where inappropriate adjustments were made for the "negative confounding" effects of smoking even though current smoking was not associated with diesel exposure and therefore could not be a confounder. Three cohort studies of bus drivers and truck drivers are in effect air pollution studies without estimates of diesel exhaust exposure and so are not sufficient for assessing the lung cancer-diesel exhaust hypothesis.
Results from all occupational cohort studies with quantitative estimates of exposure have limitations, including weak and inconsistent E-R associations that could be explained by bias, confounding or chance, exposure misclassification, and often inadequate latency. In sum, the weight of evidence is considered inadequate to confirm the diesel-lung cancer hypothesis. PMID:22656672
Capturing spatial and temporal patterns of widespread, extreme flooding across Europe
NASA Astrophysics Data System (ADS)
Busby, Kathryn; Raven, Emma; Liu, Ye
2013-04-01
Statistical characterisation of physical hazards is an integral part of probabilistic catastrophe models used by the reinsurance industry to estimate losses from large scale events. Extreme flood events are not restricted by country boundaries, which poses an issue for reinsurance companies as their exposures often extend beyond them. We discuss challenges and solutions that allow us to appropriately capture the spatial and temporal dependence of extreme hydrological events on a continental-scale, which in turn enables us to generate an industry-standard stochastic event set for estimating financial losses for widespread flooding. By presenting our event set methodology, we focus on explaining how extreme value theory (EVT) and dependence modelling are used to account for short, inconsistent hydrological data from different countries, and how to make appropriate statistical decisions that best characterise the nature of flooding across Europe. The consistency of input data is of vital importance when identifying historical flood patterns. Collating data from numerous sources inherently causes inconsistencies and we demonstrate our robust approach to assessing the data and refining it to compile a single consistent dataset. This dataset is then extrapolated using a parameterised EVT distribution to estimate extremes. Our method then captures the dependence of flood events across countries using an advanced multivariate extreme value model. Throughout, important statistical decisions are explored including: (1) distribution choice; (2) the threshold to apply for extracting extreme data points; (3) a regional analysis; (4) the definition of a flood event, which is often linked with the reinsurance industry's hours clause; and (5) handling of missing values. Finally, having modelled the historical patterns of flooding across Europe, we sample from this model to generate our stochastic event set comprising thousands of events over thousands of years.
We then briefly illustrate how this is applied within a probabilistic model to estimate catastrophic loss curves used by the reinsurance industry.
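The threshold-exceedance step of an EVT analysis like the one described above can be sketched with a peaks-over-threshold extraction and a method-of-moments fit of the generalized Pareto distribution (GPD). The synthetic exponentially distributed "flow" series and the moment-based fit are illustrative assumptions, not the authors' parameterisation.

```python
import random

def pot_excesses(series, threshold):
    """Peaks-over-threshold: keep excesses above a high threshold."""
    return [x - threshold for x in series if x > threshold]

def gpd_moment_fit(excesses):
    """Method-of-moments estimates of the GPD shape (xi) and scale (sigma),
    from mean = sigma/(1-xi) and var = sigma^2/((1-xi)^2 (1-2 xi))."""
    n = len(excesses)
    m = sum(excesses) / n
    v = sum((x - m) ** 2 for x in excesses) / n
    xi = 0.5 * (1.0 - m * m / v)
    sigma = 0.5 * m * (m * m / v + 1.0)
    return xi, sigma

rng = random.Random(1)
flows = [rng.expovariate(1.0) for _ in range(20000)]   # synthetic daily flows
xi, sigma = gpd_moment_fit(pot_excesses(flows, threshold=1.0))
```

For exponential data the true shape is 0 and the true scale is 1, so the fitted values should land near those; threshold choice (point 2 in the list above) trades bias against variance in exactly this step.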
Physical activity patterns among Latinos in the United States: putting the pieces together.
Ham, Sandra A; Yore, Michelle M; Kruger, Judy; Heath, Gregory W; Moeti, Refilwe
2007-10-01
Estimates of participation in physical activity among Latinos are inconsistent across studies. To obtain better estimates and examine possible reasons for inconsistencies, we assessed 1) patterns of participation in various categories of physical activity among Latino adults, 2) changes in their activity patterns with acculturation, and 3) variations in their activity patterns by region of origin. Using data from four national surveillance systems (the National Health and Nutrition Examination Survey, 1999-2002; the Behavioral Risk Factor Surveillance System, 2003; the National Household Travel Survey, 2001; and the National Health Interview Survey Cancer Supplement, 2000), we estimated the percentage of Latinos who participated at least once per week in leisure-time, household, occupational, or transportation-related physical activity, as well as in an active pattern of usual daily activity. We reported prevalences by acculturation measures and region of origin. The percentage of Latinos who participated in the various types of physical activity ranged from 28.7% for having an active level of usual daily activity (usually walking most of the day and usually carrying or lifting objects) to 42.8% for participating in leisure-time physical activity at least once per week. The percentage who participated in leisure-time and household activities increased with acculturation, whereas the percentage who participated in occupational and transportation-related activities decreased with acculturation. Participation in an active level of usual daily activity did not change significantly. The prevalence of participation in transportation-related physical activity and of an active level of usual daily activity among Latino immigrants varied by region of origin. Physical activity patterns among Latinos vary with acculturation and region of origin. 
To assess physical activity levels in Latino communities, researchers should measure all types of physical activity and the effects of acculturation on each type of activity.
NASA Technical Reports Server (NTRS)
Iliff, Kenneth W.
1987-01-01
The aircraft parameter estimation problem is used to illustrate the utility of parameter estimation, which applies to many engineering and scientific fields. Maximum likelihood estimation has been used to extract stability and control derivatives from flight data for many years. This paper presents some of the basic concepts of aircraft parameter estimation and briefly surveys the literature in the field. The maximum likelihood estimator is discussed, and the basic concepts of minimization and estimation are examined for a simple simulated aircraft example. The cost functions that are to be minimized during estimation are defined and discussed. Graphic representations of the cost functions are given to illustrate the minimization process. Finally, the basic concepts are generalized, and estimation from flight data is discussed. Some of the major conclusions for the simulated example are also developed for the analysis of flight data from the F-14, highly maneuverable aircraft technology (HiMAT), and space shuttle vehicles.
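The cost-function minimization described above can be sketched for a one-parameter output-error problem: simulate a model response, form the sum-of-squared-residuals cost (the maximum-likelihood cost under Gaussian noise), and minimize it. The first-order response model and true parameter value 2.0 are toy stand-ins for a single stability derivative, not the simulated aircraft example from the paper.

```python
import random

def simulate(a, u, dt=0.1, x0=0.0):
    """Euler simulation of x' = -a*x + u, a toy surrogate for a
    one-derivative aircraft response model."""
    xs, x = [], x0
    for uk in u:
        x += dt * (-a * x + uk)
        xs.append(x)
    return xs

rng = random.Random(2)
u = [1.0] * 50                                   # step input
measured = [x + rng.gauss(0, 0.01) for x in simulate(2.0, u)]  # true a = 2.0

def cost(a):
    # Output-error cost: sum of squared residuals between measured
    # and simulated responses (ML estimate under Gaussian noise)
    return sum((m - s) ** 2 for m, s in zip(measured, simulate(a, u)))

# Coarse minimization by scanning the cost function over a grid
grid = [0.5 + 0.01 * i for i in range(300)]
a_hat = min(grid, key=cost)
```

Plotting `cost` over `grid` reproduces, in miniature, the graphic representations of the cost function discussed in the paper; a practical estimator would replace the grid scan with a Newton-type minimizer.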
ERIC Educational Resources Information Center
Wollack, James A.; Bolt, Daniel M.; Cohen, Allan S.; Lee, Young-Sun
2002-01-01
Compared the quality of item parameter estimates for marginal maximum likelihood (MML) and Markov Chain Monte Carlo (MCMC) with the nominal response model using simulation. The quality of item parameter recovery was nearly identical for MML and MCMC, and both methods tended to produce good estimates. (SLD)
NASA Astrophysics Data System (ADS)
Montzka, Carsten; Hendricks Franssen, Harrie-Jan; Moradkhani, Hamid; Pütz, Thomas; Han, Xujun; Vereecken, Harry
2013-04-01
An adequate description of soil hydraulic properties is essential for a good performance of hydrological forecasts. So far, several studies showed that data assimilation could reduce the parameter uncertainty by considering soil moisture observations. However, these observations and also the model forcings were recorded with a specific measurement error. It seems a logical step to base state updating and parameter estimation on observations made at multiple time steps, in order to reduce the influence of outliers at single time steps given measurement errors and unknown model forcings. Such outliers could result in erroneous state estimation as well as inadequate parameters. This has been one of the reasons to use a smoothing technique as implemented for Bayesian data assimilation methods such as the Ensemble Kalman Filter (i.e. Ensemble Kalman Smoother). Recently, an ensemble-based smoother has been developed for state updating with a SIR particle filter. However, this method has not been used for dual state-parameter estimation. In this contribution we present a Particle Smoother with sequential smoothing of particle weights for state and parameter resampling within a time window, as opposed to the single time step data assimilation used in filtering techniques. This can be seen as an intermediate variant between a parameter estimation technique using global optimization with estimation of single parameter sets valid for the whole period, and sequential Monte Carlo techniques with estimation of parameter sets evolving from one time step to another. The aims are i) to improve the forecast of evaporation and groundwater recharge by estimating hydraulic parameters, and ii) to reduce the impact of single erroneous model inputs/observations by a smoothing method. In order to validate the performance of the proposed method in a real world application, the experiment is conducted in a lysimeter environment.
Mahama, Ayisha Matuamo; Anaman, Kwabena Asomanin; Osei-Akoto, Isaac
2014-06-01
We analysed householders' access to improved water for drinking and other domestic uses in five selected low-income urban areas of Accra, Ghana using a survey of 1,500 households. Our definitions of improved water were different from those suggested by the World Health Organization (WHO). The results revealed that only 4.4% of the respondents had access to improved drinking water compared to 40.7% using the WHO definition. However, 88.7% of respondents had access to improved water for domestic uses compared to 98.3% using the WHO definition. Using logistic regression analysis, we established that the significant determinant of householders' access to improved drinking water was income. However, for access to improved water for other domestic uses, the significant factors were education, income and location of the household. Compared to migrants, indigenous people and people from mixed areas were less likely to have access to improved water for other domestic purposes. For the analysis using the WHO definitions, most of the independent variables were not statistically significant in determining householders' access, and those variables that were significant generated parameter estimates inconsistent with evidence from the literature and anecdotal evidence from officials of public health and water supply companies in Ghana.
High-temperature thermal destruction of poultry derived wastes for energy recovery in Australia.
Florin, N H; Maddocks, A R; Wood, S; Harris, A T
2009-04-01
The high-temperature thermal destruction of poultry derived wastes (e.g., manure and bedding) for energy recovery is viable in Australia when considering resource availability and equivalent commercial-scale experience in the UK. In this work, we identified and examined the opportunities and risks associated with common thermal destruction techniques, including: volume of waste, costs, technological risks and environmental impacts. Typical poultry waste streams were characterised based on compositional analysis, thermodynamic equilibrium modelling and non-isothermal thermogravimetric analysis coupled with mass spectrometry (TG-MS). Poultry waste is highly variable but otherwise comparable with other biomass fuels. The major technical and operating challenges are associated with this variability in terms of: moisture content, presence of inorganic species and type of litter. This variability depends on a range of parameters, including the type and age of bird, and geographical and seasonal inconsistencies. There are environmental and health considerations associated with combustion and gasification due to the formation of: NO(X), SO(X), H(2)S and HCl gas. Mitigation of these emissions is achievable through correct plant design and operation, albeit with a significant economic penalty. Based on our analysis and literature data, we present cost estimates for generic poultry-waste-fired power plants with throughputs of 2 and 8 tonnes/h.
NASA Astrophysics Data System (ADS)
Patil, Prataprao; Vyasarayani, C. P.; Ramji, M.
2017-06-01
In this work, the digital photoelasticity technique is used to estimate the crack tip fracture parameters for different crack configurations. Conventionally, only isochromatic data surrounding the crack tip is used for SIF estimation, but with the advent of digital photoelasticity, pixel-wise availability of both isoclinic and isochromatic data could be exploited for SIF estimation in a novel way. A linear least square approach is proposed to estimate the mixed-mode crack tip fracture parameters by solving the multi-parameter stress field equation. The stress intensity factor (SIF) is extracted from those estimated fracture parameters. The isochromatic and isoclinic data around the crack tip are estimated using the ten-step phase shifting technique. To get the unwrapped data, the adaptive quality-guided phase unwrapping algorithm (AQGPU) has been used. The mixed mode fracture parameters, especially SIF, are estimated for specimen configurations like single edge notch (SEN), center crack and straight crack ahead of inclusion using the proposed algorithm. The experimental SIF values estimated using the proposed method are compared with analytical/finite element analysis (FEA) results, and are found to be in good agreement.
NASA Technical Reports Server (NTRS)
Orme, John S.; Gilyard, Glenn B.
1992-01-01
Integrated engine-airframe optimal control technology may significantly improve aircraft performance. This technology requires a reliable and accurate parameter estimator to predict unmeasured variables. To develop this technology base, NASA Dryden Flight Research Facility (Edwards, CA), McDonnell Aircraft Company (St. Louis, MO), and Pratt & Whitney (West Palm Beach, FL) have developed and flight-tested an adaptive performance seeking control system which optimizes the quasi-steady-state performance of the F-15 propulsion system. This paper presents flight and ground test evaluations of the propulsion system parameter estimation process used by the performance seeking control system. The estimator consists of a compact propulsion system model and an extended Kalman filter. The extended Kalman filter estimates five engine component deviation parameters from measured inputs. The compact model uses measurements and Kalman-filter estimates as inputs to predict unmeasured propulsion parameters such as net propulsive force and fan stall margin. The ability to track trends and estimate absolute values of propulsion system parameters was demonstrated. For example, thrust stand results show a good correlation, especially in trends, between the performance seeking control estimated and measured thrust.
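The extended Kalman filter's role above (estimating deviation parameters through a nonlinear model) can be sketched with a scalar EKF that treats one unknown parameter as a random-walk state observed through a nonlinear measurement. The exponential measurement function, the true value 1.5, and all noise levels are illustrative assumptions, not the F-15 propulsion model.

```python
import math, random

def ekf_constant_parameter(h, dh, ys, theta0, P0, q, r):
    """Scalar EKF: the unknown parameter is a random-walk state observed
    through the nonlinear measurement y = h(theta) + noise."""
    theta, P = theta0, P0
    for y in ys:
        P += q                           # predict (slow random-walk drift)
        H = dh(theta)                    # linearize measurement at estimate
        K = P * H / (H * P * H + r)      # Kalman gain
        theta += K * (y - h(theta))      # measurement update
        P *= (1.0 - K * H)               # covariance update
    return theta

rng = random.Random(3)
true_theta = 1.5
ys = [math.exp(0.5 * true_theta) + rng.gauss(0, 0.05) for _ in range(200)]
theta_hat = ekf_constant_parameter(
    h=lambda t: math.exp(0.5 * t), dh=lambda t: 0.5 * math.exp(0.5 * t),
    ys=ys, theta0=0.5, P0=1.0, q=1e-6, r=0.05 ** 2)
```

In the full system the estimated parameters would then feed a compact model to predict unmeasured quantities, analogous to net propulsive force in the paper.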
Parameter estimation of kinetic models from metabolic profiles: two-phase dynamic decoupling method.
Jia, Gengjie; Stephanopoulos, Gregory N; Gunawan, Rudiyanto
2011-07-15
Time-series measurements of metabolite concentration have become increasingly more common, providing data for building kinetic models of metabolic networks using ordinary differential equations (ODEs). In practice, however, such time-course data are usually incomplete and noisy, and the estimation of kinetic parameters from these data is challenging. Practical limitations due to data and computational aspects, such as solving stiff ODEs and finding the global optimal solution to the estimation problem, motivate the development of a new estimation procedure that can circumvent some of these constraints. In this work, an incremental and iterative parameter estimation method is proposed that combines and iterates between two estimation phases. One phase involves a decoupling method, in which the subset of model parameters associated with measured metabolites is estimated using the minimization of slope errors. Another phase follows, in which the ODE model is solved one equation at a time and the remaining model parameters are obtained by minimizing concentration errors. The performance of this two-phase method was tested on a generic branched metabolic pathway and the glycolytic pathway of Lactococcus lactis. The results showed that the method is efficient in getting accurate parameter estimates, even when some information is missing.
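The two phases above can be sketched on a single noiseless decay equation x' = -k·x: first estimate k by least squares on slope errors (no ODE solving needed), then refine by minimizing concentration errors of the integrated model. The one-metabolite system and true rate k = 0.8 are illustrative, not the pathways studied in the paper.

```python
import math

# Synthetic metabolite time course: x' = -k*x, true k = 0.8 (assumed)
dt = 0.1
ts = [dt * i for i in range(50)]
xs = [2.0 * math.exp(-0.8 * t) for t in ts]

# Phase 1 (decoupling): least squares on slope errors, matching
# finite-difference slopes to -k * x without integrating the ODE.
slopes = [(xs[i + 1] - xs[i]) / dt for i in range(len(xs) - 1)]
mids = [(xs[i + 1] + xs[i]) / 2 for i in range(len(xs) - 1)]
k_slope = -sum(s * m for s, m in zip(slopes, mids)) / sum(m * m for m in mids)

# Phase 2: refine by minimizing concentration errors of the solved model
def conc_error(k):
    return sum((x - 2.0 * math.exp(-k * t)) ** 2 for t, x in zip(ts, xs))

grid = [k_slope + 0.001 * i for i in range(-100, 101)]
k_hat = min(grid, key=conc_error)
```

Phase 1 already lands close to the true rate; phase 2 corrects the small bias that finite-difference slopes introduce, which is the division of labor the method iterates on.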
Quadratic semiparametric Von Mises calculus
Robins, James; Li, Lingling; Tchetgen, Eric
2009-01-01
We discuss a new method of estimation of parameters in semiparametric and nonparametric models. The method is based on U-statistics constructed from quadratic influence functions. The latter extend ordinary linear influence functions of the parameter of interest as defined in semiparametric theory, and represent second order derivatives of this parameter. For parameters for which the matching cannot be perfect the method leads to a bias-variance trade-off, and results in estimators that converge at a slower than n^{-1/2} rate. In a number of examples the resulting rate can be shown to be optimal. We are particularly interested in estimating parameters in models with a nuisance parameter of high dimension or low regularity, where the parameter of interest cannot be estimated at an n^{-1/2} rate. PMID:23087487
47 CFR 73.3518 - Inconsistent or conflicting applications.
Code of Federal Regulations, 2010 CFR
2010-10-01
Title 47 (Telecommunication), Radio Broadcast Services, Rules Applicable to All Broadcast Stations, § 73.3518 Inconsistent or conflicting applications: While an application is pending and undecided, no subsequent inconsistent or...
Estimation of the ARNO model baseflow parameters using daily streamflow data
NASA Astrophysics Data System (ADS)
Abdulla, F. A.; Lettenmaier, D. P.; Liang, Xu
1999-09-01
An approach is described for estimation of baseflow parameters of the ARNO model, using historical baseflow recession sequences extracted from daily streamflow records. This approach allows four of the model parameters to be estimated without rainfall data, and effectively facilitates partitioning of the parameter estimation procedure so that parsimonious search procedures can be used to estimate the remaining storm response parameters separately. Three methods of optimization are evaluated for estimation of four baseflow parameters. These methods are the downhill Simplex (S), Simulated Annealing combined with the Simplex method (SA) and Shuffled Complex Evolution (SCE). These estimation procedures are explored in conjunction with four objective functions: (1) ordinary least squares; (2) ordinary least squares with Box-Cox transformation; (3) ordinary least squares on prewhitened residuals; (4) ordinary least squares applied to prewhitened, Box-Cox-transformed residuals. The effects of changing the seed random generator for both SA and SCE methods are also explored, as are the effects of the bounds of the parameters. Although all schemes converge to the same values of the objective function, the SCE method was found to be less sensitive to these issues than both the SA and the Simplex schemes. Parameter uncertainty and interactions are investigated through estimation of the variance-covariance matrix and confidence intervals. As expected, the parameters were correlated and the covariance matrix was not diagonal. Furthermore, the linearized confidence interval theory failed for about one-fourth of the catchments while the maximum likelihood theory did not fail for any of the catchments.
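Objective function (2) above, ordinary least squares on Box-Cox transformed flows, can be sketched directly; the choice of transformation parameter `lam = 0.3` and the tiny flow series are illustrative assumptions, not values from the study.

```python
import math

def boxcox(y, lam):
    """Box-Cox transformation, commonly used to stabilize the variance
    of streamflow residuals before least-squares fitting."""
    return (y ** lam - 1.0) / lam if lam != 0 else math.log(y)

def objective(sim, obs, lam=0.3):
    # Ordinary least squares on Box-Cox transformed flows
    return sum((boxcox(s, lam) - boxcox(o, lam)) ** 2
               for s, o in zip(sim, obs))
```

Any of the three optimizers compared in the paper (Simplex, SA, SCE) would then minimize `objective` over the baseflow parameters that generate `sim`.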
Information fusion methods based on physical laws.
Rao, Nageswara S V; Reister, David B; Barhen, Jacob
2005-01-01
We consider systems whose parameters satisfy certain easily computable physical laws. Each parameter is directly measured by a number of sensors, or estimated using measurements, or both. The measurement process may introduce both systematic and random errors which may then propagate into the estimates. Furthermore, the actual parameter values are not known since every parameter is measured or estimated, which makes the existing sample-based fusion methods inapplicable. We propose a fusion method for combining the measurements and estimators based on the least violation of physical laws that relate the parameters. Under fairly general smoothness and nonsmoothness conditions on the physical laws, we show the asymptotic convergence of our method and also derive distribution-free performance bounds based on finite samples. For suitable choices of the fuser classes, we show that for each parameter the fused estimate is probabilistically at least as good as its best measurement as well as best estimate. We illustrate the effectiveness of this method for a practical problem of fusing well-log data in methane hydrate exploration.
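Fusion by least violation of a physical law can be sketched for the simplest case: three measured parameters constrained by a + b = c, fused by an equal-weight least-squares projection onto the constraint surface. The law and the measurement values are illustrative; the paper's method covers general smooth and nonsmooth laws with weighted fuser classes.

```python
def fuse_with_law(a_meas, b_meas, c_meas):
    """Least-squares adjustment enforcing the physical law a + b = c:
    minimize squared deviation from the measurements subject to the law."""
    # Violation of the law by the raw measurements
    v = a_meas + b_meas - c_meas
    # Equal-weight projection onto the surface a + b - c = 0
    # (Lagrange multiplier solution: correction = -v/3 along (1, 1, -1))
    return a_meas - v / 3, b_meas - v / 3, c_meas + v / 3

a, b, c = fuse_with_law(1.1, 2.05, 3.0)
```

The fused values satisfy the law exactly while moving the least possible distance from the measurements, which is the sense in which each fused estimate can be at least as good as its best raw measurement.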
NASA Astrophysics Data System (ADS)
Nair, S. P.; Righetti, R.
2015-05-01
Recent elastography techniques focus on imaging properties of materials that can be modeled as viscoelastic or poroelastic. These techniques often require fitting temporal strain data, acquired from either a creep or stress-relaxation experiment, to a mathematical model using least square error (LSE) parameter estimation. It is known that the strain versus time relationship for tissues undergoing creep compression is non-linear. In non-linear cases, devising a measure of estimate reliability can be challenging. In this article, we develop and test a method, called Resimulation of Noise (RoN), that quantifies the reliability of non-linear LSE parameter estimates. RoN provides a measure of reliability by estimating the spread of parameter estimates from a single experiment realization. We have tested RoN specifically for the case of axial strain time constant parameter estimation in poroelastic media. Our tests show that the RoN-estimated precision has a linear relationship to the actual precision of the LSE estimator. We have also compared results from the RoN-derived measure of reliability against a commonly used reliability measure: the correlation coefficient (CorrCoeff). Our results show that CorrCoeff is a poor measure of estimate reliability for non-linear LSE parameter estimation. While RoN is specifically tested only for axial strain time constant imaging, a general algorithm is provided for use in all LSE parameter estimation.
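The RoN idea can be sketched as a resimulation loop; the exponential stress-relaxation model and Gaussian noise here are illustrative stand-ins for the poroelastic strain-time curves in the paper:

```python
import numpy as np
from scipy.optimize import curve_fit

def model(t, a, tau):
    """Hypothetical stress-relaxation curve with time constant tau."""
    return a * np.exp(-t / tau)

def ron_spread(t, y, n_resim=100, seed=0):
    """Resimulation of Noise (sketch): refit the model to copies of the
    fitted curve corrupted by noise at the residual level; the spread of
    the refitted time constants measures estimate reliability."""
    rng = np.random.default_rng(seed)
    p_hat, _ = curve_fit(model, t, y, p0=(y.max(), t.mean()))
    sigma = np.std(y - model(t, *p_hat))   # noise level from the residuals
    taus = []
    for _ in range(n_resim):
        y_sim = model(t, *p_hat) + rng.normal(0.0, sigma, t.size)
        p_sim, _ = curve_fit(model, t, y_sim, p0=p_hat)
        taus.append(p_sim[1])
    return p_hat, float(np.std(taus))
```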
Ramadan, Ahmed; Boss, Connor; Choi, Jongeun; Peter Reeves, N; Cholewicki, Jacek; Popovich, John M; Radcliffe, Clark J
2018-07-01
Estimating many parameters of biomechanical systems with limited data may achieve good fit but may also increase 95% confidence intervals in parameter estimates. This results in poor identifiability in the estimation problem. Therefore, we propose a novel method to select sensitive biomechanical model parameters that should be estimated, while fixing the remaining parameters to values obtained from preliminary estimation. Our method relies on identifying the parameters to which the measurement output is most sensitive. The proposed method is based on the Fisher information matrix (FIM). It was compared against the nonlinear least absolute shrinkage and selection operator (LASSO) method to guide modelers on the pros and cons of our FIM method. We present an application identifying a biomechanical parametric model of a head position-tracking task for ten human subjects. Using measured data, our method (1) reduced model complexity by only requiring five out of twelve parameters to be estimated, (2) significantly reduced parameter 95% confidence intervals by up to 89% of the original confidence interval, (3) maintained goodness of fit measured by variance accounted for (VAF) at 82%, (4) reduced computation time, where our FIM method was 164 times faster than the LASSO method, and (5) selected similar sensitive parameters to the LASSO method, where three out of five selected sensitive parameters were shared by FIM and LASSO methods.
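A minimal version of the FIM-based ranking might look like this; ranking parameters by the FIM diagonal is a simplification of the paper's selection procedure, and the Jacobian is assumed to come from numerical sensitivities evaluated at the preliminary estimate:

```python
import numpy as np

def fim_select(jacobian, sigma2=1.0, k=5):
    """Rank parameters by output sensitivity via the Fisher information
    matrix. jacobian[i, j] = d(output_i)/d(theta_j); under i.i.d. Gaussian
    noise, FIM = J^T J / sigma^2."""
    J = np.asarray(jacobian, float)
    fim = J.T @ J / sigma2
    info = np.diag(fim)              # information carried by each parameter
    order = np.argsort(info)[::-1]   # most informative first
    return order[:k], info
```

The LASSO alternative discussed in the paper would instead shrink insensitive parameters by penalizing the l1 norm of the parameter vector in the fitting objective.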
On-line implementation of nonlinear parameter estimation for the Space Shuttle main engine
NASA Technical Reports Server (NTRS)
Buckland, Julia H.; Musgrave, Jeffrey L.; Walker, Bruce K.
1992-01-01
We investigate the performance of a nonlinear estimation scheme applied to the estimation of several parameters in a performance model of the Space Shuttle Main Engine. The nonlinear estimator is based upon the extended Kalman filter which has been augmented to provide estimates of several key performance variables. The estimated parameters are directly related to the efficiency of both the low pressure and high pressure fuel turbopumps. Decreases in the parameter estimates may be interpreted as degradations in turbine and/or pump efficiencies which can be useful measures for an online health monitoring algorithm. This paper extends previous work which has focused on off-line parameter estimation by investigating the filter's on-line potential from a computational standpoint. In addition, we examine the robustness of the algorithm to unmodeled dynamics. The filter uses a reduced-order model of the engine that includes only fuel-side dynamics. The on-line results produced during this study are comparable to off-line results generated previously. The results show that the parameter estimates are sensitive to dynamics not included in the filter model. Off-line results using an extended Kalman filter with a full order engine model to address the robustness problems of the reduced-order model are also presented.
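As a toy analogue of the augmented-filter idea, the sketch below appends an unknown decay parameter to the state of a scalar system and estimates both with an EKF; the SSME performance model itself is of course far richer:

```python
import numpy as np

def ekf_augmented(y, q=1e-4, r=0.01):
    """EKF with the unknown parameter a appended to the state, for the toy
    system x_{k+1} = a * x_k observed directly; a plays the role of an
    efficiency parameter to be tracked alongside the state."""
    s = np.array([y[0], 0.5])                 # state [x, a]; rough prior a = 0.5
    P = np.eye(2)
    for z in y[1:]:
        x, a = s
        F = np.array([[a, x], [0.0, 1.0]])    # Jacobian of [a * x, a]
        s = np.array([a * x, a])              # predict
        P = F @ P @ F.T + q * np.eye(2)
        H = np.array([[1.0, 0.0]])            # only x is measured
        K = P @ H.T / (H @ P @ H.T + r)
        s = s + (K * (z - s[0])).ravel()      # measurement update
        P = (np.eye(2) - K @ H) @ P
    return s
```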
Does Status Inconsistency Matter for Marital Quality?
ERIC Educational Resources Information Center
Gong, Min
2007-01-01
This study tests status inconsistency theory by examining the associations between wives' and husbands' relative statuses--that is, earnings, work-time, occupational, and educational inconsistencies--and marital quality and global happiness. The author asks three questions: (a) Is status inconsistency associated with marital quality and overall…
Effects of Inconsistent Behaviors on Person Impressions: A Multidimensional Study.
ERIC Educational Resources Information Center
Vonk, Roos
1995-01-01
Examined effects of unexpected behavioral information on person impressions. Inconsistency was manipulated with respect to Implicit Personality Theory. Found that behaviors with inconsistent evaluation implications did not affect impressions and that effects of inconsistent information depended on dimension of contrast, valence of initial…
Parameterization of Ca+2-protein interactions for molecular dynamics simulations.
Project, Elad; Nachliel, Esther; Gutman, Menachem
2008-05-01
Molecular dynamics simulations of Ca+2 ions near a protein were performed with three force fields: GROMOS96, OPLS-AA, and CHARMM22. The simulations reveal major, force-field-dependent inconsistencies in the interaction of the Ca+2 ions with the protein. The variations are attributed to the nonbonded parameterizations of the Ca+2-carboxylate interactions. The simulation results were compared to experimental data, using the Ca+2-HCOO- equilibrium as a model. The OPLS-AA force field grossly overestimates the binding affinity of the Ca+2 ions to the carboxylate, whereas the GROMOS96 and CHARMM22 force fields underestimate the stability of the complex. Optimization of the Lennard-Jones parameters for the Ca+2-carboxylate interactions was carried out, yielding new parameters which reproduce the experimental data. Copyright 2007 Wiley Periodicals, Inc.
Positive signs in massive gravity
NASA Astrophysics Data System (ADS)
Cheung, Clifford; Remmen, Grant N.
2016-04-01
We derive new constraints on massive gravity from unitarity and analyticity of scattering amplitudes. Our results apply to a general effective theory defined by Einstein gravity plus the leading soft diffeomorphism-breaking corrections. We calculate scattering amplitudes for all combinations of tensor, vector, and scalar polarizations. The high-energy behavior of these amplitudes prescribes a specific choice of couplings that ameliorates the ultraviolet cutoff, in agreement with existing literature. We then derive consistency conditions from analytic dispersion relations, which dictate positivity of certain combinations of parameters appearing in the forward scattering amplitudes. These constraints exclude all but a small island in the parameter space of ghost-free massive gravity. While the theory of the "Galileon" scalar mode alone is known to be inconsistent with positivity constraints, this is remedied in the full massive gravity theory.
Efficient estimation of Pareto model: Some modified percentile estimators.
Bhatti, Sajjad Haider; Hussain, Shahzad; Ahmad, Tanvir; Aslam, Muhammad; Aftab, Muhammad; Raza, Muhammad Ali
2018-01-01
The article proposes three modified percentile estimators for parameter estimation of the Pareto distribution. These modifications are based on median, geometric mean and expectation of empirical cumulative distribution function of first-order statistic. The proposed modified estimators are compared with traditional percentile estimators through a Monte Carlo simulation for different parameter combinations with varying sample sizes. Performance of different estimators is assessed in terms of total mean square error and total relative deviation. It is determined that modified percentile estimator based on expectation of empirical cumulative distribution function of first-order statistic provides efficient and precise parameter estimates compared to other estimators considered. The simulation results were further confirmed using two real life examples where maximum likelihood and moment estimators were also considered.
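The traditional two-point percentile estimator that these modifications build on can be sketched as follows (the modified variants replace the plain sample quantiles with median, geometric-mean, or first-order-statistic based quantities):

```python
import numpy as np

def pareto_percentile_fit(x, p1=0.25, p2=0.75):
    """Classical percentile estimator for the Pareto(alpha, xm) distribution
    with CDF F(x) = 1 - (xm / x)^alpha: match two sample quantiles q1, q2
    at probabilities p1 < p2 and solve for the two parameters."""
    q1, q2 = np.quantile(x, [p1, p2])
    alpha = np.log((1 - p1) / (1 - p2)) / np.log(q2 / q1)
    xm = q1 * (1 - p1) ** (1 / alpha)
    return alpha, xm
```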
Quantifying Key Climate Parameter Uncertainties Using an Earth System Model with a Dynamic 3D Ocean
NASA Astrophysics Data System (ADS)
Olson, R.; Sriver, R. L.; Goes, M. P.; Urban, N.; Matthews, D.; Haran, M.; Keller, K.
2011-12-01
Climate projections hinge critically on uncertain climate model parameters such as climate sensitivity, vertical ocean diffusivity and anthropogenic sulfate aerosol forcings. Climate sensitivity is defined as the equilibrium global mean temperature response to a doubling of atmospheric CO2 concentrations. Vertical ocean diffusivity parameterizes sub-grid scale ocean vertical mixing processes. These parameters are typically estimated using Intermediate Complexity Earth System Models (EMICs) that lack a full 3D representation of the oceans, thereby neglecting the effects of mixing on ocean dynamics and meridional overturning. We improve on these studies by employing an EMIC with a dynamic 3D ocean model to estimate these parameters. We carry out historical climate simulations with the University of Victoria Earth System Climate Model (UVic ESCM) varying parameters that affect climate sensitivity, vertical ocean mixing, and effects of anthropogenic sulfate aerosols. We use a Bayesian approach whereby the likelihood of each parameter combination depends on how well the model simulates surface air temperature and upper ocean heat content. We use a Gaussian process emulator to interpolate the model output to an arbitrary parameter setting. We use a Markov chain Monte Carlo method to estimate the posterior probability distribution function (pdf) of these parameters. We explore the sensitivity of the results to prior assumptions about the parameters. In addition, we estimate the relative skill of different observations to constrain the parameters. We quantify the uncertainty in parameter estimates stemming from climate variability, model and observational errors. We explore the sensitivity of key decision-relevant climate projections to these parameters. We find that climate sensitivity and vertical ocean diffusivity estimates are consistent with previously published results.
The climate sensitivity pdf is strongly affected by the prior assumptions, and by the scaling parameter for the aerosols. The estimation method is computationally fast and can be used with more complex models where climate sensitivity is diagnosed rather than prescribed. The parameter estimates can be used to create probabilistic climate projections using the UVic ESCM model in future studies.
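The MCMC step can be sketched with a random-walk Metropolis sampler; in the study the log-posterior would be evaluated through the Gaussian process emulator rather than by running the model:

```python
import numpy as np

def metropolis(log_post, theta0, n=5000, step=0.5, seed=0):
    """Random-walk Metropolis: propose a Gaussian step and accept with
    probability min(1, exp(lp_prop - lp))."""
    rng = np.random.default_rng(seed)
    theta = np.atleast_1d(np.asarray(theta0, float))
    lp = log_post(theta)
    chain = []
    for _ in range(n):
        prop = theta + step * rng.standard_normal(theta.size)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain.append(theta.copy())
    return np.array(chain)
```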
Bias-Corrected Estimation of Noncentrality Parameters of Covariance Structure Models
ERIC Educational Resources Information Center
Raykov, Tenko
2005-01-01
A bias-corrected estimator of noncentrality parameters of covariance structure models is discussed. The approach represents an application of the bootstrap methodology for purposes of bias correction, and utilizes the relation between average of resample conventional noncentrality parameter estimates and their sample counterpart. The…
NASA Astrophysics Data System (ADS)
Amiri-Simkooei, A. R.
2018-01-01
Three-dimensional (3D) coordinate transformations, generally consisting of origin shifts, axes rotations, scale changes, and skew parameters, are widely used in many geomatics applications. Although in some geodetic applications simplified transformation models are used based on the assumption of small transformation parameters, in other fields of application such parameters are indeed large. The algorithms of two recent papers on the weighted total least-squares (WTLS) problem are used for the 3D coordinate transformation. The methodology can be applied even when the transformation parameters are large, and no approximate values of the parameters are required. Direct linearization of the rotation and scale parameters is thus not required. The WTLS formulation is employed to take into consideration errors in both the start and target systems in the estimation of the transformation parameters. Two of the well-known 3D transformation methods, namely affine (12, 9, and 8 parameters) and similarity (7 and 6 parameters) transformations, can be handled using the WTLS theory subject to hard constraints. Because the method can be formulated by the standard least-squares theory with constraints, the covariance matrix of the transformation parameters can directly be provided. The above characteristics of the 3D coordinate transformation are implemented in the presence of different variance components, which are estimated using least-squares variance component estimation. In particular, the estimability of the variance components is investigated. The efficacy of the proposed formulation is verified on two real data sets.
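An ordinary least-squares version of the 7-parameter similarity fit has a closed form (the Kabsch/Procrustes solution), sketched below; the WTLS formulation in the paper generalizes this by weighting errors in both the start and target coordinate systems:

```python
import numpy as np

def similarity_fit(src, dst):
    """Closed-form LS estimate of scale s, rotation R, translation t with
    dst_i ~= s * R @ src_i + t (7-parameter similarity transformation)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    S, D = src - mu_s, dst - mu_d
    U, sig, Vt = np.linalg.svd(S.T @ D)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    s = (sig @ np.array([1.0, 1.0, d])) / np.sum(S * S)
    t = mu_d - s * R @ mu_s
    return s, R, t
```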
Interseismic Deformation on the San Andreas Fault System
NASA Astrophysics Data System (ADS)
Segall, P.
2001-12-01
Interseismic deformation measurements are most often interpreted in terms of steady slip on buried elastic dislocations. While such models often yield slip-rates that are in reasonable accord with geologic observations, they are: 1) inconsistent with observations of transient deformation following large earthquakes, and 2) tend to predict locking depths significantly deeper than recent large earthquakes. An alternate two-dimensional model of repeating earthquakes that break an elastic plate of thickness H, overlying a viscoelastic half-space with relaxation time t_R (Savage and Prescott, 1978), involves five parameters: H, t_R, t, T, and the slip-rate ṡ, where t is the time since the last quake and T is the earthquake cycle time. Many parts of the SAF system involve multiple parallel faults, which further increases the number of parameters to be estimated. All hope is not lost, however, if we make use of a priori constraints on slip-rate from geologic studies, and utilize measurements of time dependent strain following the 1906 earthquake, in addition to the present day spatial distribution of deformation-rate. GPS data from the Carrizo Plain segment of the SAF imply a considerably larger relaxation time than inferred from the post-1906 strain-rate transient. This indicates that either the crustal structure differs significantly between northern and central California, or that the simple model is deficient, either due to time-dependent down-dip slip following large earthquakes or non-linear rheology. To test the effect of regional variations in H and t_R, I analyze data from the northern San Francisco Bay area (Prescott et al, 2001, JGR), and include the SAF, the Hayward-Rogers Creek (HRC), and Concord-Green Valley faults (CGV).
Non-linear optimization using simulated annealing and constrained non-linear least squares yields an optimal model with: H ~ 10 km, t_R ~ 34 years, T_SAF = 205 years, ṡ_SAF ~ 18 mm/yr, t_HRC = 225 years, T_HRC = 630 years, ṡ_HRC ~ 13 mm/yr, and ṡ_CGV ~ 9 mm/yr. Adding the constraint that the coseismic slip in major Hayward and San Andreas events not exceed 3.0 m and 7.0 m, respectively, yields an optimal model with: H ~ 18 km, t_R ~ 36 years, T_SAF = 280 years, ṡ_SAF = 25 mm/yr, t_HRC = 225 years, T_HRC = 276 years, ṡ_HRC ~ 11 mm/yr, and ṡ_CGV ~ 9 mm/yr. These estimates are in reasonable accord with independent paleoseismic results. The conclusion of this pilot study is that by combining the present day deformation field, post-1906 strain data, and geologic bounds on slip-rate and maximum earthquake slip, we can estimate parameters of considerable geophysical interest, including time since past quakes and average recurrence interval.
Cosmological parameter estimation using Particle Swarm Optimization
NASA Astrophysics Data System (ADS)
Prasad, J.; Souradeep, T.
2014-03-01
Constraining the parameters of a theoretical model from observational data is an important exercise in cosmology. There are many theoretically motivated models which demand a greater number of cosmological parameters than the standard model of cosmology uses, and they make the problem of parameter estimation challenging. It is common practice to employ Bayesian formalism for parameter estimation, for which, in general, the likelihood surface is probed. For the standard cosmological model with six parameters, the likelihood surface is quite smooth and does not have local maxima, and sampling-based methods like Markov chain Monte Carlo (MCMC) are quite successful. However, when there are a large number of parameters or the likelihood surface is not smooth, other methods may be more effective. In this paper, we demonstrate the application of another method, inspired by artificial intelligence, called Particle Swarm Optimization (PSO) for estimating cosmological parameters from Cosmic Microwave Background (CMB) data taken from the WMAP satellite.
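A bare-bones PSO of the kind described might look like this (a sketch with standard inertia, cognitive, and social weights, not the authors' tuned implementation); in the paper, f would be the negative log-likelihood of the CMB data:

```python
import numpy as np

def pso(f, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal Particle Swarm Optimization: each particle is a point in
    parameter space, pulled toward its own best position and the swarm's."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    x = rng.uniform(lo, hi, (n_particles, lo.size))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)             # keep particles inside bounds
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, float(np.min(pbest_f))
```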
Nonlinear, discrete flood event models, 1. Bayesian estimation of parameters
NASA Astrophysics Data System (ADS)
Bates, Bryson C.; Townley, Lloyd R.
1988-05-01
In this paper (Part 1), a Bayesian procedure for parameter estimation is applied to discrete flood event models. The essence of the procedure is the minimisation of a sum of squares function for models in which the computed peak discharge is nonlinear in terms of the parameters. This objective function is dependent on the observed and computed peak discharges for several storms on the catchment, information on the structure of observation error, and prior information on parameter values. The posterior covariance matrix gives a measure of the precision of the estimated parameters. The procedure is demonstrated using rainfall and runoff data from seven Australian catchments. It is concluded that the procedure is a powerful alternative to conventional parameter estimation techniques in situations where a number of floods are available for parameter estimation. Parts 2 and 3 will discuss the application of statistical nonlinearity measures and prediction uncertainty analysis to calibrated flood models. Bates (this volume) and Bates and Townley (this volume).
Quantitative body DW-MRI biomarkers uncertainty estimation using unscented wild-bootstrap.
Freiman, M; Voss, S D; Mulkern, R V; Perez-Rossello, J M; Warfield, S K
2011-01-01
We present a new method for the uncertainty estimation of diffusion parameters for quantitative body DW-MRI assessment. Diffusion parameter uncertainty estimation from DW-MRI is necessary for clinical applications that use these parameters to assess pathology. However, uncertainty estimation using traditional techniques requires repeated acquisitions, which is undesirable in routine clinical use. Model-based bootstrap techniques, for example, assume an underlying linear model for residual rescaling and cannot be utilized directly for body diffusion parameter uncertainty estimation due to the non-linearity of the body diffusion model. To offset this limitation, our method uses the Unscented transform to compute the residual rescaling parameters from the non-linear body diffusion model, and then applies the wild-bootstrap method to infer the body diffusion parameter uncertainty. Validation through phantom and human subject experiments shows that our method correctly identifies the regions with higher uncertainty in body DW-MRI model parameters, with a relative error of -36% in the uncertainty values.
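The wild-bootstrap resampling step can be sketched generically; simple Rademacher sign-flipping of the residuals is used here, whereas the paper derives the residual rescaling for the non-linear diffusion model from the Unscented transform:

```python
import numpy as np

def wild_bootstrap_std(t, y, fit, n_boot=300, seed=0):
    """Estimate parameter uncertainty from a single acquisition: refit the
    model to the fitted curve plus sign-flipped residuals and report the
    spread. `fit` maps (t, y) to (parameters, fitted_values)."""
    rng = np.random.default_rng(seed)
    theta, y_hat = fit(t, y)
    resid = y - y_hat
    boots = []
    for _ in range(n_boot):
        signs = rng.choice([-1.0, 1.0], size=resid.size)  # Rademacher weights
        th_b, _ = fit(t, y_hat + signs * resid)
        boots.append(th_b)
    return theta, np.asarray(boots).std(axis=0)
```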
Estimating parameter of influenza transmission using regularized least square
NASA Astrophysics Data System (ADS)
Nuraini, N.; Syukriah, Y.; Indratno, S. W.
2014-02-01
The transmission process of influenza can be represented mathematically as a system of non-linear differential equations. In this model the transmission of influenza is determined by the contact-rate parameter between infected and susceptible hosts. This parameter is estimated using a regularized least-squares method, where the Finite Element Method and Euler Method are used to approximate the solution of the SIR differential equations. New influenza infection data from the CDC are used to assess the effectiveness of the method. The estimated parameter represents the daily contact-rate proportion of the transmission probability, which influences the number of people infected by influenza. The relation between the estimated parameter and the number of people infected by influenza is measured by the coefficient of correlation. The numerical results show positive correlation between the estimated parameters and the number of infected people.
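The regularized least-squares step can be sketched in its generic linear (Tikhonov) form; the design matrix here stands in for the finite element discretization of the SIR system:

```python
import numpy as np

def regularized_lsq(A, b, lam=1e-2):
    """Tikhonov-regularized least squares: argmin ||A x - b||^2 + lam ||x||^2.
    The penalty lam stabilizes the solution when the inverse problem
    (here, recovering a contact rate from incidence data) is ill-conditioned."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
```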
NASA Astrophysics Data System (ADS)
Cheung, Shao-Yong; Lee, Chieh-Han; Yu, Hwa-Lung
2017-04-01
Owing to limited hydrogeological observation data and the high levels of uncertainty within them, parameter estimation for groundwater models has been an important issue. There are many methods of parameter estimation; for example, the Kalman filter provides real-time calibration of parameters through measurements from groundwater monitoring wells, and related methods such as the Extended Kalman Filter and the Ensemble Kalman Filter are widely applied in groundwater research. However, the Kalman Filter is limited to linear systems. This study proposes a novel method, Bayesian Maximum Entropy Filtering, which accounts for data uncertainty in parameter estimation. With these two methods, we can estimate parameters from hard (certain) and soft (uncertain) data at the same time. In this study, we use Python and QGIS with the groundwater model MODFLOW, and implement the Extended Kalman Filter and Bayesian Maximum Entropy Filtering in Python for parameter estimation. This approach provides a conventional filtering method while also considering data uncertainty. The study was conducted through a numerical model experiment that combines the Bayesian maximum entropy filter with a hypothetical MODFLOW groundwater model architecture, using virtual observation wells to observe the simulated groundwater model periodically. The results showed that, by considering data uncertainty, the Bayesian maximum entropy filter provides good real-time parameter estimates.
3-D simulations of M9 earthquakes on the Cascadia Megathrust: Key parameters and uncertainty
Wirth, Erin; Frankel, Arthur; Vidale, John; Marafi, Nasser A.; Stephenson, William J.
2017-01-01
Geologic and historical records indicate that the Cascadia subduction zone is capable of generating large, megathrust earthquakes up to magnitude 9. The last great Cascadia earthquake occurred in 1700, and thus there is no direct measure on the intensity of ground shaking or specific rupture parameters from seismic recordings. We use 3-D numerical simulations to generate broadband (0-10 Hz) synthetic seismograms for 50 M9 rupture scenarios on the Cascadia megathrust. Slip consists of multiple high-stress drop subevents (~M8) with short rise times on the deeper portion of the fault, superimposed on a background slip distribution with longer rise times. We find a >4x variation in the intensity of ground shaking depending upon several key parameters, including the down-dip limit of rupture, the slip distribution and location of strong-motion-generating subevents, and the hypocenter location. We find that extending the down-dip limit of rupture to the top of the non-volcanic tremor zone results in a ~2-3x increase in peak ground acceleration for the inland city of Seattle, Washington, compared to a completely offshore rupture. However, our simulations show that allowing the rupture to extend to the up-dip limit of tremor (i.e., the deepest rupture extent in the National Seismic Hazard Maps), even when tapering the slip to zero at the down-dip edge, results in multiple areas of coseismic coastal uplift. This is inconsistent with coastal geologic evidence (e.g., buried soils, submerged forests), which suggests predominantly coastal subsidence for the 1700 earthquake and previous events. Defining the down-dip limit of rupture as the 1 cm/yr locking contour (i.e., mostly offshore) results in primarily coseismic subsidence at coastal sites. We also find that the presence of deep subevents can produce along-strike variations in subsidence and ground shaking along the coast. 
Our results demonstrate the wide range of possible ground motions from an M9 megathrust earthquake in Cascadia, and the potential to further constrain key rupture parameters using geologic and geophysical observations, ultimately improving our estimation of seismic hazard associated with the Cascadia megathrust.
2D Fast Vessel Visualization Using a Vessel Wall Mask Guiding Fine Vessel Detection
Raptis, Sotirios; Koutsouris, Dimitris
2010-01-01
The paper addresses the fine retinal-vessel detection issue that is faced in diagnostic applications and aims at assisting in better recognizing fine vessel anomalies in 2D. Our innovation relies on separating key visual features that vessels exhibit in order to make the diagnosis of eventual retinopathologies easier to detect. This allows focusing on vessel segments which present fine changes detectable at different sampling scales. We advocate that these changes can be addressed as subsequent stages of the same vessel detection procedure. We first carry out an initial estimate of the basic vessel-wall network, define the main wall-body, and then try to approach the ridges and branches of the vasculature using fine detection. Fine vessel screening looks into local structural inconsistencies in vessel properties, into noise, or into unexpected intensity variations observed inside pre-known vessel-body areas. The vessels are first modelled sufficiently, but not precisely, by their walls with a tubular model-structure that is the result of an initial segmentation. This provides a chart of likely Vessel Wall Pixels (VWPs), yielding a form of likelihood vessel map mainly based on gradient filter intensity and spatial arrangement parameters (e.g., linear consistency). Specific vessel parameters (centerline, width, location, fall-away rate, main orientation) are post-computed by convolving the image with a set of pre-tuned spatial filters called Matched Filters (MFs). These are easily computed as Gaussian-like 2D forms that use a limited range of sub-optimal parameters adjusted to the dominant vessel characteristics obtained by Spatial Grey Level Difference statistics, limiting the range of search to vessel widths of 16, 32, and 64 pixels. Sparse pixels are effectively eliminated by applying a limited-range Hough Transform (HT) or region growing.
Major benefits are the limited range of parameters, the reduction of the post-convolution search space to masked regions only (representing almost 2% of the 2D volume), and a good speed versus accuracy trade-off. Results show the potential of our approach in terms of detection time, ROC analysis, and accuracy of vessel pixel (VP) detection. PMID:20706682
NASA Astrophysics Data System (ADS)
Kavetski, Dmitri; Clark, Martyn P.
2010-10-01
Despite the widespread use of conceptual hydrological models in environmental research and operations, they remain frequently implemented using numerically unreliable methods. This paper considers the impact of the time stepping scheme on model analysis (sensitivity analysis, parameter optimization, and Markov chain Monte Carlo-based uncertainty estimation) and prediction. It builds on the companion paper (Clark and Kavetski, 2010), which focused on numerical accuracy, fidelity, and computational efficiency. Empirical and theoretical analysis of eight distinct time stepping schemes for six different hydrological models in 13 diverse basins demonstrates several critical conclusions. (1) Unreliable time stepping schemes, in particular, fixed-step explicit methods, suffer from troublesome numerical artifacts that severely deform the objective function of the model. These deformations are not rare isolated instances but can arise in any model structure, in any catchment, and under common hydroclimatic conditions. (2) Sensitivity analysis can be severely contaminated by numerical errors, often to the extent that it becomes dominated by the sensitivity of truncation errors rather than the model equations. (3) Robust time stepping schemes generally produce "better behaved" objective functions, free of spurious local optima, and with sufficient numerical continuity to permit parameter optimization using efficient quasi Newton methods. When implemented within a multistart framework, modern Newton-type optimizers are robust even when started far from the optima and provide valuable diagnostic insights not directly available from evolutionary global optimizers. (4) Unreliable time stepping schemes lead to inconsistent and biased inferences of the model parameters and internal states. 
(5) Even when interactions between hydrological parameters and numerical errors provide "the right result for the wrong reason" and the calibrated model performance appears adequate, unreliable time stepping schemes make the model unnecessarily fragile in predictive mode, undermining validation assessments and operational use. Erroneous or misleading conclusions of model analysis and prediction arising from numerical artifacts in hydrological models are intolerable, especially given that robust numerics are accepted as mainstream in other areas of science and engineering. We hope that the vivid empirical findings will encourage the conceptual hydrological community to close its Pandora's box of numerical problems, paving the way for more meaningful model application and interpretation.
Parameter estimating state reconstruction
NASA Technical Reports Server (NTRS)
George, E. B.
1976-01-01
Parameter estimation is considered for systems whose entire state cannot be measured. Linear observers are designed to recover the unmeasured states to a sufficient accuracy to permit the estimation process. There are three distinct dynamics that must be accommodated in the system design: the dynamics of the plant, the dynamics of the observer, and the system updating of the parameter estimation. The latter two are designed to minimize interaction of the involved systems. These techniques are extended to weakly nonlinear systems. The application to a simulation of a space shuttle POGO system test is of particular interest. A nonlinear simulation of the system is developed, observers designed, and the parameters estimated.
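A minimal observer design of the kind described can be obtained by pole placement on the dual system; the double-integrator plant below is illustrative, not the shuttle POGO model:

```python
import numpy as np
from scipy.signal import place_poles

def observer_gain(A, C, poles):
    """Luenberger observer gain L: choose L so that the error dynamics
    e' = (A - L C) e have the desired (faster-than-plant) poles.
    Observer design is the dual of state-feedback pole placement."""
    A, C = np.asarray(A, float), np.asarray(C, float)
    return place_poles(A.T, C.T, poles).gain_matrix.T
```

The recovered state estimate x̂ can then feed the parameter estimation loop in place of the unmeasured states.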
Parameter Estimation in Atmospheric Data Sets
NASA Technical Reports Server (NTRS)
Wenig, Mark; Colarco, Peter
2004-01-01
In this study the structure tensor technique is used to estimate dynamical parameters in atmospheric data sets. The structure tensor is a common tool for estimating motion in image sequences. This technique can be extended to estimate other dynamical parameters such as diffusion constants or exponential decay rates. A general mathematical framework was developed for the direct estimation of the physical parameters that govern the underlying processes from image sequences. This estimation technique can be adapted to the specific physical problem under investigation, so it can be used in a variety of applications in trace gas, aerosol, and cloud remote sensing. As a test scenario this technique will be applied to modeled dust data. In this case vertically integrated dust concentrations were used to derive wind information. These results can then be compared with the wind vector fields that served as input to the model. Based on this analysis, a method to compute atmospheric data parameter fields will be presented.
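A minimal sketch of the least-squares optical-flow idea behind the structure tensor approach above, for the special case of a single global shift between two frames. The synthetic blob, its shift, and the window (the whole frame) are assumptions made for illustration; the full method in the abstract handles spatially varying motion and other dynamical parameters.

```python
import numpy as np

def estimate_shift(frame0, frame1):
    """Least-squares estimate of a global displacement (u, v) between two
    frames from the brightness-constancy equation Ix*u + Iy*v + It = 0.
    The 2x2 normal-equation matrix is the (spatial) structure tensor."""
    ix = np.gradient(frame0, axis=1)   # x-derivative
    iy = np.gradient(frame0, axis=0)   # y-derivative
    it = frame1 - frame0               # temporal derivative
    A = np.array([[np.sum(ix * ix), np.sum(ix * iy)],
                  [np.sum(ix * iy), np.sum(iy * iy)]])
    b = -np.array([np.sum(ix * it), np.sum(iy * it)])
    return np.linalg.solve(A, b)

# Synthetic test: a smooth Gaussian blob translated by 0.3 pixels in x.
y, x = np.mgrid[0:64, 0:64]
blob = lambda cx: np.exp(-((x - cx) ** 2 + (y - 32.0) ** 2) / 50.0)
u, v = estimate_shift(blob(30.0), blob(30.3))
```

The recovered (u, v) is close to (0.3, 0.0); accuracy degrades for large shifts, where the first-order brightness-constancy linearization breaks down.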
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chiu, Weihsueh A., E-mail: chiu.weihsueh@epa.gov; Ginsberg, Gary L., E-mail: gary.ginsberg@ct.gov
2011-06-15
This article reports on the development of a 'harmonized' PBPK model for the toxicokinetics of perchloroethylene (tetrachloroethylene or perc) in mice, rats, and humans that includes both oxidation and glutathione (GSH) conjugation of perc, the internal kinetics of the oxidative metabolite trichloroacetic acid (TCA), and the urinary excretion kinetics of the GSH conjugation metabolites N-acetylated trichlorovinyl cysteine and dichloroacetic acid. The model utilizes a wider range of in vitro and in vivo data than any previous analysis alone, with in vitro data used for initial, or 'baseline,' parameter estimates, and in vivo datasets separated into those used for 'calibration' and those used for 'evaluation.' Parameter calibration utilizes a limited Bayesian analysis involving flat priors and making inferences only using posterior modes obtained via Markov chain Monte Carlo (MCMC). As expected, the major route of elimination of absorbed perc is predicted to be exhalation as parent compound, with metabolism accounting for less than 20% of intake except in the case of mice exposed orally, in which metabolism is predicted to be slightly over 50% at lower exposures. In all three species, the concentration of perc in blood, the extent of perc oxidation, and the amount of TCA production are well estimated, with residual uncertainties of ~2-fold. However, the resulting range of estimates for the amount of GSH conjugation is quite wide in humans (~3000-fold) and mice (~60-fold). While even high-end estimates of GSH conjugation in mice are lower than estimates of oxidation, in humans the estimated rates range from much lower to much higher than rates for perc oxidation. It is unclear to what extent this range reflects uncertainty, variability, or a combination.
Importantly, by separating total perc metabolism into separate oxidative and conjugative pathways, an approach also recommended in a recent National Research Council review, this analysis reconciles the disparity between those previously published PBPK models that concluded low perc metabolism in humans and those that predicted high perc metabolism in humans. In essence, both conclusions are consistent with the data if augmented with some additional qualifications: in humans, oxidative metabolism is low, while GSH conjugation metabolism may be high or low, with uncertainty and/or interindividual variability spanning three orders of magnitude. More direct data on the internal kinetics of perc GSH conjugation, such as trichlorovinyl glutathione or trichlorovinyl cysteine in blood and/or tissues, would be needed to better characterize the uncertainty and variability in GSH conjugation in humans. Research Highlights: We analyze perchloroethylene (perc) toxicokinetics with a physiological model. Results from previous analyses lumping metabolic pathways are inconsistent. Separately tracking oxidation and conjugation pathways reconciles these results. Available data are adequate for predicting perc blood levels and oxidation by P450. High uncertainty remains for human conjugation of perc with glutathione.
Estimation of delays and other parameters in nonlinear functional differential equations
NASA Technical Reports Server (NTRS)
Banks, H. T.; Lamm, P. K. D.
1983-01-01
A spline-based approximation scheme for nonlinear nonautonomous delay differential equations is discussed. Convergence results (using dissipative type estimates on the underlying nonlinear operators) are given in the context of parameter estimation problems which include estimation of multiple delays and initial data as well as the usual coefficient-type parameters. A brief summary of some of the related numerical findings is also given.
Improved battery parameter estimation method considering operating scenarios for HEV/EV applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Jufeng; Xia, Bing; Shang, Yunlong
2016-12-22
This study presents an improved battery parameter estimation method based on typical operating scenarios in hybrid electric vehicles and pure electric vehicles. Compared with the conventional estimation methods, the proposed method takes both the constant-current charging and the dynamic driving scenarios into account, and two separate sets of model parameters are estimated through different parts of the pulse-rest test. The model parameters for the constant-charging scenario are estimated from the data in the pulse-charging periods, while the model parameters for the dynamic driving scenario are estimated from the data in the rest periods, and the length of the fitted dataset is determined by the spectrum analysis of the load current. In addition, the unsaturated phenomenon caused by the long-term resistor-capacitor (RC) network is analyzed, and the initial voltage expressions of the RC networks in the fitting functions are improved to ensure a higher model fidelity. Simulation and experiment results validated the feasibility of the developed estimation method.
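The rest-period fitting step described in the abstract above can be illustrated with a single-RC relaxation fit to the terminal voltage after current interruption. The circuit values, noise level, and use of scipy.optimize.curve_fit are assumptions for this sketch, not the authors' implementation, which additionally corrects the RC initial-voltage expressions for unsaturated long-term networks.

```python
import numpy as np
from scipy.optimize import curve_fit

def rest_voltage(t, v_inf, a, tau):
    """Terminal voltage relaxing toward the open-circuit value v_inf
    through a single RC network with amplitude a and time constant tau."""
    return v_inf - a * np.exp(-t / tau)

# Synthetic rest-period data (hypothetical values: OCV 3.7 V, 50 mV
# transient, 60 s time constant) with mild measurement noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 600.0, 301)
v = rest_voltage(t, 3.7, 0.05, 60.0) + rng.normal(0.0, 1e-4, t.size)

popt, _ = curve_fit(rest_voltage, t, v, p0=[3.6, 0.1, 30.0])
v_inf, a, tau = popt
```

A second exponential term would be added for a two-RC model; the abstract's spectrum analysis of the load current is what determines how long a rest segment to fit.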
Ruiz-Gutierrez, Viviana; Zipkin, Elise F.; Dhondt, Andre A.
2010-01-01
1. Worldwide loss of biodiversity necessitates a clear understanding of the factors driving population declines as well as informed predictions about which species and populations are at greatest risk. The biggest threat to the long-term persistence of populations is the reduction and changes in configuration of their natural habitat. 2. Inconsistencies have been noted in the responses of populations to the combined effects of habitat loss and fragmentation. These have been widely attributed to the effects of the matrix habitats in which remnant focal habitats are typically embedded. 3. We quantified the potential effects of the inter-patch matrix by estimating occupancy and colonization of forest and surrounding non-forest matrix (NF). We estimated species-specific parameters using a dynamic, multi-species hierarchical model on a bird community in southwestern Costa Rica. 4. Overall, we found higher probabilities of occupancy and colonization of forest relative to the NF across bird species, including those previously categorized as open habitat generalists not needing forest to persist. Forest dependency was a poor predictor of occupancy dynamics in our study region, largely predicting occupancy and colonization of only non-forest habitats. 5. Our results indicate that the protection of remnant forest habitats is key for the long-term persistence of all members of the bird community in this fragmented landscape, including species typically associated with open, non-forest habitats. 6. Synthesis and applications. We identified 39 bird species of conservation concern defined by having high estimates of forest occupancy, and low estimates of occupancy and colonization of non-forest. These species survive in forest but are unlikely to venture out into open, non-forested habitats and are therefore vulnerable to the effects of habitat loss and fragmentation.
Our hierarchical community-level model can be used to estimate species-specific occupancy dynamics for focal and inter-patch matrix habitats to identify which species within a community are likely to be impacted most by habitat loss and fragmentation. This model can be applied to other taxa (i.e. amphibians, mammals and insects) to estimate species and community occurrence dynamics in response to current environmental conditions and to make predictions in response to future changes in habitat configurations.
Lin, Jen-Jen; Cheng, Jung-Yu; Huang, Li-Fei; Lin, Ying-Hsiu; Wan, Yung-Liang; Tsui, Po-Hsiang
2017-05-01
The Nakagami distribution is an approximation useful to the statistics of ultrasound backscattered signals for tissue characterization. Various estimators may affect the Nakagami parameter in the detection of changes in backscattered statistics. In particular, the moment-based estimator (MBE) and maximum likelihood estimator (MLE) are two primary methods used to estimate the Nakagami parameters of ultrasound signals. This study explored the effects of the MBE and different MLE approximations on Nakagami parameter estimations. Ultrasound backscattered signals of different scatterer number densities were generated using a simulation model, and phantom experiments and measurements of human liver tissues were also conducted to acquire real backscattered echoes. Envelope signals were employed to estimate the Nakagami parameters by using the MBE, first- and second-order approximations of the MLE (MLE1 and MLE2, respectively), and the Greenwood approximation (MLEgw) for comparisons. The simulation results demonstrated that, compared with the MBE and MLE1, the MLE2 and MLEgw enabled more stable parameter estimations with small sample sizes. Notably, the required data length of the envelope signal was 3.6 times the pulse length. The phantom and tissue measurement results also showed that the Nakagami parameters estimated using the MLE2 and MLEgw could simultaneously differentiate various scatterer concentrations with lower standard deviations and reliably reflect physical meanings associated with the backscattered statistics. Therefore, the MLE2 and MLEgw are suggested as estimators for the development of Nakagami-based methodologies for ultrasound tissue characterization.
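The moment-based estimator (MBE) mentioned in the abstract above has a closed form: for envelope samples R, Omega = E[R^2] and m = (E[R^2])^2 / Var(R^2). A minimal sketch, with the synthetic sample size an arbitrary choice:

```python
import numpy as np

def nakagami_mbe(envelope):
    """Moment-based estimator of the Nakagami shape m and scale Omega:
        Omega = E[R^2],  m = (E[R^2])^2 / Var(R^2)."""
    r2 = np.asarray(envelope, dtype=float) ** 2
    omega = r2.mean()
    m = omega ** 2 / r2.var()
    return m, omega

# Sanity check on synthetic Nakagami samples with known parameters:
# a Nakagami(m, Omega) variate is the square root of a Gamma(m, Omega/m)
# variate (shape m, scale Omega/m).
rng = np.random.default_rng(1)
m_true, omega_true = 1.5, 2.0
r = np.sqrt(rng.gamma(shape=m_true, scale=omega_true / m_true, size=200_000))
m_hat, omega_hat = nakagami_mbe(r)
```

The MLE approximations (MLE1, MLE2, MLEgw) instead solve for m from the log-moment statistic; the abstract's point is that the MBE and MLE1 become unstable at the small sample sizes used in local parameter imaging.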
Development of a Response Inconsistency Scale for the Personality Inventory for DSM-5.
Keeley, Jared W; Webb, Christopher; Peterson, Destiny; Roussin, Lindsey; Flanagan, Elizabeth H
2016-01-01
The advent of a dimensional model of personality disorder included in DSM-5 has necessitated the development of a new measurement scheme, specifically a self-report questionnaire termed the Personality Inventory for DSM-5 (PID-5; Krueger, Derringer, Markon, Watson, & Skodol, 2012). However, there are many threats to the validity of a self-report measure, including response inconsistency. This study outlines the development of an inconsistency scale for the PID-5. Across both college student and clinical samples, the inconsistency scale was able to reliably differentiate real from random responding. Random responses led to increased scores on the PID-5 facets, indicating the importance of detecting inconsistent responding prior to test interpretation. Thus, this inconsistency scale could be of use to researchers and clinicians in detecting inconsistent responses to this new personality disorder measure.
NASA Astrophysics Data System (ADS)
Doury, Maxime; Dizeux, Alexandre; de Cesare, Alain; Lucidarme, Olivier; Pellot-Barakat, Claire; Bridal, S. Lori; Frouin, Frédérique
2017-02-01
Dynamic contrast-enhanced ultrasound has been proposed to monitor tumor therapy, as a complement to volume measurements. To assess the variability of perfusion parameters in ideal conditions, four consecutive test-retest studies were acquired in a mouse tumor model, using controlled injections. The impact of mathematical modeling on parameter variability was then investigated. Coefficients of variation (CV) of tissue blood volume (BV)- and tissue blood flow (BF)-based parameters were estimated inside 32 sub-regions of the tumors, comparing the log-normal (LN) model with a one-compartment model fed by an arterial input function (AIF) and improved by the introduction of a time delay parameter. Relative perfusion parameters were also estimated by normalization of the LN parameters and normalization of the one-compartment parameters estimated with the AIF, using a reference tissue (RT) region. A direct estimation (rRTd) of relative parameters, based on the one-compartment model without using the AIF, was also obtained by using the kinetics inside the RT region. Results of test-retest studies show that absolute regional parameters have high CV, whatever the approach, with median values of about 30% for BV, and 40% for BF. The positive impact of normalization was established, showing a coherent estimation of relative parameters, with reduced CV (about 20% for BV and 30% for BF using the rRTd approach). These values were significantly lower (p < 0.05) than the CV of absolute parameters. The rRTd approach provided the smallest CV and should be preferred for estimating relative perfusion parameters.
Noise normalization and windowing functions for VALIDAR in wind parameter estimation
NASA Astrophysics Data System (ADS)
Beyon, Jeffrey Y.; Koch, Grady J.; Li, Zhiwen
2006-05-01
The wind parameter estimates from a state-of-the-art 2-μm coherent lidar system located at NASA Langley, Virginia, named VALIDAR (validation lidar), were compared after normalizing the noise by its estimated power spectra via the periodogram and the linear predictive coding (LPC) scheme. The power spectra and the Doppler shift estimates were the main parameter estimates for comparison. Different types of windowing functions were implemented in the VALIDAR data-processing algorithm and their impact on the wind parameter estimates was observed. Time- and frequency-independent windowing functions such as Rectangular, Hanning, and Kaiser-Bessel, and a time- and frequency-dependent apodized windowing function, were compared. A brief account of ongoing development of a nonlinear algorithm for Doppler-shift correction follows.
A Review of Global Precipitation Data Sets: Data Sources, Estimation, and Intercomparisons
NASA Astrophysics Data System (ADS)
Sun, Qiaohong; Miao, Chiyuan; Duan, Qingyun; Ashouri, Hamed; Sorooshian, Soroosh; Hsu, Kuo-Lin
2018-03-01
In this paper, we present a comprehensive review of the data sources and estimation methods of 30 currently available global precipitation data sets, including gauge-based, satellite-related, and reanalysis data sets. We analyzed the discrepancies between the data sets from daily to annual timescales and found large differences in both the magnitude and the variability of precipitation estimates. The magnitude of annual precipitation estimates over global land deviated by as much as 300 mm/yr among the products. Reanalysis data sets had a larger degree of variability than the other types of data sets. The degree of variability in precipitation estimates also varied by region. Large differences in annual and seasonal estimates were found in tropical oceans, complex mountain areas, northern Africa, and some high-latitude regions. Overall, the variability associated with extreme precipitation estimates was slightly greater at lower latitudes than at higher latitudes. The reliability of precipitation data sets is mainly limited by the number and spatial coverage of surface stations, the satellite algorithms, and the data assimilation models. The inconsistencies described limit the capability of the products for climate monitoring, attribution, and model validation.
Koen, Joshua D.; Yonelinas, Andrew P.
2014-01-01
Although it is generally accepted that aging is associated with recollection impairments, there is considerable disagreement surrounding how healthy aging influences familiarity-based recognition. One factor that might contribute to the mixed findings regarding age differences in familiarity is the estimation method used to quantify the two mnemonic processes. Here, this issue is examined by having a group of older adults (N = 39) between 40 and 81 years of age complete Remember/Know (RK), receiver operating characteristic (ROC), and process dissociation (PD) recognition tests. Estimates of recollection, but not familiarity, showed a significant negative correlation with chronological age. Inconsistent with previous findings, the estimation method did not moderate the relationship between age and estimations of recollection and familiarity. In a final analysis, recollection and familiarity were estimated as latent factors in a confirmatory factor analysis (CFA) that modeled the covariance between measures of free recall and recognition, and the results converged with the results from the RK, PD, and ROC tasks. These results are consistent with the hypothesis that episodic memory declines in older adults are primarily driven by recollection deficits, and also suggest that the estimation method plays little to no role in age-related decreases in familiarity. PMID:25485974
Estimation of Graded Response Model Parameters Using MULTILOG.
ERIC Educational Resources Information Center
Baker, Frank B.
1997-01-01
Describes an idiosyncrasy of the MULTILOG (D. Thissen, 1991) parameter estimation process discovered during a simulation study involving the graded response model. A misordering reflected in boundary function location parameter estimates resulted in a large negative contribution to the true score followed by a large positive contribution. These…
Commentary: A cautionary tale regarding use of the National Land Cover Dataset 1992
Thogmartin, Wayne E.; Gallant, Alisa L.; Knutson, Melinda G.; Fox, Timothy J.; Suarez, Manuel J.
2004-01-01
Digital land-cover data are among the most popular data sources used in ecological research and natural resource management. However, processes for accurate land-cover classification over large regions are still evolving. We identified inconsistencies in the National Land Cover Dataset 1992, the most current and available representation of land cover for the conterminous United States. We also report means to address these inconsistencies in a bird-habitat model. We used a Geographic Information System (GIS) to position a regular grid (or lattice) over the upper midwestern United States and summarized the proportion of individual land covers in each cell within the lattice. These proportions were then mapped back onto the lattice, and the resultant lattice was compared to satellite paths, state borders, and regional map classification units. We observed mapping inconsistencies at the borders between mapping regions, states, and Thematic Mapper (TM) mapping paths in the upper midwestern United States, particularly related to grassland-herbaceous, emergent-herbaceous wetland, and small-grain land covers. We attributed these discrepancies to differences in image dates between mapping regions, suboptimal image dates for distinguishing certain land-cover types, lack of suitable ancillary data for improving discrimination for rare land covers, and possibly differences among image interpreters. To overcome these inconsistencies for the purpose of modeling regional populations of birds, we combined grassland-herbaceous and pasture-hay land-cover classes and excluded the use of emergent-herbaceous and small-grain land covers. We recommend that users of digital land-cover data conduct similar assessments for other regions before using these data for habitat evaluation.
Further, caution is advised in using these data in the analysis of regional land-cover change because it is not likely that future digital land-cover maps will repeat the same problems, thus resulting in biased estimates of change.
A Bayesian approach to parameter and reliability estimation in the Poisson distribution.
NASA Technical Reports Server (NTRS)
Canavos, G. C.
1972-01-01
For life testing procedures, a Bayesian analysis is developed with respect to a random intensity parameter in the Poisson distribution. Bayes estimators are derived for the Poisson parameter and the reliability function based on uniform and gamma prior distributions of that parameter. A Monte Carlo procedure is implemented to make possible an empirical mean-squared error comparison between Bayes and existing minimum variance unbiased, as well as maximum likelihood, estimators. As expected, the Bayes estimators have mean-squared errors that are appreciably smaller than those of the other two.
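The gamma-prior case described in the abstract above admits a closed-form posterior: for observations x1..xn ~ Poisson(lambda) and a Gamma(a, b) prior (shape a, rate b), the posterior is Gamma(a + sum(x), b + n), so the squared-error-loss Bayes estimator is the posterior mean. A sketch of that estimator and a Monte Carlo mean-squared-error comparison against the maximum likelihood estimator (the sample mean), in the spirit of the abstract; the hyperparameters and true intensity are illustrative choices:

```python
import numpy as np

def poisson_bayes_estimate(x, a, b):
    """Posterior-mean Bayes estimator of a Poisson intensity under a
    Gamma(a, b) prior: posterior Gamma(a + sum(x), b + n) has mean
    (a + sum(x)) / (b + n)."""
    x = np.asarray(x)
    return (a + x.sum()) / (b + x.size)

# Monte Carlo MSE comparison for true intensity lam = 2.0.  Note the
# prior mean a/b = 2.0 happens to equal lam, a choice favorable to Bayes.
rng = np.random.default_rng(2)
lam, n, reps = 2.0, 10, 20_000
mse_bayes = mse_mle = 0.0
for _ in range(reps):
    x = rng.poisson(lam, n)
    mse_bayes += (poisson_bayes_estimate(x, a=4.0, b=2.0) - lam) ** 2
    mse_mle += (x.mean() - lam) ** 2
mse_bayes /= reps
mse_mle /= reps
```

Here the MLE's MSE is lam/n = 0.2, while the Bayes estimator shrinks toward the prior mean and achieves a smaller MSE, matching the abstract's finding.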
Infrared thermal imaging of the inner canthus of the eye as an estimator of body core temperature.
Teunissen, L P J; Daanen, H A M
2011-01-01
Several studies suggest that the temperature of the inner canthus of the eye (T(ca)), determined with infrared thermal imaging, is an appropriate method for core temperature estimation in mass screening of fever. However, these studies used the error-prone tympanic temperature as a reference. Therefore, we compared T(ca) to oesophageal temperature (T(es)) as gold standard in 10 subjects during four conditions: rest, exercise, recovery and passive heating. T(ca) and T(es) differed significantly during all conditions (mean ΔT(es) - T(ca) 1.80 ± 0.89°C) and their relationship was inconsistent between conditions. Also within the rest condition alone, intersubject variability was too large for a reliable estimation of core temperature. This casts doubt on the use of T(ca) as a technique for core temperature estimation, although generalization of these results to fever detection should be verified experimentally using febrile patients.
Automated verification of flight software. User's manual
NASA Technical Reports Server (NTRS)
Saib, S. H.
1982-01-01
AVFS (Automated Verification of Flight Software), a collection of tools for analyzing source programs written in FORTRAN and AED, is documented. The quality and the reliability of flight software are improved by: (1) indented listings of source programs, (2) static analysis to detect inconsistencies in the use of variables and parameters, (3) automated documentation, (4) instrumentation of source code, (5) retesting guidance, (6) analysis of assertions, (7) symbolic execution, (8) generation of verification conditions, and (9) simplification of verification conditions. Use of AVFS in the verification of flight software is described.
Experiments and scaling laws for catastrophic collisions. [of asteroids
NASA Technical Reports Server (NTRS)
Fujiwara, A.; Cerroni, P.; Davis, D.; Ryan, E.; Di Martino, M.
1989-01-01
The existing data on shattering impacts are reviewed using natural silicate, ice, and cement-mortar targets. A comprehensive data base containing the most important parameters describing these experiments was prepared. The collisional energy needed to shatter consolidated homogeneous targets and the ensuing fragment size distributions have been well studied experimentally. However, major gaps exist in the data on fragment velocity and rotational distributions, as well as collisional energy partitioning for these targets. Current scaling laws lead to predicted outcomes of asteroid collisions that are inconsistent with interpretations of astronomical data.
24 CFR 3500.13 - Relation to State laws.
Code of Federal Regulations, 2010 CFR
2010-04-01
.... (1) The Secretary may not determine that a State law or regulation is inconsistent with any provision... affiliated business arrangements are inconsistent with RESPA or this part, the Secretary may not construe... that are inconsistent with RESPA or this part are preempted to the extent of the inconsistency. However...
ERIC Educational Resources Information Center
Dwairy, Marwan
2010-01-01
Inconsistency in parenting is a factor that may influence children's mental health. A questionnaire, measuring three parental inconsistencies (temporal, situational, and father-mother inconsistency) was administered to adolescents in nine countries to assess its association with adolescents' psychological disorders. The results show that parental…
SOFT: a synthetic synchrotron diagnostic for runaway electrons
NASA Astrophysics Data System (ADS)
Hoppe, M.; Embréus, O.; Tinguely, R. A.; Granetz, R. S.; Stahl, A.; Fülöp, T.
2018-02-01
Improved understanding of the dynamics of runaway electrons can be obtained by measurement and interpretation of their synchrotron radiation emission. Models for synchrotron radiation emitted by relativistic electrons are well established, but the question of how various geometric effects—such as magnetic field inhomogeneity and camera placement—influence the synchrotron measurements and their interpretation remains open. In this paper we address this issue by simulating synchrotron images and spectra using the new synthetic synchrotron diagnostic tool SOFT (Synchrotron-detecting Orbit Following Toolkit). We identify the key parameters influencing the synchrotron radiation spot and present scans in those parameters. Using a runaway electron distribution function obtained by Fokker-Planck simulations for parameters from an Alcator C-Mod discharge, we demonstrate that the corresponding synchrotron image is well-reproduced by SOFT simulations, and we explain how it can be understood in terms of the parameter scans. Geometric effects are shown to significantly influence the synchrotron spectrum, and we show that inherent inconsistencies in a simple emission model (i.e. not modeling detection) can lead to incorrect interpretation of the images.
Effect of heating rate on kinetic parameters of β-irradiated Li2B4O7:Cu,Ag,P in TSL measurements
NASA Astrophysics Data System (ADS)
Türkler Ege, A.; Ekdal, E.; Karali, T.; Can, N.; Prokic, M.
2007-03-01
The effect of heating rate on the thermally stimulated luminescence (TSL) emission due to the temperature lag (TLA) between the TSL material and the heating element has been investigated using Li2B4O7:Cu,Ag,P dosimetric materials. The TLA becomes significant when the material is heated at high heating rates. TSL glow curves of Li2B4O7:Cu,Ag,P material showed two main peaks after β-irradiation. The kinetic parameters, namely activation energy (E) and frequency factor (s) associated with the high temperature main peak of Li2B4O7:Cu,Ag,P were determined using the method of various heating rates (VHR), in which heating rates from 1 to 40 K s^-1 were used. It is assumed that non-ideal heat transfer between the heater and the material may cause significant inconsistency of kinetic parameter values obtained with different methods. The effect of TLA on kinetic parameters of the dosimeter was examined.
NASA Astrophysics Data System (ADS)
Simon, E.; Bertino, L.; Samuelsen, A.
2011-12-01
Combined state-parameter estimation in ocean biogeochemical models with ensemble-based Kalman filters is a challenging task due to the non-linearity of the models, the constraints of positiveness that apply to the variables and parameters, and the non-Gaussian distribution of the variables in which they result. Furthermore, these models are sensitive to numerous parameters that are poorly known. Previous works [1] demonstrated that the Gaussian anamorphosis extensions of ensemble-based Kalman filters were relevant tools to perform combined state-parameter estimation in such non-Gaussian framework. In this study, we focus on the estimation of the grazing preferences parameters of zooplankton species. These parameters are introduced to model the diet of zooplankton species among phytoplankton species and detritus. They are positive values and their sum is equal to one. Because the sum-to-one constraint cannot be handled by ensemble-based Kalman filters, a reformulation of the parameterization is proposed. We investigate two types of changes of variables for the estimation of sum-to-one constrained parameters. The first one is based on Gelman [2] and leads to the estimation of normal distributed parameters. The second one is based on the representation of the unit sphere in spherical coordinates and leads to the estimation of parameters with bounded distributions (triangular or uniform). These formulations are illustrated and discussed in the framework of twin experiments realized in the 1D coupled model GOTM-NORWECOM with Gaussian anamorphosis extensions of the deterministic ensemble Kalman filter (DEnKF). [1] Simon E., Bertino L.: Gaussian anamorphosis extension of the DEnKF for combined state and parameter estimation: application to a 1D ocean ecosystem model. Journal of Marine Systems, 2011. doi:10.1016/j.jmarsys.2011.07.007 [2] Gelman A.: Method of Moments Using Monte Carlo Simulation. Journal of Computational and Graphical Statistics, 4, 1, 36-54, 1995.
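One way to realize a spherical-coordinates reparameterization like the one mentioned in the abstract above is to square the coordinates of a point on the unit sphere, so that n-1 unconstrained angles map to n positive weights summing to one. This sketch is an assumption about the exact construction, which the abstract does not spell out:

```python
import numpy as np

def angles_to_simplex(theta):
    """Map n-1 angles to n positive weights with unit sum via squared
    spherical coordinates: p1 = cos^2(t1), p2 = sin^2(t1)cos^2(t2), ...,
    pn = sin^2(t1)...sin^2(t_{n-1}).  A filter can then update the
    unconstrained angles while the weights stay on the simplex."""
    theta = np.asarray(theta, dtype=float)
    p, sin_prod = [], 1.0
    for th in theta:
        p.append(sin_prod * np.cos(th) ** 2)
        sin_prod *= np.sin(th) ** 2
    p.append(sin_prod)  # last weight absorbs the remaining mass
    return np.array(p)

# Three angles -> four grazing-preference weights (hypothetical values).
w = angles_to_simplex([0.7, 1.1, 0.3])
```

The sum telescopes to exactly one for any angles, which is why this kind of change of variables removes the sum-to-one constraint from the estimation problem.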
Relative effects of survival and reproduction on the population dynamics of emperor geese
Schmutz, Joel A.; Rockwell, Robert F.; Petersen, Margaret R.
1997-01-01
Populations of emperor geese (Chen canagica) in Alaska declined sometime between the mid-1960s and the mid-1980s and have increased little since. To promote recovery of this species to former levels, managers need to know how much their perturbations of survival and/or reproduction would affect population growth rate (λ). We constructed an individual-based population model to evaluate the relative effect of altering mean values of various survival and reproductive parameters on λ and fall age structure (AS, defined as the proportion of juveniles), assuming additive rather than compensatory relations among parameters. Altering survival of adults had markedly greater relative effects on λ than did equally proportionate changes in either juvenile survival or reproductive parameters. We found the opposite pattern for relative effects on AS. Due to concerns about bias in the initial parameter estimates used in our model, we used 5 additional sets of parameter estimates with this model structure. We found that estimates of survival based on aerial survey data gathered each fall resulted in models that corresponded more closely to independent estimates of λ than did models that used mark-recapture estimates of survival. This disparity suggests that mark-recapture estimates of survival are biased low. To further explore how parameter estimates affected estimates of λ, we used values of survival and reproduction found in other goose species, and we examined the effect of an hypothesized correlation between an individual's clutch size and the subsequent survival of her young. The rank order of parameters in their relative effects on λ was consistent for all 6 parameter sets we examined. The observed variation in relative effects on λ among the 6 parameter sets is indicative of how relative effects on λ may vary among goose populations.
With this knowledge of the relative effects of survival and reproductive parameters on λ, managers can make more informed decisions about which parameters to influence through management or to target for future study.
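The kind of sensitivity comparison the abstract describes can be sketched with a toy two-stage (juvenile/adult) projection matrix; the survival and fecundity values below are hypothetical illustrations, not the paper's emperor goose estimates.

```python
# Toy two-stage matrix model: lambda is the dominant eigenvalue of the
# projection matrix [[0, f], [s_juv, s_adult]]. All parameter values
# here are made up for illustration.

def growth_rate(s_juv, s_adult, fecundity, n_iter=500):
    """Dominant eigenvalue (lambda) of the 2x2 projection matrix,
    found by power iteration."""
    a = [[0.0, fecundity], [s_juv, s_adult]]
    v = [1.0, 1.0]
    lam = 1.0
    for _ in range(n_iter):
        w = [a[0][0] * v[0] + a[0][1] * v[1],
             a[1][0] * v[0] + a[1][1] * v[1]]
        lam = max(w)                 # max-norm of a positive vector
        v = [w[0] / lam, w[1] / lam]
    return lam

# Equal 5% boosts: adult survival moves lambda more than juvenile
# survival does, echoing the ranking described in the abstract.
base     = growth_rate(0.5, 0.85, 0.6)
up_adult = growth_rate(0.5, 0.85 * 1.05, 0.6)
up_juv   = growth_rate(0.5 * 1.05, 0.85, 0.6)
```

With these particular values the adult-survival perturbation raises λ roughly three times as much as the same proportional juvenile-survival perturbation.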
Reference tissue modeling with parameter coupling: application to a study of SERT binding in HIV
NASA Astrophysics Data System (ADS)
Endres, Christopher J.; Hammoud, Dima A.; Pomper, Martin G.
2011-04-01
When applicable, it is generally preferred to evaluate positron emission tomography (PET) studies using a reference tissue-based approach, as that avoids the need for invasive arterial blood sampling. However, most reference tissue methods have been shown to have a bias that depends on the level of tracer binding, and the variability of parameter estimates may be substantially affected by noise level. In a study of serotonin transporter (SERT) binding in HIV dementia, it was determined that applying parameter coupling to the simplified reference tissue model (SRTM) reduced the variability of parameter estimates and yielded the most significant between-group differences in SERT binding. The use of parameter coupling makes the application of SRTM more consistent with conventional blood input models and reduces the total number of fitted parameters, and thus should yield more robust parameter estimates. Here, we provide a detailed evaluation of the application of parameter constraint and parameter coupling to [11C]DASB PET studies. Five quantitative methods, including three methods that constrain the reference tissue clearance (kr2) to a common value across regions, were applied to the clinical and simulated data to compare measurement of the tracer binding potential (BPND). Compared with standard SRTM, either coupling of kr2 across regions or constraining kr2 to a first-pass estimate improved the sensitivity of SRTM to measuring a significant difference in BPND between patients and controls. Parameter coupling was particularly effective in reducing the variance of parameter estimates, which was less than 50% of the variance obtained with standard SRTM. A linear approach was also improved when constraining kr2 to a first-pass estimate, although the SRTM-based methods yielded stronger significant differences when applied to the clinical study.
This work shows that parameter coupling reduces the variance of parameter estimates and may better discriminate between-group differences in specific binding.
Table look-up estimation of signal and noise parameters from quantized observables
NASA Technical Reports Server (NTRS)
Vilnrotter, V. A.; Rodemich, E. R.
1986-01-01
A table look-up algorithm for estimating underlying signal and noise parameters from quantized observables is examined. A general mathematical model is developed, and a look-up table designed specifically for estimating parameters from four-bit quantized data is described. Estimator performance is evaluated both analytically and by means of numerical simulation, and an example is provided to illustrate the use of the look-up table for estimating signal-to-noise ratios commonly encountered in Voyager-type data.
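The general idea of table look-up parameter estimation can be sketched as follows; the 4-bit quantizer design, the calibration statistic, and all numeric values are assumptions for illustration, not the Voyager-era table described in the report.

```python
# Sketch of table look-up estimation from quantized observables:
# build a table offline mapping a statistic of the quantized data to
# the underlying noise parameter, then invert it on new data.
import random

def quantize4(x, step=0.25):
    """Hypothetical uniform 4-bit quantizer: 16 levels on [-8, 7]*step."""
    level = max(-8, min(7, int(round(x / step))))
    return level * step

def mean_abs_quantized(sigma, n=20000):
    """Expected |quantized sample| for zero-mean Gaussian noise of
    scale sigma, estimated by Monte Carlo with a fixed seed."""
    rng = random.Random(0)
    return sum(abs(quantize4(rng.gauss(0.0, sigma))) for _ in range(n)) / n

# Offline calibration: statistic -> underlying sigma.
grid = [0.2 + 0.1 * i for i in range(15)]
table = [(mean_abs_quantized(s), s) for s in grid]

def estimate_sigma(samples):
    """Invert the table: pick the sigma whose predicted statistic is
    closest to the observed one."""
    stat = sum(abs(quantize4(x)) for x in samples) / len(samples)
    return min(table, key=lambda t: abs(t[0] - stat))[1]

rng = random.Random(1)
noisy = [rng.gauss(0.0, 0.8) for _ in range(5000)]
sigma_hat = estimate_sigma(noisy)   # should land near 0.8
```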
Motion-induced phase error estimation and correction in 3D diffusion tensor imaging.
Van, Anh T; Hernando, Diego; Sutton, Bradley P
2011-11-01
A multishot data acquisition strategy is one way to mitigate B0 distortion and T2∗ blurring in high-resolution diffusion-weighted magnetic resonance imaging experiments. However, object motion that differs from shot to shot causes phase inconsistencies in the data, leading to significant image artifacts. This work proposes a maximum likelihood estimation and k-space correction of motion-induced phase errors in 3D multishot diffusion tensor imaging. The proposed error estimation is robust, unbiased, and approaches the Cramér-Rao lower bound. For rigid body motion, the proposed correction effectively removes motion-induced phase errors regardless of the k-space trajectory used and gives comparable performance to the more computationally expensive 3D iterative nonlinear phase error correction method. The method has been extended to handle multichannel data collected using phased-array coils. Simulation and in vivo data are shown to demonstrate the performance of the method.
Prevalence of psychiatric disorders in the Texas juvenile correctional system.
Harzke, Amy Jo; Baillargeon, Jacques; Baillargeon, Gwen; Henry, Judith; Olvera, Rene L; Torrealday, Ohiana; Penn, Joseph V; Parikh, Rajendra
2012-04-01
Most studies assessing the burden of psychiatric disorders in juvenile correctional facilities have been based on small or male-only samples or have focused on a single disorder. Using electronic data routinely collected by the Texas juvenile correctional system and its contracted medical provider organization, we estimated the prevalence of selected psychiatric disorders among youths committed to Texas juvenile correctional facilities between January 1, 2004, and December 31, 2008 (N = 11,603). Ninety-eight percent were diagnosed with at least one of the disorders. Highest estimated prevalence was for conduct disorder (83.2%), followed by any substance use disorder (75.6%), any bipolar disorder (19.4%), attention-deficit/hyperactivity disorder (18.3%), and any depressive disorder (12.6%). The estimated prevalence of psychiatric disorders among these youths was exceptionally high and showed patterns by sex, race/ethnicity, and age that were both consistent and inconsistent with other juvenile justice samples.
Ramalho, Fátima; Santos-Rocha, Rita; Branco, Marco; Moniz-Pereira, Vera; André, Helô-Isa; Veloso, António P; Carnide, Filomena
2018-01-01
Gait ability in older adults has been associated with independent living, increased survival rates, fall prevention, and quality of life. There are inconsistent findings regarding the effects of exercise interventions on the maintenance of gait parameters. The aim of the study was to analyze the effects of a community-based periodized exercise intervention on the improvement of gait parameters and functional fitness in an older adult group compared with a non-periodized program. A quasi-experimental study with follow-up was performed in a periodized exercise group (N=15) and in a non-periodized exercise group (N=13). The primary outcomes were plantar pressure gait parameters, and the secondary outcomes were physical activity, aerobic endurance, lower limb strength, agility, and balance. These variables were recorded at baseline and after 6 months of intervention. Both programs were tailored to the older adults' functional fitness level and proved effective in reducing the age-related decline in functional fitness and gait parameters. Gait parameters were sensitive to both exercise interventions. These exercise protocols can be used by exercise professionals in prescribing community exercise programs, as well as by health professionals in promoting active aging.
Why the impact of mechanical stimuli on stem cells remains a challenge.
Goetzke, Roman; Sechi, Antonio; De Laporte, Laura; Neuss, Sabine; Wagner, Wolfgang
2018-05-04
Mechanical stimulation affects the growth and differentiation of stem cells. It may be used to guide lineage-specific cell fate decisions and therefore opens fascinating opportunities for stem cell biology and regenerative medicine. Several studies have demonstrated functional and molecular effects of mechanical stimulation, but at first sight these results often appear inconsistent. Comparison of such studies is hampered by a multitude of relevant parameters that act in concert. There are notorious differences between species, cell types, and culture conditions. Furthermore, the utilized culture substrates have complex features, such as surface chemistry, elasticity, and topography. Cell culture substrates can vary from simple, flat materials to complex 3D scaffolds. Last but not least, mechanical forces can be applied with different frequency, amplitude, and strength. It is therefore a prerequisite to take all these parameters into consideration when ascribing their specific functional relevance, and to modulate only one parameter at a time when its relevance is being addressed. Such research questions can only be investigated by interdisciplinary cooperation. In this review, we focus particularly on mesenchymal stem cells and pluripotent stem cells to discuss relevant parameters that contribute to the kaleidoscope of mechanical stimulation of stem cells.
Comparing Three Estimation Methods for the Three-Parameter Logistic IRT Model
ERIC Educational Resources Information Center
Lamsal, Sunil
2015-01-01
Different estimation procedures have been developed for the unidimensional three-parameter item response theory (IRT) model. These techniques include marginal maximum likelihood estimation, fully Bayesian estimation using Markov chain Monte Carlo simulation techniques, and Metropolis-Hastings Robbins-Monro estimation. With each…
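For reference, the three-parameter logistic (3PL) item response function underlying all of these estimation procedures can be written down directly; the item parameters and the crude grid-search ability estimate below are illustrative only, far simpler than MML or MCMC estimation.

```python
# 3PL item response function: guessing floor c plus a logistic rise
# with discrimination a around difficulty b. Item values are made up.
import math

def p_correct(theta, a, b, c):
    """Probability of a correct response at ability theta under 3PL."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

def log_likelihood(theta, items, responses):
    """Log-likelihood of a scored response pattern (1 = correct)."""
    ll = 0.0
    for (a, b, c), u in zip(items, responses):
        p = p_correct(theta, a, b, c)
        ll += math.log(p) if u == 1 else math.log(1.0 - p)
    return ll

# Three hypothetical items (a, b, c) and one response pattern:
items = [(1.2, -0.5, 0.2), (0.8, 0.0, 0.25), (1.5, 1.0, 0.2)]
grid = [i / 10.0 for i in range(-40, 41)]
theta_hat = max(grid, key=lambda t: log_likelihood(t, items, [1, 1, 0]))
```

At theta = b the curve sits halfway between the guessing floor c and 1, i.e. at c + (1 - c)/2.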
NASA Astrophysics Data System (ADS)
Bloßfeld, Mathis; Panzetta, Francesca; Müller, Horst; Gerstl, Michael
2016-04-01
The GGOS vision is to integrate geometric and gravimetric observation techniques to estimate consistent geodetic-geophysical parameters. In order to reach this goal, the common estimation of station coordinates, Stokes coefficients and Earth Orientation Parameters (EOP) is necessary. Satellite Laser Ranging (SLR) provides the ability to study correlations between the different parameter groups, since the observed satellite orbit dynamics are sensitive to the above mentioned geodetic parameters. To decrease the correlations, SLR observations to multiple satellites have to be combined. In this paper, we compare the estimated EOP of (i) single-satellite SLR solutions and (ii) multi-satellite SLR solutions. Therefore, we jointly estimate station coordinates, EOP, Stokes coefficients and orbit parameters using different satellite constellations. A special focus of this investigation is the de-correlation of different geodetic parameter groups through the combination of SLR observations. Besides SLR observations to spherical satellites (commonly used), we discuss the impact of SLR observations to non-spherical satellites such as, e.g., the JASON-2 satellite. The goal of this study is to discuss the existing parameter interactions and to present a strategy for obtaining reliable estimates of station coordinates, EOP, orbit parameters and Stokes coefficients in one common adjustment. Thereby, the benefits of a multi-satellite SLR solution are evaluated.
Okahara, Shigeyuki; Zu Soh; Takahashi, Shinya; Sueda, Taijiro; Tsuji, Toshio
2016-08-01
In a previous study, we proposed a blood viscosity estimation method based on the pressure-flow characteristics of oxygenators used during cardiopulmonary bypass (CPB), and showed that the estimated viscosity correlates well with the measured viscosity. However, determining the parameters included in the method required the use of blood, leading to a high cost of calibration. Therefore, in this study we propose a new method to monitor blood viscosity, which approximates the pressure-flow characteristics of blood, a non-Newtonian fluid, with those of a Newtonian fluid by using parameters derived from glycerin solution, which is easy to acquire. Because the parameters used in the estimation method depend on the fluid type, bovine blood parameters were used to calculate the estimated viscosity (ηe), and glycerin parameters were used to calculate the deemed viscosity (ηdeem). Three samples of whole bovine blood with different hematocrit levels (21.8%, 31.0%, and 39.8%) were prepared and perfused into the oxygenator. As the temperature changed from 37 °C to 27 °C, the oxygenator mean inlet pressure and outlet pressure were recorded for flows of 2 L/min and 4 L/min, and the viscosity was estimated. The deemed viscosity calculated with the glycerin parameters was lower than the estimated viscosity calculated with the bovine blood parameters by 20-33% at 21.8% hematocrit, 12-27% at 31.0% hematocrit, and 10-15% at 39.8% hematocrit. Furthermore, the deemed viscosity was lower than the estimated viscosity by 10-30% at 2 L/min and 30-40% at 4 L/min. Nevertheless, the estimated and deemed viscosities varied with a similar slope. This suggests that the deemed viscosity obtained with glycerin parameters may be capable of successfully monitoring relative changes in blood viscosity in a perfusing oxygenator.
Nam, Kanghyun
2015-11-11
This article presents methods for estimating lateral vehicle velocity and tire cornering stiffness, which are key parameters in vehicle dynamics control, using lateral tire force measurements. Lateral tire forces acting on each tire are directly measured by load-sensing hub bearings that were invented and further developed by NSK Ltd. For estimating the lateral vehicle velocity, tire force models considering lateral load transfer effects are used, and a recursive least square algorithm is adapted to identify the lateral vehicle velocity as an unknown parameter. Using the estimated lateral vehicle velocity, tire cornering stiffness, which is an important tire parameter dominating the vehicle's cornering responses, is estimated. For the practical implementation, the cornering stiffness estimation algorithm based on a simple bicycle model is developed and discussed. Finally, proposed estimation algorithms were evaluated using experimental test data.
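The recursive least squares step can be sketched generically; the scalar regression model below (y = theta·x plus noise) is a hypothetical stand-in, not the authors' tire-force model with load transfer effects.

```python
# Scalar recursive least squares (RLS) with a forgetting factor:
# the kind of online identifier the abstract describes, applied to a
# toy model y = theta * x with synthetic data.
import random

def rls(xs, ys, lam=0.99, theta0=0.0, p0=1000.0):
    """Estimate theta in y = theta*x recursively; lam < 1 discounts
    old data so a slowly varying parameter can be tracked."""
    theta, p = theta0, p0
    for x, y in zip(xs, ys):
        k = p * x / (lam + x * p * x)   # gain
        theta += k * (y - theta * x)    # correct with the innovation
        p = (p - k * x * p) / lam       # covariance update
    return theta

rng = random.Random(0)
xs = [rng.uniform(-1.0, 1.0) for _ in range(500)]
ys = [2.5 * x + rng.gauss(0.0, 0.1) for x in xs]
theta_hat = rls(xs, ys)   # converges near the true slope 2.5
```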
NASA Astrophysics Data System (ADS)
O'Shaughnessy, Richard; Blackman, Jonathan; Field, Scott E.
2017-07-01
The recent direct observation of gravitational waves has further emphasized the desire for fast, low-cost, and accurate methods to infer the parameters of gravitational wave sources. Due to the expense of waveform generation and data handling, the cost of evaluating the likelihood function limits the computational performance of these calculations. Building on recently developed surrogate models and a novel parameter estimation pipeline, we show how to quickly generate the likelihood function as an analytic, closed-form expression. Using a straightforward variant of a production-scale parameter estimation code, we demonstrate our method using surrogate models of effective-one-body and numerical relativity waveforms. Our study is the first to use these models for parameter estimation and is among the first parameter estimation calculations with multi-modal numerical relativity waveforms, which include all modes with ℓ ≤ 4. Our grid-free method enables rapid parameter estimation for any waveform with a suitable reduced-order model. The methods described in this paper may also find use in other data analysis studies, such as vetting coincident events or the computation of the coalescing-compact-binary detection statistic.
Overview and benchmark analysis of fuel cell parameters estimation for energy management purposes
NASA Astrophysics Data System (ADS)
Kandidayeni, M.; Macias, A.; Amamou, A. A.; Boulon, L.; Kelouwani, S.; Chaoui, H.
2018-03-01
Proton exchange membrane fuel cells (PEMFCs) have become the center of attention for energy conversion in many areas, such as the automotive industry, where they face highly dynamic operating conditions that cause their characteristics to vary. To ensure appropriate modeling of PEMFCs, accurate parameter estimation is in demand. However, parameter estimation of PEMFC models is highly challenging due to their multivariate, nonlinear, and complex nature. This paper comprehensively reviews parameter estimation methods for PEMFC models, with a specific view to online identification algorithms, which are considered the basis of global energy management strategy design, for estimating the linear and nonlinear parameters of a PEMFC model in real time. In this respect, different PEMFC models with different categories and purposes are discussed first. Subsequently, a thorough investigation of PEMFC parameter estimation methods in the literature is conducted in terms of applicability. Three potential algorithms for online applications, Recursive Least Squares (RLS), the Kalman filter, and the extended Kalman filter (EKF), which have received little attention in previous works, are then used to identify the parameters of two well-known semi-empirical models in the literature, those of Squadrito et al. and Amphlett et al. Ultimately, the achieved results and future challenges are discussed.
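A Kalman filter used as an online parameter tracker can be sketched in scalar form; the random-walk parameter model, the measurement equation, and all noise settings below are illustrative assumptions, not the review's PEMFC models.

```python
# Minimal scalar Kalman filter tracking a parameter theta_t in
# y_t = theta_t * x_t + noise, with theta modeled as a random walk.
import random

def kalman_track(xs, ys, q=1e-4, r=0.01):
    """Return the sequence of posterior estimates of theta.
    q = process noise variance, r = measurement noise variance."""
    theta, p = 0.0, 1.0
    estimates = []
    for x, y in zip(xs, ys):
        p += q                          # predict: random-walk model
        s = x * p * x + r               # innovation variance
        k = p * x / s                   # Kalman gain
        theta += k * (y - theta * x)    # update with the innovation
        p = (1.0 - k * x) * p
        estimates.append(theta)
    return estimates

rng = random.Random(0)
xs = [rng.uniform(0.5, 1.5) for _ in range(400)]
ys = [1.8 * x + rng.gauss(0.0, 0.1) for x in xs]
est = kalman_track(xs, ys)   # settles near the true value 1.8
```

The extended Kalman filter mentioned in the abstract follows the same predict/update cycle, with the measurement model linearized at each step.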
2014-01-01
The single parameter hyperbolic model has been frequently used to describe value discounting as a function of time and to differentiate substance abusers and non-clinical participants with the model's parameter k. However, k says little about the mechanisms underlying the observed differences. The present study evaluates several alternative models with the purpose of identifying whether group differences stem from differences in subjective valuation, and/or time perceptions. Using three two-parameter models, plus secondary data analyses of 14 studies with 471 indifference point curves, results demonstrated that adding a valuation, or a time perception function led to better model fits. However, the gain in fit due to the flexibility granted by a second parameter did not always lead to a better understanding of the data patterns and corresponding psychological processes. The k parameter consistently indexed group and context (magnitude) differences; it is thus a mixed measure of person and task level effects. This was similar for a parameter meant to index payoff devaluation. A time perception parameter, on the other hand, fluctuated with contexts in a non-predicted fashion and the interpretation of its values was inconsistent with prior findings that supported enlarged perceived delays for substance abusers compared to controls. Overall, the results provide mixed support for hyperbolic models of intertemporal choice in terms of the psychological meaning afforded by their parameters. PMID:25390941
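The single-parameter hyperbolic model the abstract starts from is V = A / (1 + kD); the fitting sketch below recovers k from indifference points by grid search, using invented noiseless data rather than any of the 14 reanalyzed studies.

```python
# Hyperbolic discounting: subjective value V of amount A at delay D.
# The data points below are generated, not empirical.

def hyperbolic(A, D, k):
    """Discounted value of amount A delayed by D under discount rate k."""
    return A / (1.0 + k * D)

def fit_k(points, A=100.0):
    """points: list of (delay, indifference value) pairs.
    Least-squares grid search over k in (0, 1]."""
    ks = [0.001 * i for i in range(1, 1001)]
    def sse(k):
        return sum((v - hyperbolic(A, d, k)) ** 2 for d, v in points)
    return min(ks, key=sse)

# Indifference points generated from k = 0.05 (no noise):
true_k = 0.05
data = [(d, hyperbolic(100.0, d, true_k)) for d in (1, 7, 30, 90, 180)]
k_hat = fit_k(data)
```

A larger fitted k means steeper discounting, which is the sense in which k separates substance-abusing and non-clinical groups in this literature.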
Inconsistency in reaction time across the life span.
Williams, Benjamin R; Hultsch, David F; Strauss, Esther H; Hunter, Michael A; Tannock, Rosemary
2005-01-01
Inconsistency in latency across trials of 2-choice reaction time data was analyzed in 273 participants ranging in age from 6 to 81 years. A U-shaped curve defined the relationship between age and inconsistency, with increases in age associated with lower inconsistency throughout childhood and higher inconsistency throughout adulthood. Differences in inconsistency were independent of practice, fatigue, and age-related differences in mean level of performance. Evidence for general and specific variability-producing processes was found in those aged less than 21 years, whereas only a specific process, such as attentional blocks, was evident for those 21 years and older. The findings highlight the importance of considering moment-to-moment changes in performance in psychological research.
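One common operationalization of this kind of inconsistency is the intraindividual standard deviation (ISD) of latencies across trials; the two invented series below have nearly identical mean levels but very different ISDs, illustrating how inconsistency can dissociate from mean performance.

```python
# ISD as a simple inconsistency index; latencies are invented
# milliseconds, not data from this study.
import statistics

def inconsistency(latencies_ms):
    """Intraindividual standard deviation across trials."""
    return statistics.stdev(latencies_ms)

steady  = [430, 445, 440, 435, 450, 438, 442, 447]
erratic = [340, 540, 360, 560, 350, 530, 380, 460]
# Nearly identical means (~440 ms), very different trial-to-trial
# variability:
gap = inconsistency(erratic) - inconsistency(steady)
```

Studies in this literature typically go further and compute ISDs on residuals after regressing out age- and practice-related trends, so that inconsistency is not confounded with mean-level differences.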
ERIC Educational Resources Information Center
Molenaar, Peter C. M.; Nesselroade, John R.
1998-01-01
Pseudo-Maximum Likelihood (p-ML) and Asymptotically Distribution Free (ADF) estimation methods for estimating dynamic factor model parameters within a covariance structure framework were compared through a Monte Carlo simulation. Both methods appear to give consistent model parameter estimates, but only ADF gives standard errors and chi-square…
Reliability analysis of structural ceramic components using a three-parameter Weibull distribution
NASA Technical Reports Server (NTRS)
Duffy, Stephen F.; Powers, Lynn M.; Starlinger, Alois
1992-01-01
Described here are nonlinear regression estimators for the three-parameter Weibull distribution. Issues relating to the bias and invariance associated with these estimators are examined numerically using Monte Carlo simulation methods. The estimators were used to extract parameters from sintered silicon nitride failure data. A reliability analysis was performed on a turbopump blade utilizing the three-parameter Weibull distribution and the estimates from the sintered silicon nitride data.
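A simple illustration of Weibull parameter extraction (not the report's nonlinear regression estimators): with the threshold (location) parameter gamma assumed known, median-rank regression on the linearized CDF, ln(-ln(1-F)) = alpha·ln(x-gamma) - alpha·ln(beta), recovers the shape alpha and scale beta.

```python
# Median-rank regression for Weibull shape/scale with a known
# threshold gamma; failure data are simulated, not the silicon
# nitride data from the report.
import math, random

def weibull_fit(failures, gamma=0.0):
    """OLS fit of ln(-ln(1-F)) vs ln(x - gamma) using Benard's
    median-rank approximation F_i = (i - 0.3)/(n + 0.4)."""
    xs = sorted(failures)
    n = len(xs)
    pts = [(math.log(x - gamma),
            math.log(-math.log(1.0 - (i + 0.7) / (n + 0.4))))
           for i, x in enumerate(xs)]
    mx = sum(p[0] for p in pts) / n
    my = sum(p[1] for p in pts) / n
    sxx = sum((p[0] - mx) ** 2 for p in pts)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in pts)
    alpha = sxy / sxx                  # shape = slope
    beta = math.exp(mx - my / alpha)   # scale from the intercept
    return alpha, beta

# Simulate from a three-parameter Weibull with alpha=1.5, beta=2.0,
# gamma=1.0 via inverse-CDF sampling, then recover alpha and beta:
rng = random.Random(0)
data = [1.0 + 2.0 * (-math.log(1.0 - rng.random())) ** (1.0 / 1.5)
        for _ in range(300)]
alpha_hat, beta_hat = weibull_fit(data, gamma=1.0)
```

Estimating gamma jointly with alpha and beta is what makes the three-parameter problem genuinely nonlinear, which is where the report's regression estimators come in.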
Parameter Estimation and Model Selection for Indoor Environments Based on Sparse Observations
NASA Astrophysics Data System (ADS)
Dehbi, Y.; Loch-Dehbi, S.; Plümer, L.
2017-09-01
This paper presents a novel method for the parameter estimation and model selection for the reconstruction of indoor environments based on sparse observations. While most approaches for the reconstruction of indoor models rely on dense observations, we predict scenes of the interior with high accuracy in the absence of indoor measurements. We use a model-based top-down approach and incorporate strong but profound prior knowledge. The latter includes probability density functions for model parameters and sparse observations such as room areas and the building footprint. The floorplan model is characterized by linear and bi-linear relations with discrete and continuous parameters. We focus on the stochastic estimation of model parameters based on a topological model derived by combinatorial reasoning in a first step. A Gauss-Markov model is applied for estimation and simulation of the model parameters. Symmetries are represented and exploited during the estimation process. Background knowledge as well as observations are incorporated in a maximum likelihood estimation and model selection is performed with AIC/BIC. The likelihood is also used for the detection and correction of potential errors in the topological model. Estimation results are presented and discussed.
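The AIC/BIC comparison used for model selection is standard and easy to sketch; the candidate log-likelihoods and parameter counts below are placeholders, not values from the indoor-reconstruction experiments.

```python
# Information-criterion model selection: AIC = 2k - 2 ln L,
# BIC = k ln n - 2 ln L. Hypothetical candidate models.
import math

def aic(log_lik, n_params):
    return 2 * n_params - 2 * log_lik

def bic(log_lik, n_params, n_obs):
    return n_params * math.log(n_obs) - 2 * log_lik

# Candidate floorplan models: (log-likelihood, number of parameters).
candidates = {"model_A": (-120.4, 6), "model_B": (-116.0, 9)}
n_obs = 40

best_aic = min(candidates, key=lambda m: aic(*candidates[m]))
best_bic = min(candidates, key=lambda m: bic(*candidates[m], n_obs))
```

With these numbers the two criteria disagree: AIC prefers the richer model_B, while BIC's stronger penalty on the extra parameters favors model_A, which is exactly the trade-off that motivates reporting both.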
Determining wave direction using curvature parameters.
de Queiroz, Eduardo Vitarelli; de Carvalho, João Luiz Baptista
2016-01-01
The curvature of the sea wave was tested as a parameter for estimating wave direction, in the search for better estimates in shallow waters, where waves of different sizes, frequencies and directions intersect and are difficult to characterize. We used numerical simulations of the sea surface to determine wave direction calculated from the curvature of the waves. Using 1000 numerical simulations, the statistical variability of the wave direction was determined. The results showed good performance by the curvature parameter for estimating wave direction. Accuracy in the estimates was improved by including wave slope parameters in addition to curvature. The results indicate that curvature is a promising technique for estimating wave direction.
• In this study, the accuracy and precision of curvature parameters for measuring wave direction are analyzed using a model simulation that generates 1000 wave records with directional resolution.
• The model allows the simultaneous simulation of time-series wave properties such as sea surface elevation, slope and curvature, which were used to analyze the variability of the estimated directions.
• The simultaneous acquisition of slope and curvature parameters can contribute to estimating wave direction, thus increasing the accuracy and precision of the results.
Lord, Dominique
2006-07-01
There has been considerable research conducted on the development of statistical models for predicting crashes on highway facilities. Despite numerous advancements made to improve the estimation tools of statistical models, the most common probabilistic structures used for modeling motor vehicle crashes remain the traditional Poisson and Poisson-gamma (or Negative Binomial) distributions; when crash data exhibit over-dispersion, the Poisson-gamma model is usually the model of choice among transportation safety modelers. Crash data collected for safety studies often have the unusual attribute of being characterized by low sample mean values. Studies have shown that the goodness-of-fit of statistical models produced from such datasets can be significantly affected. This issue has been defined as the "low mean problem" (LMP). Despite recent developments on methods to circumvent the LMP and test the goodness-of-fit of models developed using such datasets, no work has so far examined how the LMP affects the fixed dispersion parameter of Poisson-gamma models used for modeling motor vehicle crashes. The dispersion parameter plays an important role in many types of safety studies and should, therefore, be reliably estimated. The primary objective of this research project was to verify whether the LMP affects the estimation of the dispersion parameter and, if so, to determine the magnitude of the problem. The secondary objective consisted of determining the effects of an unreliably estimated dispersion parameter on common analyses performed in highway safety studies. To accomplish the objectives of the study, a series of Poisson-gamma distributions were simulated using different values describing the mean, the dispersion parameter, and the sample size.
Three estimators commonly used by transportation safety modelers for estimating the dispersion parameter of Poisson-gamma models were evaluated: the method of moments, the weighted regression, and the maximum likelihood method. In an attempt to complement the outcome of the simulation study, Poisson-gamma models were fitted to crash data collected in Toronto, Ont., characterized by a low sample mean and small sample size. The study shows that a low sample mean combined with a small sample size can seriously affect the estimation of the dispersion parameter, no matter which estimator is used within the estimation process. The probability that the dispersion parameter is unreliably estimated increases significantly as the sample mean and sample size decrease. Consequently, the results show that an unreliably estimated dispersion parameter can significantly undermine empirical Bayes (EB) estimates as well as the estimation of confidence intervals for the gamma mean and predicted response. The paper ends with recommendations for minimizing the likelihood of producing Poisson-gamma models with an unreliable dispersion parameter for modeling motor vehicle crashes.
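The simplest of the three estimators, the method of moments, follows from the Poisson-gamma variance relation Var(Y) = μ + αμ², giving α̂ = (s² - m)/m²; the simulation settings below are illustrative, not the study's design.

```python
# Method-of-moments dispersion estimator for Poisson-gamma (negative
# binomial) counts, with data simulated as a Poisson-gamma mixture.
import math, random

def sample_poisson(lam, rng):
    """Knuth's multiplication algorithm (fine for small lam)."""
    l, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= l:
            return k
        k += 1

def sample_nb(mu, alpha, rng):
    """Poisson rate drawn from a gamma with mean mu and variance
    alpha*mu^2, so Var(Y) = mu + alpha*mu^2."""
    lam = rng.gammavariate(1.0 / alpha, mu * alpha)
    return sample_poisson(lam, rng)

def mom_dispersion(ys):
    """alpha_hat = (s^2 - m) / m^2. In small, low-mean samples the
    numerator can go negative, one face of the instability the study
    documents."""
    n = len(ys)
    m = sum(ys) / n
    s2 = sum((y - m) ** 2 for y in ys) / (n - 1)
    return (s2 - m) / (m * m)

rng = random.Random(42)
ys = [sample_nb(2.0, 0.5, rng) for _ in range(10000)]
alpha_hat = mom_dispersion(ys)   # should land near the true 0.5
```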
Burton, Catherine L; Strauss, Esther; Hultsch, David F; Hunter, Michael A
2009-09-01
The purpose of the present study was to investigate whether inconsistency in reaction time (RT) is predictive of older adults' ability to solve everyday problems. A sample of 304 community dwelling non-demented older adults, ranging in age from 62 to 92, completed a measure of everyday problem solving, the Everyday Problems Test (EPT). Inconsistency in latencies across trials was assessed on four RT tasks. Performance on the EPT was found to vary according to age and cognitive status. Both mean latencies and inconsistency were significantly associated with EPT performance, such that slower and more inconsistent RTs were associated with poorer everyday problem solving abilities. Even after accounting for age, education, and mean level of performance, inconsistency in reaction time continued to account for a significant proportion of the variance in EPT scores. These findings suggest that indicators of inconsistency in RT may be of functional relevance.
Temporal rainfall estimation using input data reduction and model inversion
NASA Astrophysics Data System (ADS)
Wright, A. J.; Vrugt, J. A.; Walker, J. P.; Pauwels, V. R. N.
2016-12-01
Floods are devastating natural hazards. To provide accurate, precise and timely flood forecasts there is a need to understand the uncertainties associated with temporal rainfall and model parameters. The estimation of temporal rainfall and model parameter distributions from streamflow observations in complex dynamic catchments adds skill to current areal rainfall estimation methods, allows for the uncertainty of rainfall input to be considered when estimating model parameters and provides the ability to estimate rainfall from poorly gauged catchments. Current methods to estimate temporal rainfall distributions from streamflow are unable to adequately explain and invert complex non-linear hydrologic systems. This study uses the Discrete Wavelet Transform (DWT) to reduce rainfall dimensionality for the catchment of Warwick, Queensland, Australia. The reduction of rainfall to DWT coefficients allows the input rainfall time series to be simultaneously estimated along with model parameters. The estimation process is conducted using multi-chain Markov chain Monte Carlo simulation with the DREAM(ZS) algorithm. The use of a likelihood function that considers both rainfall and streamflow error allows for model parameter and temporal rainfall distributions to be estimated. Estimation of the wavelet approximation coefficients of lower order decomposition structures was able to estimate the most realistic temporal rainfall distributions. These rainfall estimates were all able to simulate streamflow that was superior to the results of a traditional calibration approach. It is shown that the choice of wavelet has a considerable impact on the robustness of the inversion. The results demonstrate that streamflow data contains sufficient information to estimate temporal rainfall and model parameter distributions.
The extent and variance of rainfall time series that are able to simulate streamflow that is superior to that simulated by a traditional calibration approach is a demonstration of equifinality. The use of a likelihood function that considers both rainfall and streamflow error combined with the use of the DWT as a model data reduction technique allows the joint inference of hydrologic model parameters along with rainfall.
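The dimensionality reduction at the heart of this approach can be illustrated with a one-level Haar DWT: keeping only the approximation coefficients halves the number of values to estimate. This is a toy stand-in for the multi-level transforms and wavelet families the study compares.

```python
# One-level Haar discrete wavelet transform and its exact inverse;
# the rainfall series is invented for illustration.
import math

def haar_dwt(series):
    """Return (approximation, detail) coefficients for an even-length
    series."""
    s2 = math.sqrt(2.0)
    approx = [(series[i] + series[i + 1]) / s2
              for i in range(0, len(series), 2)]
    detail = [(series[i] - series[i + 1]) / s2
              for i in range(0, len(series), 2)]
    return approx, detail

def haar_idwt(approx, detail):
    """Exact inverse of haar_dwt."""
    s2 = math.sqrt(2.0)
    out = []
    for a, d in zip(approx, detail):
        out += [(a + d) / s2, (a - d) / s2]
    return out

rain = [0.0, 0.2, 1.4, 3.1, 2.8, 0.9, 0.1, 0.0]
approx, detail = haar_dwt(rain)
# Estimating only `approx` (4 values instead of 8) keeps the coarse
# temporal pattern; zeroing `detail` yields a smoothed series with
# the same total rainfall.
smoothed = haar_idwt(approx, [0.0] * len(approx))
```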