Clayson, Peter E; Miller, Gregory A
2017-01-01
Generalizability theory (G theory) provides a flexible, multifaceted approach to estimating score reliability. G theory's approach to estimating score reliability has important advantages over classical test theory that are relevant for research using event-related brain potentials (ERPs). For example, G theory does not require parallel forms (i.e., equal means, variances, and covariances), can handle unbalanced designs, and provides a single reliability estimate for designs with multiple sources of error. This monograph provides a detailed description of the conceptual framework of G theory using examples relevant to ERP researchers, presents the algorithms needed to estimate ERP score reliability, and provides a detailed walkthrough of newly developed software, the ERP Reliability Analysis (ERA) Toolbox, that calculates score reliability using G theory. The ERA Toolbox is open-source Matlab software that uses G theory to estimate the contribution of the number of trials retained for averaging, group, and/or event types to ERP score reliability. The toolbox facilitates the rigorous evaluation of psychometric properties of ERP scores recommended elsewhere in this special issue. Copyright © 2016 Elsevier B.V. All rights reserved.
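As a rough illustration of the kind of computation such software performs, the sketch below estimates variance components and G-theory coefficients for a simple single-facet persons-by-trials crossed design. It is written in Python rather than Matlab, is not the ERA Toolbox or its algorithms, and the simulated data, function name, and design are illustrative assumptions only.

```python
# Minimal single-facet G-theory sketch (persons x trials crossed design).
# Illustrative only; not the ERA Toolbox and not its algorithms.
import numpy as np

def g_coefficients(X, n_trials_prime=None):
    """Variance components and G/dependability coefficients from a persons-by-trials matrix."""
    n_p, n_t = X.shape
    n_trials_prime = n_t if n_trials_prime is None else n_trials_prime
    grand = X.mean()
    ss_p = n_t * np.sum((X.mean(axis=1) - grand) ** 2)
    ss_t = n_p * np.sum((X.mean(axis=0) - grand) ** 2)
    ss_res = np.sum((X - grand) ** 2) - ss_p - ss_t
    ms_p = ss_p / (n_p - 1)
    ms_t = ss_t / (n_t - 1)
    ms_res = ss_res / ((n_p - 1) * (n_t - 1))
    var_res = ms_res                               # sigma^2(pt,e)
    var_p = max((ms_p - ms_res) / n_t, 0.0)        # sigma^2(p)
    var_t = max((ms_t - ms_res) / n_p, 0.0)        # sigma^2(t)
    g = var_p / (var_p + var_res / n_trials_prime)                 # relative (generalizability)
    phi = var_p / (var_p + (var_t + var_res) / n_trials_prime)     # absolute (dependability)
    return {"var_p": var_p, "var_t": var_t, "var_res": var_res, "G": g, "Phi": phi}

# Simulated ERP-like scores: 30 participants x 20 trials (all numbers are assumptions).
rng = np.random.default_rng(0)
scores = (2.0 + rng.normal(0, 1.0, (30, 1))       # person effect
              + rng.normal(0, 0.3, (1, 20))       # trial effect
              + rng.normal(0, 1.5, (30, 20)))     # residual
print(g_coefficients(scores))
```

Varying n_trials_prime in such a sketch shows how the coefficients change with the number of trials retained for averaging, which is the kind of question the toolbox is designed to address.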
Sample Size for Estimation of G and Phi Coefficients in Generalizability Theory
ERIC Educational Resources Information Center
Atilgan, Hakan
2013-01-01
Problem Statement: Reliability, which refers to the degree to which measurement results are free from measurement errors, as well as its estimation, is an important issue in psychometrics. Several methods for estimating reliability have been suggested by various theories in the field of psychometrics. One of these theories is the generalizability…
Unification of field theory and maximum entropy methods for learning probability densities
NASA Astrophysics Data System (ADS)
Kinney, Justin B.
2015-09-01
The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.
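For reference, the standard finite-moment maximum entropy result (a general textbook form, not the specific field-theory formulation of this paper) is: maximizing the entropy subject to normalization and moment constraints yields an exponential-family density,

```latex
\rho_{\mathrm{ME}}(x) \;=\; \frac{1}{Z(\lambda)}\exp\!\Big(-\sum_{k}\lambda_k f_k(x)\Big),
\qquad
Z(\lambda)=\int \exp\!\Big(-\sum_{k}\lambda_k f_k(x)\Big)\,dx,
```

where the Lagrange multipliers \lambda_k are chosen so that the moments of \rho_{\mathrm{ME}} match the sampled moments of f_k. The paper's claim is that estimates of this kind are recovered from Bayesian field theory in the infinite-smoothness limit.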
General theory of remote gaze estimation using the pupil center and corneal reflections.
Guestrin, Elias Daniel; Eizenman, Moshe
2006-06-01
This paper presents a general theory for the remote estimation of the point-of-gaze (POG) from the coordinates of the centers of the pupil and corneal reflections. Corneal reflections are produced by light sources that illuminate the eye and the centers of the pupil and corneal reflections are estimated in video images from one or more cameras. The general theory covers the full range of possible system configurations. Using one camera and one light source, the POG can be estimated only if the head is completely stationary. Using one camera and multiple light sources, the POG can be estimated with free head movements, following the completion of a multiple-point calibration procedure. When multiple cameras and multiple light sources are used, the POG can be estimated following a simple one-point calibration procedure. Experimental and simulation results suggest that the main sources of gaze estimation errors are the discrepancy between the shape of real corneas and the spherical corneal shape assumed in the general theory, and the noise in the estimation of the centers of the pupil and corneal reflections. A detailed example of a system that uses the general theory to estimate the POG on a computer screen is presented.
Optimal Mass Transport for Statistical Estimation, Image Analysis, Information Geometry, and Control
2017-01-10
Metric Uncertainty for Spectral Estimation based on Nevanlinna-Pick Interpolation (with J. Karlsson), Intern. Symp. on the Math. Theory of Networks and Systems, Melbourne 2012. 22. Geometric tools for the estimation of structured covariances (with L. Ning, X. Jiang), Intern. Symposium on the Math. Theory… estimation and the reversibility of stochastic processes (with Y. Chen, J. Karlsson), Proc. Int. Symp. on Math. Theory of Networks and Syst., July…
Using SAS PROC MCMC for Item Response Theory Models
Samonte, Kelli
2014-01-01
Interest in using Bayesian methods for estimating item response theory models has grown at a remarkable rate in recent years. This attentiveness to Bayesian estimation has also inspired a growth in available software such as WinBUGS, R packages, BMIRT, MPLUS, and SAS PROC MCMC. This article intends to provide an accessible overview of Bayesian methods in the context of item response theory to serve as a useful guide for practitioners in estimating and interpreting item response theory (IRT) models. Included is a description of the estimation procedure used by SAS PROC MCMC. Syntax is provided for estimation of both dichotomous and polytomous IRT models, as well as a discussion on how to extend the syntax to accommodate more complex IRT models. PMID:29795834
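To make the general idea concrete, independently of SAS PROC MCMC (whose syntax is not reproduced here), the sketch below runs a small Metropolis-within-Gibbs sampler for a Rasch (1PL) model in Python; the priors, proposal scales, and simulated data are all illustrative assumptions.

```python
# Illustrative Metropolis-within-Gibbs sampler for a Rasch (1PL) IRT model.
# Not SAS PROC MCMC; priors, proposal scales, and data are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def irf(theta, b):
    """Rasch item response function P(correct | theta, b), persons x items."""
    return 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))

n_persons, n_items = 200, 10
theta_true = rng.normal(0.0, 1.0, n_persons)
b_true = rng.normal(0.0, 1.0, n_items)
X = rng.binomial(1, irf(theta_true, b_true))

def loglik(theta, b):
    p = irf(theta, b)
    return X * np.log(p) + (1 - X) * np.log(1 - p)

theta = np.zeros(n_persons)
b = np.zeros(n_items)
draws = []
for it in range(2000):
    # Ability update: componentwise random-walk Metropolis, N(0, 1) prior.
    prop = theta + rng.normal(0.0, 0.5, n_persons)
    log_r = (loglik(prop, b).sum(axis=1) - 0.5 * prop**2) \
          - (loglik(theta, b).sum(axis=1) - 0.5 * theta**2)
    theta = np.where(np.log(rng.uniform(size=n_persons)) < log_r, prop, theta)
    # Difficulty update: componentwise random-walk Metropolis, N(0, 2^2) prior.
    prop_b = b + rng.normal(0.0, 0.3, n_items)
    log_rb = (loglik(theta, prop_b).sum(axis=0) - prop_b**2 / 8.0) \
           - (loglik(theta, b).sum(axis=0) - b**2 / 8.0)
    b = np.where(np.log(rng.uniform(size=n_items)) < log_rb, prop_b, b)
    if it >= 1000:
        draws.append(b.copy())

print("posterior-mean difficulties:", np.round(np.mean(draws, axis=0), 2))
print("true difficulties:          ", np.round(b_true, 2))
```

Dedicated packages such as those named in the abstract express the same model declaratively; the hand-coded loop above is only meant to show the sampling logic.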
NASA Technical Reports Server (NTRS)
Willsky, A. S.
1976-01-01
A number of current research directions in the fields of digital signal processing and modern control and estimation theory were studied. Topics such as stability theory, linear prediction and parameter identification, system analysis and implementation, two-dimensional filtering, decentralized control and estimation, image processing, and nonlinear system theory were examined in order to uncover some of the basic similarities and differences in the goals, techniques, and philosophy of the two disciplines. An extensive bibliography is included.
Relationships between digital signal processing and control and estimation theory
NASA Technical Reports Server (NTRS)
Willsky, A. S.
1978-01-01
Research directions in the fields of digital signal processing and modern control and estimation theory are discussed. Stability theory, linear prediction and parameter identification, system synthesis and implementation, two-dimensional filtering, decentralized control and estimation, and image processing are considered in order to uncover some of the basic similarities and differences in the goals, techniques, and philosophy of the disciplines.
Bootstrap Estimates of Standard Errors in Generalizability Theory
ERIC Educational Resources Information Center
Tong, Ye; Brennan, Robert L.
2007-01-01
Estimating standard errors of estimated variance components has long been a challenging task in generalizability theory. Researchers have speculated about the potential applicability of the bootstrap for obtaining such estimates, but they have identified problems (especially bias) in using the bootstrap. Using Brennan's bias-correcting procedures…
Systemic Operational Design: An Alternative to Estimate Planning
2009-05-04
relationships found in the COE. Framing and campaign design, with emphasis on systems theory, have therefore made their way to the forefront of doctrinal… short explanation of the systems theory behind SOD, examines how the SOD process happens, and compares SOD with the time-proven "Commander's Estimate… Theory, Campaign planning, Contemporary Operating Environment, Commander's Estimate Process, Operational design
The Mapping Model: A Cognitive Theory of Quantitative Estimation
ERIC Educational Resources Information Center
von Helversen, Bettina; Rieskamp, Jorg
2008-01-01
How do people make quantitative estimations, such as estimating a car's selling price? Traditionally, linear-regression-type models have been used to answer this question. These models assume that people weight and integrate all information available to estimate a criterion. The authors propose an alternative cognitive theory for quantitative…
An Estimation Theory for Differential Equations and other Problems, with Applications.
1981-11-01
order differential operators and M-operators, in particular the Perron-Frobenius theory and generalizations. Convergence theory for iterative… An Estimation Theory for Differential Equations and Other Problems, with Applications. Final Technical Report by Johann Schröder, November 1981.
ERIC Educational Resources Information Center
Taatgen, Niels A.; van Rijn, Hedderik; Anderson, John
2007-01-01
A theory of prospective time perception is introduced and incorporated as a module in an integrated theory of cognition, thereby extending existing theories and allowing predictions about attention and learning. First, a time perception module is established by fitting existing datasets (interval estimation and bisection and impact of secondary…
Large Sample Confidence Intervals for Item Response Theory Reliability Coefficients
ERIC Educational Resources Information Center
Andersson, Björn; Xin, Tao
2018-01-01
In applications of item response theory (IRT), an estimate of the reliability of the ability estimates or sum scores is often reported. However, analytical expressions for the standard errors of the estimators of the reliability coefficients are not available in the literature and therefore the variability associated with the estimated reliability…
ERIC Educational Resources Information Center
Bazaldua, Diego A. Luna; Lee, Young-Sun; Keller, Bryan; Fellers, Lauren
2017-01-01
The performance of various classical test theory (CTT) item discrimination estimators has been compared in the literature using both empirical and simulated data, resulting in mixed results regarding the preference of some discrimination estimators over others. This study analyzes the performance of various item discrimination estimators in CTT:…
NASA Technical Reports Server (NTRS)
Iben, I., Jr.
1971-01-01
Survey of recently published studies on globular clusters, and comparison of stellar evolution and pulsation theory with reported observations. The theory of stellar evolution is shown to be capable of describing, in principle, the behavior of a star through all quasi-static stages. Yet, as might be expected, estimates of bulk properties obtained by comparing observations with results of pulsation and stellar atmosphere theory differ somewhat from estimates of these same properties obtained by comparing observations with results of evolution theory. A description is given of how such estimates are obtained, and suggestions are offered as to where the weak points in each theory may lie.
NASA Technical Reports Server (NTRS)
Breedlove, W. J., Jr.
1976-01-01
Major activities included coding and verifying equations of motion for the earth-moon system. Some attention was also given to numerical integration methods and parameter estimation methods. Existing analytical theories such as Brown's lunar theory, Eckhardt's theory for lunar rotation, and Newcomb's theory for the rotation of the earth were coded and verified. These theories serve as checks for the numerical integration. Laser ranging data for the period January 1969 to December 1975 were collected and stored on tape. The main goal of this research is the development of software to enable physical parameters of the earth-moon system to be estimated using data available from the Lunar Laser Ranging Experiment and the Very Long Baseline Interferometry experiment of Project Apollo. A more specific goal is to develop software for the estimation of certain physical parameters of the moon such as inertia ratios and the third and fourth harmonic gravity coefficients.
Inverse Theory for Petroleum Reservoir Characterization and History Matching
NASA Astrophysics Data System (ADS)
Oliver, Dean S.; Reynolds, Albert C.; Liu, Ning
This book is a guide to the use of inverse theory for estimation and conditional simulation of flow and transport parameters in porous media. It describes the theory and practice of estimating properties of underground petroleum reservoirs from measurements of flow in wells, and it explains how to characterize the uncertainty in such estimates. Early chapters present the reader with the necessary background in inverse theory, probability and spatial statistics. The book demonstrates how to calculate sensitivity coefficients and the linearized relationship between models and production data. It also shows how to develop iterative methods for generating estimates and conditional realizations. The text is written for researchers and graduates in petroleum engineering and groundwater hydrology and can be used as a textbook for advanced courses on inverse theory in petroleum engineering. It includes many worked examples to demonstrate the methodologies and a selection of exercises.
NASA Technical Reports Server (NTRS)
Balas, Mark J.; Thapa Magar, Kaman S.; Frost, Susan A.
2013-01-01
A theory called Adaptive Disturbance Tracking Control (ADTC) is introduced and used to track the Tip Speed Ratio (TSR) of a 5 MW Horizontal Axis Wind Turbine (HAWT). Since ADTC theory requires wind speed information, a wind disturbance generator model is combined with a lower-order plant model to estimate the wind speed as well as partial states of the wind turbine. In this paper, we present a proof of stability and convergence of ADTC theory with a lower-order estimator and show that the state feedback can be adaptive.
Target Information Processing: A Joint Decision and Estimation Approach
2012-03-29
ground targets (track-before-detect) using computer cluster and graphics processing unit. Estimation and filtering theory is one of the most important…
A Multi-wavenumber Theory for Eddy Diffusivities: Applications to the DIMES Region
NASA Astrophysics Data System (ADS)
Chen, R.; Gille, S. T.; McClean, J.; Flierl, G.; Griesel, A.
2014-12-01
Climate models are sensitive to the representation of ocean mixing processes. This has motivated recent efforts to collect observations aimed at improving mixing estimates and parameterizations. The US/UK field program Diapycnal and Isopycnal Mixing Experiment in the Southern Ocean (DIMES), begun in 2009, is providing such estimates upstream of and within the Drake Passage. This region is characterized by topography and strong zonal jets. In previous studies, mixing length theories, based on the assumption that eddies are dominated by a single wavenumber and phase speed, were formulated to represent the estimated mixing patterns in jets. However, in spite of the success of the single-wavenumber theory in some other scenarios, it does not effectively predict the vertical structures of observed eddy diffusivities in the DIMES area. Considering that eddy motions encompass a wide range of wavenumbers, which all contribute to mixing, in this study we formulated a multi-wavenumber theory to predict eddy mixing rates. We test our theory for a domain encompassing the entire Southern Ocean. We estimated eddy diffusivities and mixing lengths from one million numerical floats in a global eddying model. These float-based mixing estimates were compared with the predictions from both the single-wavenumber and the multi-wavenumber theories. Our preliminary results in the DIMES area indicate that, compared to the single-wavenumber theory, the multi-wavenumber theory better predicts the vertical mixing structures in the vast areas where the mean flow is weak; however, in the intense jet region, both theories have similar predictive skill.
Methods to Estimate the Variance of Some Indices of the Signal Detection Theory: A Simulation Study
ERIC Educational Resources Information Center
Suero, Manuel; Privado, Jesús; Botella, Juan
2017-01-01
A simulation study is presented to evaluate and compare three methods to estimate the variance of the estimates of the parameters d and "C" of the signal detection theory (SDT). Several methods have been proposed to calculate the variance of their estimators, "d'" and "c." Those methods have been mostly assessed by…
Two Approaches to Estimation of Classification Accuracy Rate under Item Response Theory
ERIC Educational Resources Information Center
Lathrop, Quinn N.; Cheng, Ying
2013-01-01
Within the framework of item response theory (IRT), there are two recent lines of work on the estimation of classification accuracy (CA) rate. One approach estimates CA when decisions are made based on total sum scores, the other based on latent trait estimates. The former is referred to as the Lee approach, and the latter, the Rudner approach,…
ERIC Educational Resources Information Center
Monroe, Scott; Cai, Li
2013-01-01
In Ramsay curve item response theory (RC-IRT, Woods & Thissen, 2006) modeling, the shape of the latent trait distribution is estimated simultaneously with the item parameters. In its original implementation, RC-IRT is estimated via Bock and Aitkin's (1981) EM algorithm, which yields maximum marginal likelihood estimates. This method, however,…
ERIC Educational Resources Information Center
Monroe, Scott; Cai, Li
2014-01-01
In Ramsay curve item response theory (RC-IRT) modeling, the shape of the latent trait distribution is estimated simultaneously with the item parameters. In its original implementation, RC-IRT is estimated via Bock and Aitkin's EM algorithm, which yields maximum marginal likelihood estimates. This method, however, does not produce the…
Detecting Nonadditivity in Single-Facet Generalizability Theory Applications: Tukey's Test
ERIC Educational Resources Information Center
Lin, Chih-Kai; Zhang, Jinming
2018-01-01
Under the generalizability-theory (G-theory) framework, the estimation precision of variance components (VCs) is of significant importance in that they serve as the foundation of estimating reliability. Zhang and Lin advanced the discussion of nonadditivity in data from a theoretical perspective and showed the adverse effects of nonadditivity on…
Relationships between digital signal processing and control and estimation theory
NASA Technical Reports Server (NTRS)
Willsky, A. S.
1978-01-01
Research areas associated with digital signal processing and control and estimation theory are identified. Particular attention is given to image processing, system identification problems (parameter identification, linear prediction, least squares, Kalman filtering), stability analyses (the use of the Liapunov theory, frequency domain criteria, passivity), and multiparameter systems, distributed processes, and random fields.
Estimating a Noncompensatory IRT Model Using Metropolis within Gibbs Sampling
ERIC Educational Resources Information Center
Babcock, Ben
2011-01-01
Relatively little research has been conducted with the noncompensatory class of multidimensional item response theory (MIRT) models. A Monte Carlo simulation study was conducted exploring the estimation of a two-parameter noncompensatory item response theory (IRT) model. The estimation method used was a Metropolis-Hastings within Gibbs algorithm…
Stochastic processes, estimation theory and image enhancement
NASA Technical Reports Server (NTRS)
Assefi, T.
1978-01-01
An introductory account of stochastic processes, estimation theory, and image enhancement is presented. The book is primarily intended for first-year graduate students and practicing engineers and scientists whose work requires an acquaintance with the theory. Fundamental concepts of probability that are required to support the main topics are reviewed. The appendices discuss the remaining mathematical background.
Constrained Perturbation Regularization Approach for Signal Estimation Using Random Matrix Theory
NASA Astrophysics Data System (ADS)
Suliman, Mohamed; Ballal, Tarig; Kammoun, Abla; Al-Naffouri, Tareq Y.
2016-12-01
In this supplementary appendix we provide proofs and additional extensive simulations that complement the analysis of the main paper (constrained perturbation regularization approach for signal estimation using random matrix theory).
Item Response Theory Equating Using Bayesian Informative Priors.
ERIC Educational Resources Information Center
de la Torre, Jimmy; Patz, Richard J.
This paper seeks to extend the application of Markov chain Monte Carlo (MCMC) methods in item response theory (IRT) to include the estimation of equating relationships along with the estimation of test item parameters. A method is proposed that incorporates estimation of the equating relationship in the item calibration phase. Item parameters from…
Item Response Theory with Estimation of the Latent Density Using Davidian Curves
ERIC Educational Resources Information Center
Woods, Carol M.; Lin, Nan
2009-01-01
Davidian-curve item response theory (DC-IRT) is introduced, evaluated with simulations, and illustrated using data from the Schedule for Nonadaptive and Adaptive Personality Entitlement scale. DC-IRT is a method for fitting unidimensional IRT models with maximum marginal likelihood estimation, in which the latent density is estimated,…
ERIC Educational Resources Information Center
Md Desa, Zairul Nor Deana
2012-01-01
In recent years, there has been increasing interest in estimating and improving subscore reliability. In this study, the multidimensional item response theory (MIRT) and the bi-factor model were combined to estimate subscores, to obtain subscores reliability, and subscores classification. Both the compensatory and partially compensatory MIRT…
ERIC Educational Resources Information Center
Woods, Carol M.; Thissen, David
2006-01-01
The purpose of this paper is to introduce a new method for fitting item response theory models with the latent population distribution estimated from the data using splines. A spline-based density estimation system provides a flexible alternative to existing procedures that use a normal distribution, or a different functional form, for the…
Measuring Marbles: Demonstrating the Basic Tenets of Measurement Theory
ERIC Educational Resources Information Center
Wininger, Steven R.
2007-01-01
A hands-on activity is described in which students attempt to measure something that they cannot see. In small groups, students estimate the number of marbles in sealed boxes. Next, students' estimates are compared with the actual numbers. Last, values from both the students' estimates and actual numbers are used to explain measurement theory and…
ERIC Educational Resources Information Center
Bulcock, J. W.; And Others
Multicollinearity refers to the presence of highly intercorrelated independent variables in structural equation models, that is, models estimated by using techniques such as least squares regression and maximum likelihood. There is a problem of multicollinearity in both the natural and social sciences where theory formulation and estimation is in…
Scatter Theories and Their Application to Lunar Radar Return
NASA Technical Reports Server (NTRS)
Hayre, H. S.
1961-01-01
The research work being done under this NASA grant is divided into the following three categories: (1) An estimate of the radar return for the NASA Aerobee rocket shot at White Sands Missile Range (WSMR). (2) Development of new scatter theories, modification and correlation of existing scatter theories, and application of the theories to moon-echo data for estimation of the surface features of the moon. (3) Acoustic modeling of the lunar surface and correlation of the theoretical results with both full-scale and acoustical experimental results.
ERIC Educational Resources Information Center
Sass, D. A.; Schmitt, T. A.; Walker, C. M.
2008-01-01
Item response theory (IRT) procedures have been used extensively to study normal latent trait distributions and have been shown to perform well; however, less is known concerning the performance of IRT with non-normal latent trait distributions. This study investigated the degree of latent trait estimation error under normal and non-normal…
ERIC Educational Resources Information Center
Hedeker, Donald; And Others
1996-01-01
Methods are proposed and described for estimating the degree to which relations among variables vary at the individual level. As an example, M. Fishbein and I. Ajzen's theory of reasoned action is examined. This article illustrates the use of empirical Bayes methods based on a random-effects regression model to estimate individual influences…
Circumpolar Estimates of Isopycnal Mixing in the ACC from Argo Floats
NASA Astrophysics Data System (ADS)
Roach, C. J.; Balwada, D.; Speer, K. G.
2015-12-01
There are few direct observations of cross-stream isopycnal mixing in the interior of the Southern Ocean, yet such measurements are needed to determine the role of eddies transporting properties across the ACC, and are key to progress toward testing theories of meridional overturning. In light of this, we examine if it is possible to obtain estimates of mixing from Argo float trajectories. We divided the Southern Ocean into overlapping 15° longitude bins before estimating mixing. Resulting diffusivities ranged from 300 to 3000 m² s⁻¹, with peaks corresponding to the Scotia Sea, and the Kerguelen and Campbell Plateaus. Comparison of our diffusivities with previous regional studies demonstrated good agreement. Tests of the methodology in the DIMES region found that mixing from Argo floats agreed closely with mixing from RAFOS floats. To further test the method we used the Southern Ocean State Estimate velocity fields to advect particles with Argo- and RAFOS-float-like behaviours. Stirring estimates from the particles agreed well with each other in the Kerguelen Island region, South Pacific and Scotia Sea, despite the differences in the imposed behaviour. Finally, these estimates were compared to the mixing length suppression theory presented in Ferrari and Nikurashin (2010). This mixing length suppression theory quantifies horizontal diffusivity similarly to Prandtl (1925), but the mixing length is suppressed in the presence of mean flows and eddy phase speeds. Our results suggest that the theory can explain both the structure and magnitude of mixing using mean flow data. An exception is near the Kerguelen and Campbell Plateaus, where the theory underestimates mixing relative to our results.
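For orientation, one commonly quoted schematic form of such a suppression factor is given below; this is a simplified rendering from memory, and the exact expression and constants in Ferrari and Nikurashin (2010) may differ.

```latex
K \;\approx\; \frac{K_0}{\,1 + k^2\,(U - c)^2/\gamma^2\,},
```

where K_0 is the unsuppressed (Prandtl-type) diffusivity set by the eddy velocity and mixing length, U the local mean flow, c the eddy phase speed, k a characteristic eddy wavenumber, and \gamma an eddy decorrelation rate. Mixing is strongly suppressed where the mean flow is fast relative to the eddy phase speed, consistent with the jet behaviour described above.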
High-pressure phase transitions - Examples of classical predictability
NASA Astrophysics Data System (ADS)
Celebonovic, Vladan
1992-09-01
The applicability of the Savic and Kasanin (1962-1967) classical theory of dense matter to laboratory experiments requiring estimates of high-pressure phase transitions was examined by determining phase transition pressures for a set of 19 chemical substances (including elements, hydrocarbons, metal oxides, and salts) for which experimental data were available. A comparison between experimental transition points and those predicted by the Savic-Kasanin theory showed that the theory can be used for estimating values of transition pressures. The results also support conclusions obtained in previous astronomical applications of the Savic-Kasanin theory.
2007-03-01
Sabel, ir. C.A.M. van Moll, prof. dr. D.G… (TNO Defensie en Veiligheid, March 2007). … parameters by means of bottom reflection loss data derived from ambient noise. Matched field inversion is an application of inverse theory. Addressing… theory. Because inverse theory estimates the best fitting model, the uncertainty of this estimate should be specified. These topics are not investigated
The Estimation Theory Framework of Data Assimilation
NASA Technical Reports Server (NTRS)
Cohn, S.; Atlas, Robert (Technical Monitor)
2002-01-01
Lecture 1. The Estimation Theory Framework of Data Assimilation: 1. The basic framework: dynamical and observation models; 2. Assumptions and approximations; 3. The filtering, smoothing, and prediction problems; 4. Discrete Kalman filter and smoother algorithms; and 5. Example: A retrospective data assimilation system
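As a minimal concrete illustration of item 4 above (a sketch under assumed model matrices, not material from the lecture itself), a single predict/update cycle of the discrete Kalman filter can be written as:

```python
# A minimal discrete Kalman filter (predict/update) sketch; all matrices are assumptions.
import numpy as np

def kalman_step(x, P, z, F, Q, H, R):
    """One predict/update cycle of the discrete Kalman filter."""
    # Predict forward with the dynamical model
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update with the observation z through the observation model
    S = H @ P_pred @ H.T + R                  # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)       # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Example: constant-velocity state observed through noisy position measurements.
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])
Q = 0.01 * np.eye(2)
H = np.array([[1.0, 0.0]])
R = np.array([[0.5]])
x, P = np.zeros(2), np.eye(2)
rng = np.random.default_rng(1)
truth = np.array([0.0, 1.0])
for k in range(20):
    truth = F @ truth
    z = H @ truth + rng.normal(0, np.sqrt(R[0, 0]), 1)
    x, P = kalman_step(x, P, z, F, Q, H, R)
print("final state estimate:", x.round(2), "true state:", truth.round(2))
```

The smoothing and retrospective-assimilation topics in items 4 and 5 reuse the same model and covariance bookkeeping, run backward over stored states.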
ERIC Educational Resources Information Center
Lee, Guemin; Park, In-Yong
2012-01-01
Previous assessments of the reliability of test scores for testlet-composed tests have indicated that item-based estimation methods overestimate reliability. This study was designed to address issues related to the extent to which item-based estimation methods overestimate the reliability of test scores composed of testlets and to compare several…
Profile-likelihood Confidence Intervals in Item Response Theory Models.
Chalmers, R Philip; Pek, Jolynn; Liu, Yang
2017-01-01
Confidence intervals (CIs) are fundamental inferential devices which quantify the sampling variability of parameter estimates. In item response theory, CIs have been primarily obtained from large-sample Wald-type approaches based on standard error estimates, derived from the observed or expected information matrix, after parameters have been estimated via maximum likelihood. An alternative approach to constructing CIs is to quantify sampling variability directly from the likelihood function with a technique known as profile-likelihood confidence intervals (PL CIs). In this article, we introduce PL CIs for item response theory models, compare PL CIs to classical large-sample Wald-type CIs, and demonstrate important distinctions among these CIs. CIs are then constructed for parameters directly estimated in the specified model and for transformed parameters which are often obtained post-estimation. Monte Carlo simulation results suggest that PL CIs perform consistently better than Wald-type CIs for both non-transformed and transformed parameters.
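For reference, the standard construction of a profile-likelihood confidence interval for a single parameter \theta_j (a textbook statement consistent with, though not copied from, the article) is

```latex
\mathrm{CI}_{1-\alpha}(\theta_j)=\Big\{\theta_j:\;
2\big[\ell(\hat{\theta})-\ell_p(\theta_j)\big]\le \chi^2_{1,\,1-\alpha}\Big\},
\qquad
\ell_p(\theta_j)=\max_{\theta_{-j}}\ell(\theta_j,\theta_{-j}),
```

where \ell is the log-likelihood, \hat{\theta} the maximum likelihood estimate, and \theta_{-j} the remaining (nuisance) parameters. Unlike Wald-type intervals, the resulting interval need not be symmetric about \hat{\theta}_j, which is one reason the two approaches can differ for transformed parameters.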
Identification of dynamic systems, theory and formulation
NASA Technical Reports Server (NTRS)
Maine, R. E.; Iliff, K. W.
1985-01-01
The problem of estimating parameters of dynamic systems is addressed in order to present the theoretical basis of system identification and parameter estimation in a manner that is complete and rigorous, yet understandable with minimal prerequisites. Maximum likelihood and related estimators are highlighted. The approach used requires familiarity with calculus, linear algebra, and probability, but does not require knowledge of stochastic processes or functional analysis. The treatment emphasizes unification of the various areas; estimation in dynamic systems is treated as a direct outgrowth of static system theory. Topics covered include basic concepts and definitions; numerical optimization methods; probability; statistical estimators; estimation in static systems; stochastic processes; state estimation in dynamic systems; output error, filter error, and equation error methods of parameter estimation in dynamic systems; and the accuracy of the estimates.
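As one concrete instance of the estimators covered, the output-error method with a known measurement-noise covariance R reduces to minimizing a weighted fit error; this is a standard form and the book's exact notation is not reproduced here:

```latex
J(\xi)\;=\;\tfrac{1}{2}\sum_{i=1}^{N}
\big[z(t_i)-\hat{z}_{\xi}(t_i)\big]^{\mathsf T} R^{-1}
\big[z(t_i)-\hat{z}_{\xi}(t_i)\big],
```

where z(t_i) are the measured responses and \hat{z}_{\xi}(t_i) the responses of the deterministic system model for parameter vector \xi; when R is unknown it is estimated as well, which adds a term proportional to \ln\det R to the cost.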
Haberman, Shelby J; Sinharay, Sandip; Chon, Kyong Hee
2013-07-01
Residual analysis (e.g. Hambleton & Swaminathan, Item response theory: principles and applications, Kluwer Academic, Boston, 1985; Hambleton, Swaminathan, & Rogers, Fundamentals of item response theory, Sage, Newbury Park, 1991) is a popular method to assess fit of item response theory (IRT) models. We suggest a form of residual analysis that may be applied to assess item fit for unidimensional IRT models. The residual analysis consists of a comparison of the maximum-likelihood estimate of the item characteristic curve with an alternative ratio estimate of the item characteristic curve. The large sample distribution of the residual is proved to be standardized normal when the IRT model fits the data. We compare the performance of our suggested residual to the standardized residual of Hambleton et al. (Fundamentals of item response theory, Sage, Newbury Park, 1991) in a detailed simulation study. We then calculate our suggested residuals using data from an operational test. The residuals appear to be useful in assessing the item fit for unidimensional IRT models.
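For comparison, the standardized residual of Hambleton et al. referred to here is usually written (standard form, quoted from memory) as

```latex
z_{jg} \;=\; \frac{O_{jg}-E_{jg}}{\sqrt{E_{jg}\,(1-E_{jg})/N_g}},
```

where, for item j and ability group g, O_{jg} is the observed proportion of correct responses, E_{jg} the model-implied probability at the group's ability level, and N_g the number of examinees in the group; the residual proposed in this paper instead compares a maximum-likelihood and a ratio estimate of the item characteristic curve.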
Allometric scaling theory applied to FIA biomass estimation
David C. Chojnacky
2002-01-01
Tree biomass estimates in the Forest Inventory and Analysis (FIA) database are derived from numerous methodologies whose abundance and complexity raise questions about consistent results throughout the U.S. A new model based on allometric scaling theory ("WBE") offers simplified methodology and a theoretically sound basis for improving the reliability and...
Reliability of Test Scores in Nonparametric Item Response Theory.
ERIC Educational Resources Information Center
Sijtsma, Klaas; Molenaar, Ivo W.
1987-01-01
Three methods for estimating reliability are studied within the context of nonparametric item response theory. Two were proposed originally by Mokken and a third is developed in this paper. Using a Monte Carlo strategy, these three estimation methods are compared with four "classical" lower bounds to reliability. (Author/JAZ)
Signal Estimation, Inverse Scattering, and Problems in One and Two Dimensions.
1982-11-01
attention to implication for new estimation algorithms and signal processing and, to a lesser extent, for system theory . The publications resulting...from the work are listed by category and date. They are briefly organized and reviewed under five major headings: (1) Two-Dimensional System Theory ; (2
Dual-Process Theory and Signal-Detection Theory of Recognition Memory
ERIC Educational Resources Information Center
Wixted, John T.
2007-01-01
Two influential models of recognition memory, the unequal-variance signal-detection model and a dual-process threshold/detection model, accurately describe the receiver operating characteristic, but only the latter model can provide estimates of recollection and familiarity. Such estimates often accord with those provided by the remember-know…
ERIC Educational Resources Information Center
Kim, Sooyeon; Moses, Tim; Yoo, Hanwook Henry
2015-01-01
The purpose of this inquiry was to investigate the effectiveness of item response theory (IRT) proficiency estimators in terms of estimation bias and error under multistage testing (MST). We chose a 2-stage MST design in which 1 adaptation to the examinees' ability levels takes place. It includes 4 modules (1 at Stage 1, 3 at Stage 2) and 3 paths…
ERIC Educational Resources Information Center
Rindermann, Heiner; te Nijenhuis, Jan
2012-01-01
A high-quality estimate of the mean IQ of a country requires giving a well-validated test to a nationally representative sample, which usually is not feasible in developing countries. So, we used a convenience sample and four corrections based on theory and empirical findings to arrive at a good-quality estimate of the mean IQ in Bali. Our study…
Spatial estimation from remotely sensed data via empirical Bayes models
NASA Technical Reports Server (NTRS)
Hill, J. R.; Hinkley, D. V.; Kostal, H.; Morris, C. N.
1984-01-01
Multichannel satellite image data, available as LANDSAT imagery, are recorded as a multivariate time series (four channels, multiple passovers) in two spatial dimensions. The application of parametric empirical Bayes theory to classification of, and estimating the probability of, each crop type at each of a large number of pixels is considered. This theory involves both the probability distribution of imagery data, conditional on crop types, and the prior spatial distribution of crop types. For the latter Markov models indexed by estimable parameters are used. A broad outline of the general theory reveals several questions for further research. Some detailed results are given for the special case of two crop types when only a line transect is analyzed. Finally, the estimation of an underlying continuous process on the lattice is discussed which would be applicable to such quantities as crop yield.
Upscaling gas permeability in tight-gas sandstones
NASA Astrophysics Data System (ADS)
Ghanbarian, B.; Torres-Verdin, C.; Lake, L. W.; Marder, M. P.
2017-12-01
Klinkenberg-corrected gas permeability (k) estimation in tight-gas sandstones is essential for gas exploration and production in low-permeability porous rocks. Most models for estimating k are a function of porosity (ϕ), tortuosity (τ), pore shape factor (s) and a characteristic length scale (lc). Estimation of the latter, however, has been the subject of debate in the literature. Here we invoke two different upscaling approaches from statistical physics: (1) the effective-medium approximation (EMA) and (2) critical path analysis (CPA) to estimate lc from the pore throat-size distribution derived from the mercury intrusion capillary pressure (MICP) curve. τ is approximated from: (1) concepts of percolation theory and (2) formation resistivity factor measurements (F = τ/ϕ). We then estimate k of eighteen tight-gas sandstones from lc, τ, and ϕ by assuming two different pore shapes: cylindrical and slit-shaped. Comparison with Klinkenberg-corrected k measurements showed that τ was estimated more accurately from F measurements than from percolation theory. Generally speaking, our results implied that the EMA estimated k within a factor of two of the measurements and more precisely than CPA. We further found that the assumption of cylindrical pores yielded more accurate k estimates when τ was estimated from concepts of percolation theory than the assumption of slit-shaped pores. However, the EMA with slit-shaped pores estimated k more precisely than that with cylindrical pores when τ was estimated from F measurements.
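Schematically, the class of models described reduces to a relation of the following form; this is a generic scaling consistent with the abstract's description, not the specific EMA or CPA expressions used in the study:

```latex
k \;\approx\; \frac{\phi\, l_c^{2}}{s\,\tau},
```

so the estimate of k inherits most of its uncertainty from the characteristic length l_c and the tortuosity \tau, which is why the choice of upscaling approach (EMA vs. CPA) and of the \tau source (percolation theory vs. the formation factor F = \tau/\phi) matters.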
Item response theory - A first approach
NASA Astrophysics Data System (ADS)
Nunes, Sandra; Oliveira, Teresa; Oliveira, Amílcar
2017-07-01
The Item Response Theory (IRT) has become one of the most popular scoring frameworks for measurement data, frequently used in computerized adaptive testing, cognitively diagnostic assessment and test equating. According to Andrade et al. (2000), IRT can be defined as a set of mathematical models (Item Response Models, IRM) constructed to represent the probability of an individual giving the right answer to an item of a particular test. The number of Item Response Models available for measurement analysis has increased considerably in the last fifteen years due to increasing computer power and due to a demand for accuracy and more meaningful inferences grounded in complex data. The developments in modeling with Item Response Theory were related to developments in estimation theory, most remarkably Bayesian estimation with Markov chain Monte Carlo algorithms (Patz & Junker, 1999). The popularity of Item Response Theory has also implied numerous overviews in books and journals, and many connections between IRT and other statistical estimation procedures, such as factor analysis and structural equation modeling, have been made repeatedly (van der Linden & Hambleton, 1997). As stated before, Item Response Theory covers a variety of measurement models, ranging from basic one-dimensional models for dichotomously and polytomously scored items and their multidimensional analogues to models that incorporate information about cognitive sub-processes which influence the overall item response process. The aim of this work is to introduce the main concepts associated with one-dimensional models of Item Response Theory, to specify the logistic models with one, two and three parameters, to discuss some properties of these models and to present the main estimation procedures.
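The three logistic models referred to are conventionally written as follows (standard forms; D ≈ 1.7 is an optional scaling constant):

```latex
\text{1PL: } P_i(\theta)=\frac{1}{1+e^{-D(\theta-b_i)}},\qquad
\text{2PL: } P_i(\theta)=\frac{1}{1+e^{-D a_i(\theta-b_i)}},\qquad
\text{3PL: } P_i(\theta)=c_i+\frac{1-c_i}{1+e^{-D a_i(\theta-b_i)}},
```

where b_i is the item difficulty, a_i the item discrimination, and c_i the lower asymptote (pseudo-guessing) parameter.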
Tools of Robustness for Item Response Theory.
ERIC Educational Resources Information Center
Jones, Douglas H.
This paper briefly demonstrates a few of the possibilities of a systematic application of robustness theory, concentrating on the estimation of ability when the true item response model does and does not fit the data. The definition of the maximum likelihood estimator (MLE) of ability is briefly reviewed. After introducing the notion of…
IRTPRO 2.1 for Windows (Item Response Theory for Patient-Reported Outcomes)
ERIC Educational Resources Information Center
Paek, Insu; Han, Kyung T.
2013-01-01
This article reviews a new item response theory (IRT) model estimation program, IRTPRO 2.1, for Windows that is capable of unidimensional and multidimensional IRT model estimation for existing and user-specified constrained IRT models for dichotomously and polytomously scored item response data. (Contains 1 figure and 2 notes.)
Jiang, Zhehan; Skorupski, William
2017-12-12
In many behavioral research areas, multivariate generalizability theory (mG theory) has typically been used to investigate the reliability of certain multidimensional assessments. However, traditional mG-theory estimation (namely, using frequentist approaches) has limits, leading researchers to fail to take full advantage of the information that mG theory can offer regarding the reliability of measurements. Alternatively, Bayesian methods provide more information than frequentist approaches can offer. This article presents instructional guidelines on how to implement mG-theory analyses in a Bayesian framework; in particular, BUGS code is presented to fit commonly seen designs from mG theory, including single-facet designs, two-facet crossed designs, and two-facet nested designs. In addition to concrete examples that are closely related to the selected designs and the corresponding BUGS code, a simulated dataset is provided to demonstrate the utility and advantages of the Bayesian approach. This article is intended to serve as a tutorial reference for applied researchers and methodologists conducting mG-theory studies.
An estimator-predictor approach to PLL loop filter design
NASA Technical Reports Server (NTRS)
Statman, J. I.; Hurd, W. J.
1986-01-01
An approach to the design of digital phase locked loops (DPLLs), using estimation theory concepts in the selection of a loop filter, is presented. The key concept is that the DPLL closed-loop transfer function is decomposed into an estimator and a predictor. The estimator provides recursive estimates of phase, frequency, and higher order derivatives, while the predictor compensates for the transport lag inherent in the loop. This decomposition results in a straightforward loop filter design procedure, enabling use of techniques from optimal and sub-optimal estimation theory. A design example for a particular choice of estimator is presented, followed by analysis of the associated bandwidth, gain margin, and steady state errors caused by unmodeled dynamics. This approach is under consideration for the design of the Deep Space Network (DSN) Advanced Receiver Carrier DPLL.
ERIC Educational Resources Information Center
Yelboga, Atilla; Tavsancil, Ezel
2010-01-01
In this research, classical test theory and generalizability theory analyses were carried out on data obtained with a job performance scale for the years 2005 and 2006. The reliability coefficients estimated from the classical test theory and generalizability theory analyses were compared. In classical test theory, test-retest…
Kappesser, Judith; de C Williams, Amanda C
2008-08-01
Observer underestimation of others' pain was studied using a concept from evolutionary psychology: a cheater detection mechanism from social contract theory, applied to relatives and friends of chronic pain patients. 127 participants estimated characters' pain intensity and fairness of behaviour after reading four vignettes describing characters suffering from pain. Four cues were systematically varied: the character continuing or stopping liked tasks; continuing or stopping disliked tasks; availability of medical evidence; and pain intensity as rated by characters. Results revealed that pain intensity and the two behavioural variables had an effect on pain estimates: high pain self-reports and stopping all tasks led to high pain estimates; pain was estimated to be lowest when characters stopped disliked but continued with liked tasks. This combination was also rated least fair. Results support the use of social contract theory as a theoretical framework to explore pain judgements.
The current status of REH theory. [Random Evolutionary Hits in biological molecular evolution
NASA Technical Reports Server (NTRS)
Holmquist, R.; Jukes, T. H.
1981-01-01
A response is made to the evaluation of Fitch (1980) of REH (random evolutionary hits) theory for the evolutionary divergence of proteins and nucleic acids. Correct calculations for the beta hemoglobin mRNAs of the human, mouse and rabbit in the absence and presence of selective constraints are summarized, and it is shown that the alternative evolutionary analysis of Fitch underestimates the total fixed mutations. It is further shown that the model used by Fitch to test for the completeness of the count of total base substitutions is in fact a variant of REH theory. Considerations of the variance inherent in evolutionary estimations are also presented which show the REH model to produce no more variance than other evolutionary models. In the reply, it is argued that, despite the objections raised, REH theory applied to proteins gives inaccurate estimates of total gene substitutions. It is further contended that REH theory developed for nucleic sequences suffers from problems relating to the frequency of nucleotide substitutions, the identity of the codons accepting silent and amino acid-changing substitutions, and estimate uncertainties.
Asymptotic stability estimates near an equilibrium point
NASA Astrophysics Data System (ADS)
Dumas, H. Scott; Meyer, Kenneth R.; Palacián, Jesús F.; Yanguas, Patricia
2017-07-01
We use the error bounds for adiabatic invariants found in the work of Chartier, Murua and Sanz-Serna [3] to bound the solutions of a Hamiltonian system near an equilibrium over exponentially long times. Our estimates depend only on the linearized system and not on the higher order terms as in KAM theory, nor do we require any steepness or convexity conditions as in Nekhoroshev theory. We require that the equilibrium point where our estimate applies satisfy a type of formal stability called Lie stability.
Psychometric Properties of IRT Proficiency Estimates
ERIC Educational Resources Information Center
Kolen, Michael J.; Tong, Ye
2010-01-01
Psychometric properties of item response theory proficiency estimates are considered in this paper. Proficiency estimators based on summed scores and pattern scores include non-Bayes maximum likelihood and test characteristic curve estimators and Bayesian estimators. The psychometric properties investigated include reliability, conditional…
ERIC Educational Resources Information Center
DeMars, Christine E.
2012-01-01
In structural equation modeling software, either limited-information (bivariate proportions) or full-information item parameter estimation routines could be used for the 2-parameter item response theory (IRT) model. Limited-information methods assume the continuous variable underlying an item response is normally distributed. For skewed and…
Ramsay-Curve Item Response Theory for the Three-Parameter Logistic Item Response Model
ERIC Educational Resources Information Center
Woods, Carol M.
2008-01-01
In Ramsay-curve item response theory (RC-IRT), the latent variable distribution is estimated simultaneously with the item parameters of a unidimensional item response model using marginal maximum likelihood estimation. This study evaluates RC-IRT for the three-parameter logistic (3PL) model with comparisons to the normal model and to the empirical…
Characterizing Sources of Uncertainty in Item Response Theory Scale Scores
ERIC Educational Resources Information Center
Yang, Ji Seung; Hansen, Mark; Cai, Li
2012-01-01
Traditional estimators of item response theory scale scores ignore uncertainty carried over from the item calibration process, which can lead to incorrect estimates of the standard errors of measurement (SEMs). Here, the authors review a variety of approaches that have been applied to this problem and compare them on the basis of their statistical…
Estimation of Item Response Theory Parameters in the Presence of Missing Data
ERIC Educational Resources Information Center
Finch, Holmes
2008-01-01
Missing data are a common problem in a variety of measurement settings, including responses to items on both cognitive and affective assessments. Researchers have shown that such missing data may create problems in the estimation of item difficulty parameters in the Item Response Theory (IRT) context, particularly if they are ignored. At the same…
ERIC Educational Resources Information Center
Kelderman, Henk
1992-01-01
Describes algorithms used in the computer program LOGIMO for obtaining maximum likelihood estimates of the parameters in loglinear models. These algorithms are also useful for the analysis of loglinear item-response theory models. Presents modified versions of the iterative proportional fitting and Newton-Raphson algorithms. Simulated data…
Two Prophecy Formulas for Assessing the Reliability of Item Response Theory-Based Ability Estimates
ERIC Educational Resources Information Center
Raju, Nambury S.; Oshima, T.C.
2005-01-01
Two new prophecy formulas for estimating item response theory (IRT)-based reliability of a shortened or lengthened test are proposed. Some of the relationships between the two formulas, one of which is identical to the well-known Spearman-Brown prophecy formula, are examined and illustrated. The major assumptions underlying these formulas are…
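For context, the Spearman-Brown prophecy formula that one of the proposed formulas reduces to is

```latex
\rho_{kk'} \;=\; \frac{k\,\rho_{xx'}}{1+(k-1)\,\rho_{xx'}},
```

which predicts the reliability \rho_{kk'} of a test lengthened (or shortened) by a factor k from the reliability \rho_{xx'} of the original test; the new formulas play the analogous role for IRT-based ability estimates.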
Observed Score and True Score Equating Procedures for Multidimensional Item Response Theory
ERIC Educational Resources Information Center
Brossman, Bradley Grant
2010-01-01
The purpose of this research was to develop observed score and true score equating procedures to be used in conjunction with the Multidimensional Item Response Theory (MIRT) framework. Currently, MIRT scale linking procedures exist to place item parameter estimates and ability estimates on the same scale after separate calibrations are conducted.…
ERIC Educational Resources Information Center
Bilir, Mustafa Kuzey
2009-01-01
This study uses a new psychometric model (mixture item response theory-MIMIC model) that simultaneously estimates differential item functioning (DIF) across manifest groups and latent classes. Current DIF detection methods investigate DIF from only one side, either across manifest groups (e.g., gender, ethnicity, etc.), or across latent classes…
Using SAS PROC MCMC for Item Response Theory Models
ERIC Educational Resources Information Center
Ames, Allison J.; Samonte, Kelli
2015-01-01
Interest in using Bayesian methods for estimating item response theory models has grown at a remarkable rate in recent years. This attentiveness to Bayesian estimation has also inspired a growth in available software such as WinBUGS, R packages, BMIRT, MPLUS, and SAS PROC MCMC. This article intends to provide an accessible overview of Bayesian…
Nitrogen nucleation in a cryogenic supersonic nozzle
NASA Astrophysics Data System (ADS)
Bhabhe, Ashutosh; Wyslouzil, Barbara
2011-12-01
We follow the vapor-liquid phase transition of N₂ in a cryogenic supersonic nozzle apparatus using static pressure measurements. Under our operating conditions, condensation always occurs well below the triple point. Mean field kinetic nucleation theory (MKNT) does a better job of predicting the conditions corresponding to the estimated maximum nucleation rates, Jmax = 10^(17±1) cm⁻³ s⁻¹, than two variants of classical nucleation theory. Combining the current results with the nucleation pulse chamber measurements of Iland et al. [J. Chem. Phys. 130, 114508-1 (2009)], we use nucleation theorems to estimate the critical cluster properties. Both theories overestimate the size of the critical cluster, but MKNT does a good job of estimating the excess internal energy of the clusters.
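The (first) nucleation theorem used for such estimates is commonly stated as follows; this is the standard form, and the paper's exact working is not reproduced here:

```latex
n^{*} \;\approx\; \left(\frac{\partial \ln J}{\partial \ln S}\right)_{T} - 1,
```

where J is the nucleation rate and S the supersaturation, so the critical cluster size n^{*} follows from the measured dependence of J on S; a companion theorem based on the temperature dependence of J yields the excess internal energy of the critical cluster.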
Optimal Measurements for Simultaneous Quantum Estimation of Multiple Phases
NASA Astrophysics Data System (ADS)
Pezzè, Luca; Ciampini, Mario A.; Spagnolo, Nicolò; Humphreys, Peter C.; Datta, Animesh; Walmsley, Ian A.; Barbieri, Marco; Sciarrino, Fabio; Smerzi, Augusto
2017-09-01
A quantum theory of multiphase estimation is crucial for quantum-enhanced sensing and imaging and may link quantum metrology to more complex quantum computation and communication protocols. In this Letter, we tackle one of the key difficulties of multiphase estimation: obtaining a measurement which saturates the fundamental sensitivity bounds. We derive necessary and sufficient conditions for projective measurements acting on pure states to saturate the ultimate theoretical bound on precision given by the quantum Fisher information matrix. We apply our theory to the specific example of interferometric phase estimation using photon number measurements, a convenient choice in the laboratory. Our results thus introduce concepts and methods relevant to the future theoretical and experimental development of multiparameter estimation.
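The fundamental bound referred to is the multiparameter quantum Cramér-Rao bound (standard statement):

```latex
\mathrm{Cov}(\hat{\boldsymbol{\theta}}) \;\succeq\; \frac{1}{\nu}\, F_Q(\boldsymbol{\theta})^{-1},
```

where F_Q is the quantum Fisher information matrix and \nu the number of independent repetitions. For multiple phases this bound is not automatically attainable, which is why conditions on the measurement (here, projective measurements acting on pure states) are needed for saturation.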
Tensor modes on the string theory landscape
NASA Astrophysics Data System (ADS)
Westphal, Alexander
2013-04-01
We attempt an estimate for the distribution of the tensor mode fraction r over the landscape of vacua in string theory. The dynamics of eternal inflation and quantum tunneling lead to a kind of democracy on the landscape, providing no bias towards large-field or small-field inflation regardless of the class of measure. The tensor mode fraction then follows the number frequency distributions of inflationary mechanisms of string theory over the landscape. We show that an estimate of the relative number frequencies for small-field vs large-field inflation, while unattainable on the whole landscape, may be within reach as a regional answer for warped Calabi-Yau flux compactifications of type IIB string theory.
Estimating population size with correlated sampling unit estimates
David C. Bowden; Gary C. White; Alan B. Franklin; Joseph L. Ganey
2003-01-01
Finite population sampling theory is useful in estimating total population size (abundance) from abundance estimates of each sampled unit (quadrat). We develop estimators that allow correlated quadrat abundance estimates, even for quadrats in different sampling strata. Correlated quadrat abundance estimates based on mark-recapture or distance sampling methods occur...
ERIC Educational Resources Information Center
Klinger, Don A.; Rogers, W. Todd
2003-01-01
The estimation accuracy of procedures based on classical test score theory and item response theory (generalized partial credit model) were compared for examinations consisting of multiple-choice and extended-response items. Analysis of British Columbia Scholarship Examination results found an error rate of about 10 percent for both methods, with…
ERIC Educational Resources Information Center
Marcoulides, Katerina M.
2018-01-01
This study examined the use of Bayesian analysis methods for the estimation of item parameters in a two-parameter logistic item response theory model. Using simulated data under various design conditions with both informative and non-informative priors, the parameter recovery of Bayesian analysis methods were examined. Overall results showed that…
Consequences of Ignoring Guessing when Estimating the Latent Density in Item Response Theory
ERIC Educational Resources Information Center
Woods, Carol M.
2008-01-01
In Ramsay-curve item response theory (RC-IRT), the latent variable distribution is estimated simultaneously with the item parameters. In extant Monte Carlo evaluations of RC-IRT, the item response function (IRF) used to fit the data is the same one used to generate the data. The present simulation study examines RC-IRT when the IRF is imperfectly…
ERIC Educational Resources Information Center
Schochet, Peter Z.
2015-01-01
This report presents the statistical theory underlying the "RCT-YES" software that estimates and reports impacts for RCTs for a wide range of designs used in social policy research. The report discusses a unified, non-parametric design-based approach for impact estimation using the building blocks of the Neyman-Rubin-Holland causal…
The application of mean field theory to image motion estimation.
Zhang, J; Hanauer, G G
1995-01-01
Previously, Markov random field (MRF) model-based techniques have been proposed for image motion estimation. Since motion estimation is usually an ill-posed problem, various constraints are needed to obtain a unique and stable solution. The main advantage of the MRF approach is its capacity to incorporate such constraints, for instance, motion continuity within an object and motion discontinuity at the boundaries between objects. In the MRF approach, motion estimation is often formulated as an optimization problem, and two frequently used optimization methods are simulated annealing (SA) and iterative-conditional mode (ICM). Although the SA is theoretically optimal in the sense of finding the global optimum, it usually takes many iterations to converge. The ICM, on the other hand, converges quickly, but its results are often unsatisfactory due to its "hard decision" nature. Previously, the authors have applied the mean field theory to image segmentation and image restoration problems. It provides results nearly as good as SA but with much faster convergence. The present paper shows how the mean field theory can be applied to MRF model-based motion estimation. This approach is demonstrated on both synthetic and real-world images, where it produced good motion estimates.
Probability Theory Plus Noise: Descriptive Estimation and Inferential Judgment.
Costello, Fintan; Watts, Paul
2018-01-01
We describe a computational model of two central aspects of people's probabilistic reasoning: descriptive probability estimation and inferential probability judgment. This model assumes that people's reasoning follows standard frequentist probability theory, but it is subject to random noise. This random noise has a regressive effect in descriptive probability estimation, moving probability estimates away from normative probabilities and toward the center of the probability scale. This random noise has an anti-regressive effect in inferential judgment, however. These regressive and anti-regressive effects explain various reliable and systematic biases seen in people's descriptive probability estimation and inferential probability judgment. This model predicts that these contrary effects will tend to cancel out in tasks that involve both descriptive estimation and inferential judgment, leading to unbiased responses in those tasks. We test this model by applying it to one such task, described by Gallistel et al. Participants' median responses in this task were unbiased, agreeing with normative probability theory over the full range of responses. Our model captures the pattern of unbiased responses in this task, while simultaneously explaining systematic biases away from normatively correct probabilities seen in other tasks. Copyright © 2018 Cognitive Science Society, Inc.
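The regressive effect described above is easy to see in a small simulation. The sketch below is a simplified reading of the noise model (not the authors' code): each of n observed events is misread with probability d, so the expected estimate of a true probability p becomes (1 − 2d)p + d, pulled toward the middle of the scale.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_estimate(p_true, n_events=200, d=0.15, n_reps=5000):
    """Simulate descriptive probability estimation with read-out noise.

    Each event is truly 'A' with probability p_true; with probability d the
    reasoner misclassifies it. The estimate is the observed proportion of 'A'.
    """
    events = rng.random((n_reps, n_events)) < p_true    # true occurrences
    flips = rng.random((n_reps, n_events)) < d           # noisy read-out
    observed = np.where(flips, ~events, events)          # misclassified events
    return observed.mean(axis=1)

for p in (0.1, 0.3, 0.5, 0.7, 0.9):
    est = noisy_estimate(p)
    # Expected value under this noise model: (1 - 2d) * p + d
    print(f"p = {p:.1f}  mean estimate = {est.mean():.3f}  "
          f"prediction = {(1 - 2 * 0.15) * p + 0.15:.3f}")
```

Estimates below 0.5 are pushed up and those above 0.5 are pushed down, matching the regressive pattern the abstract describes for descriptive estimation.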
Hedeker, D; Flay, B R; Petraitis, J
1996-02-01
Methods are proposed and described for estimating the degree to which relations among variables vary at the individual level. As an example of the methods, M. Fishbein and I. Ajzen's (1975; I. Ajzen & M. Fishbein, 1980) theory of reasoned action is examined, which posits first that an individual's behavioral intentions are a function of 2 components: the individual's attitudes toward the behavior and the subjective norms as perceived by the individual. A second component of their theory is that individuals may weight these 2 components differently in assessing their behavioral intentions. This article illustrates the use of empirical Bayes methods based on a random-effects regression model to estimate these individual influences, estimating an individual's weighting of both of these components (attitudes toward the behavior and subjective norms) in relation to their behavioral intentions. This method can be used when an individual's behavioral intentions, subjective norms, and attitudes toward the behavior are all repeatedly measured. In this case, the empirical Bayes estimates are derived as a function of the data from the individual, strengthened by the overall sample data.
NASA Astrophysics Data System (ADS)
Wang, Kaicun; Dickinson, Robert E.
2012-06-01
This review surveys the basic theories, observational methods, satellite algorithms, and land surface models for terrestrial evapotranspiration, E (or λE, i.e., latent heat flux), including a long-term variability and trends perspective. The basic theories used to estimate E are the Monin-Obukhov similarity theory (MOST), the Bowen ratio method, and the Penman-Monteith equation. The latter two theoretical expressions combine MOST with surface energy balance. Estimates of E can differ substantially between these three approaches because of their use of different input data. Surface and satellite-based measurement systems can provide accurate estimates of diurnal, daily, and annual variability of E. But their estimation of longer time variability is largely not established. A reasonable estimate of E as a global mean can be obtained from a surface water budget method, but its regional distribution is still rather uncertain. Current land surface models provide widely different ratios of the transpiration by vegetation to total E. This source of uncertainty therefore limits the capability of models to provide the sensitivities of E to precipitation deficits and land cover change.
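Where the review mentions the Penman-Monteith combination equation, a worked example may help. The sketch below evaluates a standard textbook form of the equation; all numeric inputs are illustrative placeholders, not values from the review.

```python
def penman_monteith(Rn, G, delta, gamma, rho_a, cp, vpd, ra, rs):
    """Latent heat flux (W m^-2) from a standard Penman-Monteith form:

        lambda_E = (delta * (Rn - G) + rho_a * cp * vpd / ra)
                   / (delta + gamma * (1 + rs / ra))
    """
    return (delta * (Rn - G) + rho_a * cp * vpd / ra) / (delta + gamma * (1 + rs / ra))

# Illustrative mid-day values over a well-watered grass surface (assumed)
lam_E = penman_monteith(
    Rn=500.0,      # net radiation, W m^-2
    G=50.0,        # ground heat flux, W m^-2
    delta=145.0,   # slope of the saturation vapour pressure curve, Pa K^-1
    gamma=66.0,    # psychrometric constant, Pa K^-1
    rho_a=1.2,     # air density, kg m^-3
    cp=1005.0,     # specific heat of air, J kg^-1 K^-1
    vpd=1200.0,    # vapour pressure deficit (e_s - e_a), Pa
    ra=50.0,       # aerodynamic resistance, s m^-1
    rs=70.0,       # surface (canopy) resistance, s m^-1
)
print(f"lambda*E ≈ {lam_E:.0f} W m^-2")
```

For these inputs the latent heat flux comes out at roughly 300 W m^-2, i.e., a large share of the available energy, which is the kind of partitioning the surface energy balance methods above are built around.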
The Probabilities of Unique Events
Khemlani, Sangeet S.; Lotstein, Max; Johnson-Laird, Phil
2012-01-01
Many theorists argue that the probabilities of unique events, even real possibilities such as President Obama's re-election, are meaningless. As a consequence, psychologists have seldom investigated them. We propose a new theory (implemented in a computer program) in which such estimates depend on an intuitive non-numerical system capable only of simple procedures, and a deliberative system that maps intuitions into numbers. The theory predicts that estimates of the probabilities of conjunctions should often tend to split the difference between the probabilities of the two conjuncts. We report two experiments showing that individuals commit such violations of the probability calculus, and corroborating other predictions of the theory, e.g., individuals err in the same way even when they make non-numerical verbal estimates, such as that an event is highly improbable. PMID:23056224
A subjective utilitarian theory of moral judgment.
Cohen, Dale J; Ahn, Minwoo
2016-10-01
Current theories hypothesize that moral judgments are difficult because rational and emotional decision processes compete. We present a fundamentally different theory of moral judgment: the Subjective Utilitarian Theory of moral judgment. The Subjective Utilitarian Theory posits that people try to identify and save the competing item with the greatest "personal value." Moral judgments become difficult only when the competing items have similar personal values. In Experiment 1, we estimate the personal values of 104 items. In Experiments 2-5, we show that the distributional overlaps of the estimated personal values account for over 90% of the variance in reaction times (RTs) and response choices in a moral judgment task. Our model fundamentally restructures our understanding of moral judgments from a competition between decision processes to a competition between similarly valued items. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Theoretical precision analysis of RFM localization of satellite remote sensing imagery
NASA Astrophysics Data System (ADS)
Zhang, Jianqing; Xv, Biao
2009-11-01
The traditional method for assessing the precision of the Rational Function Model (RFM) uses a large number of check points, computing the mean square error by comparing calculated coordinates with known coordinates. This approach rests on probability theory: with a sufficiently large sample, the statistical estimate of the mean square error can be taken to approach its true value. This paper instead works from the perspective of survey adjustment, taking the law of propagation of error as its theoretical basis and computing the theoretical precision of RFM localization. Using SPOT5 three-line array imagery as experimental data, the results of the traditional method and the proposed method are compared; the comparison confirms that the traditional method is feasible and answers the question of its theoretical precision from the survey adjustment standpoint.
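The survey-adjustment argument rests on the law of propagation of error: if the output coordinates are a function f of the measured quantities with covariance C, the coordinate covariance is approximately J C Jᵀ, where J is the Jacobian of f. A generic, hedged sketch of that step (not the paper's specific RFM formulation):

```python
import numpy as np

def propagate_covariance(jacobian, input_cov):
    """First-order (linearized) law of propagation of error: C_out = J C_in J^T."""
    J = np.asarray(jacobian, dtype=float)
    C = np.asarray(input_cov, dtype=float)
    return J @ C @ J.T

# Toy example with a hypothetical 2x3 Jacobian and independent input errors
J = np.array([[1.0, 0.5, -0.2],
              [0.0, 1.2,  0.4]])
C_in = np.diag([0.01, 0.04, 0.09])   # variances of the three input quantities
C_out = propagate_covariance(J, C_in)
print("output standard deviations:", np.sqrt(np.diag(C_out)))
```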
Multiscale System Theory
Benveniste, Albert (IRISA-INRIA); Nikoukhah, Ramine (INRIA)
1990-02-21
LIDS-P-1953. ...the development of a corresponding system theory and a theory of stochastic processes and their estimation. The research presented in this and several...
Comparing Three Estimation Methods for the Three-Parameter Logistic IRT Model
ERIC Educational Resources Information Center
Lamsal, Sunil
2015-01-01
Different estimation procedures have been developed for the unidimensional three-parameter item response theory (IRT) model. These techniques include the marginal maximum likelihood estimation, the fully Bayesian estimation using Markov chain Monte Carlo simulation techniques, and the Metropolis-Hastings Robbin-Monro estimation. With each…
Transmission overhaul and replacement predictions using Weibull and renewal theory
NASA Technical Reports Server (NTRS)
Savage, M.; Lewicki, D. G.
1989-01-01
A method to estimate the frequency of transmission overhauls is presented. This method is based on the two-parameter Weibull statistical distribution for component life. A second method is presented to estimate the number of replacement components needed to support the transmission overhaul pattern. The second method is based on renewal theory. Confidence statistics are applied with both methods to improve the statistical estimate of sample behavior. A transmission example is also presented to illustrate the use of the methods. Transmission overhaul frequency and component replacement calculations are included in the example.
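A hedged sketch of the two ingredients named above: a two-parameter Weibull life model for the time to overhaul and a Monte Carlo renewal calculation for the expected number of replacement components over an operating interval. The shape and scale values are illustrative, not taken from the report.

```python
import numpy as np

rng = np.random.default_rng(1)

def weibull_reliability(t, beta, eta):
    """Two-parameter Weibull reliability R(t) = exp(-(t/eta)**beta)."""
    return np.exp(-(t / eta) ** beta)

def expected_renewals(horizon, beta, eta, n_sims=20000):
    """Monte Carlo estimate of the renewal function: mean number of
    replacements in [0, horizon] when each replacement restores the
    component to as-new condition."""
    counts = np.zeros(n_sims)
    for i in range(n_sims):
        t, n = 0.0, 0
        while True:
            t += eta * rng.weibull(beta)   # draw one Weibull(beta, eta) life
            if t > horizon:
                break
            n += 1
        counts[i] = n
    return counts.mean()

beta, eta = 2.5, 3000.0   # shape and scale (hours), illustrative values
print("R(1500 h) =", round(weibull_reliability(1500.0, beta, eta), 3))
print("expected replacements in 10000 h ≈", round(expected_renewals(10000.0, beta, eta), 2))
```

With real data the Weibull parameters would be fitted from component life records and confidence bounds attached, as the abstract indicates.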
Space-Time Error Representation and Estimation in Navier-Stokes Calculations
NASA Technical Reports Server (NTRS)
Barth, Timothy J.
2006-01-01
The mathematical framework for a-posteriori error estimation of functionals elucidated by Eriksson et al. [7] and Becker and Rannacher [3] is revisited in a space-time context. Using these theories, a hierarchy of exact and approximate error representation formulas is presented for use in error estimation and mesh adaptivity. Numerical space-time results for simple model problems as well as compressible Navier-Stokes flow at Re = 300 over a 2D circular cylinder are then presented to demonstrate elements of the error representation theory for time-dependent problems.
Magis, David
2014-11-01
In item response theory, the classical estimators of ability are highly sensitive to response disturbances and can return strongly biased estimates of the true underlying ability level. Robust methods were introduced to lessen the impact of such aberrant responses on the estimation process. The computation of asymptotic (i.e., large-sample) standard errors (ASE) for these robust estimators, however, has not yet been fully considered. This paper focuses on a broad class of robust ability estimators, defined by an appropriate selection of the weight function and the residual measure, for which the ASE is derived from the theory of estimating equations. The maximum likelihood (ML) and the robust estimators, together with their estimated ASEs, are then compared in a simulation study by generating random guessing disturbances. It is concluded that both the estimators and their ASE perform similarly in the absence of random guessing, while the robust estimator and its estimated ASE are less biased and outperform their ML counterparts in the presence of random guessing with large impact on the item response process. © 2013 The British Psychological Society.
Discrete-to-continuous transition in quantum phase estimation
NASA Astrophysics Data System (ADS)
Rządkowski, Wojciech; Demkowicz-Dobrzański, Rafał
2017-09-01
We analyze the problem of quantum phase estimation in which the set of allowed phases forms a discrete N-element subset of the whole [0, 2π] interval, φ_n = 2πn/N, n = 0, …, N−1, and study the discrete-to-continuous transition N → ∞ for various cost functions as well as the mutual information. We also analyze the relation between the problems of phase discrimination and estimation by considering a step cost function of a given width σ around the true estimated value. We show that in general a direct application of the theory of covariant measurements for a discrete subgroup of the U(1) group leads to suboptimal strategies due to an implicit requirement of estimating only the phases that appear in the prior distribution. We develop the theory of subcovariant measurements to remedy this situation and demonstrate truly optimal estimation strategies when performing a transition from discrete to continuous phase estimation.
Bi, Sheng; Zeng, Xiao; Tang, Xin; Qin, Shujia; Lai, King Wai Chiu
2016-01-01
Compressive sensing (CS) theory has opened up new paths for the development of signal processing applications. Based on this theory, a novel single pixel camera architecture has been introduced to overcome the current limitations and challenges of traditional focal plane arrays. However, video quality based on this method is limited by existing acquisition and recovery methods, and the method also suffers from being time-consuming. In this paper, a multi-frame motion estimation algorithm is proposed in CS video to enhance the video quality. The proposed algorithm uses multiple frames to implement motion estimation. Experimental results show that using multi-frame motion estimation can improve the quality of recovered videos. To further reduce the motion estimation time, a block match algorithm is used to process motion estimation. Experiments demonstrate that using the block match algorithm can reduce motion estimation time by 30%. PMID:26950127
The Probabilities of Unique Events
2012-08-30
social justice and also participated in antinuclear demonstrations. The participants ranked the probability that Linda is a feminist bank teller as...investigated them. We propose a new theory (implemented in a computer program) in which such estimates depend on an intuitive non-numerical system capable only...of simple procedures, and a deliberative system that maps intuitions into numbers. The theory predicts that estimates of the probabilities of
ASYMPTOTIC DISTRIBUTION OF ΔAUC, NRIs, AND IDI BASED ON THEORY OF U-STATISTICS
Demler, Olga V.; Pencina, Michael J.; Cook, Nancy R.; D’Agostino, Ralph B.
2017-01-01
The change in AUC (ΔAUC), the IDI, and NRI are commonly used measures of risk prediction model performance. Some authors have reported good validity of associated methods of estimating their standard errors (SE) and construction of confidence intervals, whereas others have questioned their performance. To address these issues we unite the ΔAUC, IDI, and three versions of the NRI under the umbrella of the U-statistics family. We rigorously show that the asymptotic behavior of ΔAUC, NRIs, and IDI fits the asymptotic distribution theory developed for U-statistics. We prove that the ΔAUC, NRIs, and IDI are asymptotically normal, unless they compare nested models under the null hypothesis. In the latter case, asymptotic normality and existing SE estimates cannot be applied to ΔAUC, NRIs, or IDI. In the former case SE formulas proposed in the literature are equivalent to SE formulas obtained from U-statistics theory if we ignore adjustment for estimated parameters. We use Sukhatme-Randles-deWet condition to determine when adjustment for estimated parameters is necessary. We show that adjustment is not necessary for SEs of the ΔAUC and two versions of the NRI when added predictor variables are significant and normally distributed. The SEs of the IDI and three-category NRI should always be adjusted for estimated parameters. These results allow us to define when existing formulas for SE estimates can be used and when resampling methods such as the bootstrap should be used instead when comparing nested models. We also use the U-statistic theory to develop a new SE estimate of ΔAUC. PMID:28627112
Asymptotic distribution of ∆AUC, NRIs, and IDI based on theory of U-statistics.
Demler, Olga V; Pencina, Michael J; Cook, Nancy R; D'Agostino, Ralph B
2017-09-20
The change in area under the curve (∆AUC), the integrated discrimination improvement (IDI), and net reclassification index (NRI) are commonly used measures of risk prediction model performance. Some authors have reported good validity of associated methods of estimating their standard errors (SE) and construction of confidence intervals, whereas others have questioned their performance. To address these issues, we unite the ∆AUC, IDI, and three versions of the NRI under the umbrella of the U-statistics family. We rigorously show that the asymptotic behavior of ∆AUC, NRIs, and IDI fits the asymptotic distribution theory developed for U-statistics. We prove that the ∆AUC, NRIs, and IDI are asymptotically normal, unless they compare nested models under the null hypothesis. In the latter case, asymptotic normality and existing SE estimates cannot be applied to ∆AUC, NRIs, or IDI. In the former case, SE formulas proposed in the literature are equivalent to SE formulas obtained from U-statistics theory if we ignore adjustment for estimated parameters. We use Sukhatme-Randles-deWet condition to determine when adjustment for estimated parameters is necessary. We show that adjustment is not necessary for SEs of the ∆AUC and two versions of the NRI when added predictor variables are significant and normally distributed. The SEs of the IDI and three-category NRI should always be adjusted for estimated parameters. These results allow us to define when existing formulas for SE estimates can be used and when resampling methods such as the bootstrap should be used instead when comparing nested models. We also use the U-statistic theory to develop a new SE estimate of ∆AUC. Copyright © 2017 John Wiley & Sons, Ltd.
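When nested models are compared and the added predictors may contribute nothing, the abstract recommends resampling instead of the closed-form standard errors. A minimal bootstrap sketch for the SE of ΔAUC between two risk scores on the same subjects (this assumes scikit-learn is available and is not the authors' U-statistic estimator):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)

def delta_auc_bootstrap_se(y, score_base, score_new, n_boot=2000):
    """Bootstrap standard error of AUC(new) - AUC(base) on the same subjects."""
    y = np.asarray(y)
    n = len(y)
    deltas = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        if len(np.unique(y[idx])) < 2:      # resample must contain both classes
            continue
        deltas.append(roc_auc_score(y[idx], score_new[idx])
                      - roc_auc_score(y[idx], score_base[idx]))
    return float(np.std(deltas, ddof=1))

# Tiny synthetic example (made-up scores, stronger model second)
y = rng.integers(0, 2, 300)
base = y + rng.normal(0, 1.5, 300)
new = y + rng.normal(0, 1.0, 300)
print("bootstrap SE of delta-AUC:", round(delta_auc_bootstrap_se(y, base, new), 4))
```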
Revisiting Boundary Perturbation Theory for Inhomogeneous Transport Problems
Favorite, Jeffrey A.; Gonzalez, Esteban
2017-03-10
Adjoint-based first-order perturbation theory is applied again to boundary perturbation problems. Rahnema developed a perturbation estimate that gives an accurate first-order approximation of a flux or reaction rate within a radioactive system when the boundary is perturbed. When the response of interest is the flux or leakage current on the boundary, the Roussopoulos perturbation estimate has long been used. The Rahnema and Roussopoulos estimates differ in one term. Our paper shows that the Rahnema and Roussopoulos estimates can be derived consistently, using different responses, from a single variational functional (due to Gheorghiu and Rahnema), resolving any apparent contradiction. In analytic test problems, Rahnema’s estimate and the Roussopoulos estimate produce exact first derivatives of the response of interest when appropriately applied. We also present a realistic, nonanalytic test problem.
Estimations of expectedness and potential surprise in possibility theory
NASA Technical Reports Server (NTRS)
Prade, Henri; Yager, Ronald R.
1992-01-01
This note investigates how various ideas of 'expectedness' can be captured in the framework of possibility theory. Particularly, we are interested in trying to introduce estimates of the kind of lack of surprise expressed by people when saying 'I would not be surprised that...' before an event takes place, or by saying 'I knew it' after its realization. In possibility theory, a possibility distribution is supposed to model the relative levels of mutually exclusive alternatives in a set, or equivalently, the alternatives are assumed to be rank-ordered according to their level of possibility to take place. Four basic set-functions associated with a possibility distribution, including standard possibility and necessity measures, are discussed from the point of view of what they estimate when applied to potential events. Extensions of these estimates based on the notions of Q-projection or OWA operators are proposed when only significant parts of the possibility distribution are retained in the evaluation. The case of partially-known possibility distributions is also considered. Some potential applications are outlined.
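The basic set functions referred to above are simple to compute once a possibility distribution is written down. A minimal sketch with a made-up distribution (not from the paper): the possibility of an event is the maximum possibility of its elements, and its necessity is one minus the possibility of the complement.

```python
def possibility(pi, event):
    """Possibility measure: Pi(A) = max over w in A of pi(w)."""
    return max(pi[w] for w in event) if event else 0.0

def necessity(pi, event):
    """Necessity measure: N(A) = 1 - Pi(not A)."""
    complement = set(pi) - set(event)
    return 1.0 - possibility(pi, complement) if complement else 1.0

# Hypothetical possibility distribution over mutually exclusive outcomes
pi = {"sunny": 1.0, "cloudy": 0.7, "rain": 0.3, "snow": 0.1}

event = {"sunny", "cloudy"}          # "no precipitation"
print("Pi(no precipitation) =", possibility(pi, event))   # 1.0: fully possible
print("N(no precipitation)  =", necessity(pi, event))     # 0.7: moderately certain
```

Here "no precipitation" is fully possible (Π = 1) but only moderately certain (N = 0.7), the kind of graded "I would not be surprised" judgment the note is after.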
NASA Technical Reports Server (NTRS)
Diamante, J. M.; Englar, T. S., Jr.; Jazwinski, A. H.
1977-01-01
Estimation theory, which originated in guidance and control research, is applied to the analysis of air quality measurements and atmospheric dispersion models to provide reliable area-wide air quality estimates. A method for low dimensional modeling (in terms of the estimation state vector) of the instantaneous and time-average pollutant distributions is discussed. In particular, the fluctuating plume model of Gifford (1959) is extended to provide an expression for the instantaneous concentration due to an elevated point source. Individual models are also developed for all parameters in the instantaneous and the time-average plume equations, including the stochastic properties of the instantaneous fluctuating plume.
ERIC Educational Resources Information Center
Magis, David; Raiche, Gilles
2012-01-01
This paper focuses on two estimators of ability with logistic item response theory models: the Bayesian modal (BM) estimator and the weighted likelihood (WL) estimator. For the BM estimator, Jeffreys' prior distribution is considered, and the corresponding estimator is referred to as the Jeffreys modal (JM) estimator. It is established that under…
Intuitive theories of information: beliefs about the value of redundancy.
Soll, J B
1999-03-01
In many situations, quantity estimates from multiple experts or diagnostic instruments must be collected and combined. Normatively, and all else equal, one should value information sources that are nonredundant, in the sense that correlation in forecast errors should be minimized. Past research on the preference for redundancy has been inconclusive. While some studies have suggested that people correctly place higher value on uncorrelated inputs when collecting estimates, others have shown that people either ignore correlation or, in some cases, even prefer it. The present experiments show that the preference for redundancy depends on one's intuitive theory of information. The most common intuitive theory identified is the Error Tradeoff Model (ETM), which explicitly distinguishes between measurement error and bias. According to ETM, measurement error can only be averaged out by consulting the same source multiple times (normatively false), and bias can only be averaged out by consulting different sources (normatively true). As a result, ETM leads people to prefer redundant estimates when the ratio of measurement error to bias is relatively high. Other participants favored different theories. Some adopted the normative model, while others were reluctant to mathematically average estimates from different sources in any circumstance. In a post hoc analysis, science majors were more likely than others to subscribe to the normative model. While tentative, this result lends insight into how intuitive theories might develop and also has potential ramifications for how statistical concepts such as correlation might best be learned and internalized. Copyright 1999 Academic Press.
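The normative argument about redundancy can be made concrete with a small simulation: when each source carries its own persistent bias plus independent measurement error, averaging two different sources beats consulting the same source twice. This is an illustration of the statistical point, not a re-analysis of the experiments.

```python
import numpy as np

rng = np.random.default_rng(3)

def rmse_of_average(bias_sd, noise_sd, same_source, n=100000, truth=100.0):
    """RMSE of the average of two estimates of `truth`.

    Each source has a persistent bias ~ N(0, bias_sd) and adds independent
    measurement noise ~ N(0, noise_sd) to every estimate it gives.
    """
    bias1 = rng.normal(0, bias_sd, n)
    bias2 = bias1 if same_source else rng.normal(0, bias_sd, n)
    est1 = truth + bias1 + rng.normal(0, noise_sd, n)
    est2 = truth + bias2 + rng.normal(0, noise_sd, n)
    avg = (est1 + est2) / 2
    return np.sqrt(np.mean((avg - truth) ** 2))

print("same source twice :", round(rmse_of_average(5.0, 3.0, same_source=True), 2))
print("two sources       :", round(rmse_of_average(5.0, 3.0, same_source=False), 2))
```

The gain comes from the independent biases partially cancelling, which is exactly the correlation-in-errors argument the normative model rests on.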
Automated Transition State Theory Calculations for High-Throughput Kinetics.
Bhoorasingh, Pierre L; Slakman, Belinda L; Seyedzadeh Khanshan, Fariba; Cain, Jason Y; West, Richard H
2017-09-21
A scarcity of known chemical kinetic parameters leads to the use of many reaction rate estimates, which are not always sufficiently accurate, in the construction of detailed kinetic models. To reduce the reliance on these estimates and improve the accuracy of predictive kinetic models, we have developed a high-throughput, fully automated, reaction rate calculation method, AutoTST. The algorithm integrates automated saddle-point geometry search methods and a canonical transition state theory kinetics calculator. The automatically calculated reaction rates compare favorably to existing estimated rates. Comparison against high-level theoretical calculations shows the new automated method performs better than rate estimates when the estimate is made by a poor analogy. The method will improve by accounting for internal rotor contributions and by improving methods to determine molecular symmetry.
ERIC Educational Resources Information Center
Kim, Sooyeon; Livingston, Samuel A.
2017-01-01
The purpose of this simulation study was to assess the accuracy of a classical test theory (CTT)-based procedure for estimating the alternate-forms reliability of scores on a multistage test (MST) having 3 stages. We generated item difficulty and discrimination parameters for 10 parallel, nonoverlapping forms of the complete 3-stage test and…
ERIC Educational Resources Information Center
Yang, Xiangdong; Poggio, John C.; Glasnapp, Douglas R.
2006-01-01
The effects of five ability estimators, that is, maximum likelihood estimator, weighted likelihood estimator, maximum a posteriori, expected a posteriori, and Owen's sequential estimator, on the performances of the item response theory-based adaptive classification procedure on multiple categories were studied via simulations. The following…
Probabilistic models in human sensorimotor control
Wolpert, Daniel M.
2009-01-01
Sensory and motor uncertainty form a fundamental constraint on human sensorimotor control. Bayesian decision theory (BDT) has emerged as a unifying framework to understand how the central nervous system performs optimal estimation and control in the face of such uncertainty. BDT has two components: Bayesian statistics and decision theory. Here we review Bayesian statistics and show how it applies to estimating the state of the world and our own body. Recent results suggest that when learning novel tasks we are able to learn the statistical properties of both the world and our own sensory apparatus so as to perform estimation using Bayesian statistics. We review studies which suggest that humans can combine multiple sources of information to form maximum likelihood estimates, can incorporate prior beliefs about possible states of the world so as to generate maximum a posteriori estimates and can use Kalman filter-based processes to estimate time-varying states. Finally, we review Bayesian decision theory in motor control and how the central nervous system processes errors to determine loss functions and optimal actions. We review results that suggest we plan movements based on statistics of our actions that result from signal-dependent noise on our motor outputs. Taken together these studies provide a statistical framework for how the motor system performs in the presence of uncertainty. PMID:17628731
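The maximum likelihood cue-combination result reviewed here has a compact closed form: the combined estimate weights each cue by its inverse variance, and the combined variance is smaller than either cue's alone. A minimal sketch with made-up visual and proprioceptive readings:

```python
def combine_cues(estimates, variances):
    """Maximum likelihood fusion of independent Gaussian cues.

    Returns the inverse-variance weighted mean and its (reduced) variance.
    """
    weights = [1.0 / v for v in variances]
    combined = sum(w * x for w, x in zip(weights, estimates)) / sum(weights)
    combined_var = 1.0 / sum(weights)
    return combined, combined_var

# Hypothetical hand-position estimates (cm) from vision and proprioception
estimate, variance = combine_cues(estimates=[30.0, 33.0], variances=[1.0, 4.0])
print(f"combined estimate = {estimate:.2f} cm, variance = {variance:.2f} cm^2")
```

The more reliable cue dominates, but the fused variance (0.8 here) is lower than either single-cue variance, which is the signature of optimal integration the review describes.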
A method for estimating the rolling moment due to spin rate for arbitrary planform wings
NASA Technical Reports Server (NTRS)
Poppen, W. A., Jr.
1985-01-01
The application of aerodynamic theory for estimating the force and moments acting upon spinning airplanes is of interest. For example, strip theory has been used to generate estimates of the aerodynamic characteristics as a function of spin rate for wing-dominated configurations for angles of attack up to 90 degrees. This work, which had been limited to constant chord wings, is extended here to wings comprised of tapered segments. Comparison of the analytical predictions with rotary balance wind tunnel results shows that large discrepancies remain, particularly for those angles-of-attack greater than 40 degrees.
The application of the statistical theory of extreme values to gust-load problems
NASA Technical Reports Server (NTRS)
Press, Harry
1950-01-01
An analysis is presented which indicates that the statistical theory of extreme values is applicable to the problems of predicting the frequency of encountering the larger gust loads and gust velocities for both specific test conditions as well as commercial transport operations. The extreme-value theory provides an analytic form for the distributions of maximum values of gust load and velocity. Methods of fitting the distribution are given along with a method of estimating the reliability of the predictions. The theory of extreme values is applied to available load data from commercial transport operations. The results indicate that the estimates of the frequency of encountering the larger loads are more consistent with the data and more reliable than those obtained in previous analyses. (author)
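A minimal sketch of the analytic machinery the abstract refers to: fit a Gumbel (Type I extreme value) distribution to per-flight maximum gust loads by the method of moments, then read off exceedance probabilities for larger loads. The data below are synthetic placeholders, not flight records.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic per-flight maximum load factors (placeholder data)
max_loads = rng.gumbel(loc=1.8, scale=0.25, size=500)

# Method-of-moments fit of the Gumbel distribution
euler_gamma = 0.5772156649
scale = np.std(max_loads, ddof=1) * np.sqrt(6) / np.pi
loc = np.mean(max_loads) - euler_gamma * scale

def prob_exceed(x, loc, scale):
    """P(max load > x) under the fitted Gumbel model."""
    return 1.0 - np.exp(-np.exp(-(x - loc) / scale))

x = 2.8
p = prob_exceed(x, loc, scale)
print(f"fitted loc = {loc:.3f}, scale = {scale:.3f}")
print(f"P(max load > {x}) ≈ {p:.4f}  (return period ≈ {1 / p:.0f} flights)")
```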
Maximum drag reduction asymptotes and the cross-over to the Newtonian plug
NASA Astrophysics Data System (ADS)
Benzi, R.; de Angelis, E.; L'Vov, V. S.; Procaccia, I.; Tiberkevich, V.
2006-03-01
We employ the full FENE-P model of the hydrodynamics of a dilute polymer solution to derive a theoretical approach to drag reduction in wall-bounded turbulence. We recapture the results of a recent simplified theory which derived the universal maximum drag reduction (MDR) asymptote, and complement that theory with a discussion of the cross-over from the MDR to the Newtonian plug when the drag reduction saturates. The FENE-P model gives rise to a rather complex theory due to the interaction of the velocity field with the polymeric conformation tensor, making analytic estimates quite taxing. To overcome this we develop the theory in a computer-assisted manner, checking at each point the analytic estimates by direct numerical simulations (DNS) of viscoelastic turbulence in a channel.
Best Design for Multidimensional Computerized Adaptive Testing With the Bifactor Model
Seo, Dong Gi; Weiss, David J.
2015-01-01
Most computerized adaptive tests (CATs) have been studied using the framework of unidimensional item response theory. However, many psychological variables are multidimensional and might benefit from using a multidimensional approach to CATs. This study investigated the accuracy, fidelity, and efficiency of a fully multidimensional CAT algorithm (MCAT) with a bifactor model using simulated data. Four item selection methods in MCAT were examined for three bifactor pattern designs using two multidimensional item response theory models. To compare MCAT item selection and estimation methods, a fixed test length was used. The Ds-optimality item selection improved θ estimates with respect to a general factor, and either D- or A-optimality improved estimates of the group factors in three bifactor pattern designs under two multidimensional item response theory models. The MCAT model without a guessing parameter functioned better than the MCAT model with a guessing parameter. The MAP (maximum a posteriori) estimation method provided more accurate θ estimates than the EAP (expected a posteriori) method under most conditions, and MAP showed lower observed standard errors than EAP under most conditions, except for a general factor condition using Ds-optimality item selection. PMID:29795848
Economic, demographic and social factors of energy demand in Mexican households, 2008-2014
NASA Astrophysics Data System (ADS)
Perez Pena, Rafael
This research project estimates the effect of economic, demographic, and social factors on residential energy demand in Mexico from 2008 to 2014. It estimates demand equations for electricity, natural gas, liquefied petroleum gas (LPG), coal, and firewood using Mexican household data from 2008 to 2014. It also applies accessibility theory and estimates energy access indicators using different specifications of demand for LPG in 2014. Sprawl measures, the gravity model, and central place theory are the accessibility-theory foundations of the energy access indicators. Results suggest that household demand for energy in Mexico from 2008 to 2014 increases with household income, population size, the householder's educational level, and energy access, and decreases with energy price and household size. Demand for firewood and coal decreases as education increases. LPG and firewood have a monopolistically competitive market structure. Energy access indicators informed by accessibility theory are statistically significant and show the expected sign when applied to LPG in Mexican households in 2014.
Testing 40 Predictions from the Transtheoretical Model Again, with Confidence
ERIC Educational Resources Information Center
Velicer, Wayne F.; Brick, Leslie Ann D.; Fava, Joseph L.; Prochaska, James O.
2013-01-01
Testing Theory-based Quantitative Predictions (TTQP) represents an alternative to traditional Null Hypothesis Significance Testing (NHST) procedures and is more appropriate for theory testing. The theory generates explicit effect size predictions and these effect size estimates, with related confidence intervals, are used to test the predictions.…
ERIC Educational Resources Information Center
Stenner, A. Jackson; Rohlf, Richard J.
The merits of generalizability theory in the formulation of construct definitions and in the determination of reliability estimates are discussed. The broadened conceptualization of reliability brought about by Cronbach's generalizability theory is reviewed. Career Maturity Inventory data from a sample of 60 ninth grade students is used to…
A Test of Durkheim's Theory of Suicide in Primitive Societies.
ERIC Educational Resources Information Center
Lester, David
1992-01-01
Classified primitive societies as high, moderate, or low on independent measures of social integration and social regulation to test Durkheim's theory of suicide. Estimated frequency of suicide did not differ between those societies predicted to have high, moderate, and low suicide rates. Durkheim's theory was not confirmed. (Author/NB)
Flight Mechanics/Estimation Theory Symposium, 1989
NASA Technical Reports Server (NTRS)
Stengle, Thomas (Editor)
1989-01-01
Numerous topics in flight mechanics and estimation were discussed. Satellite attitude control, quaternion estimation, orbit and attitude determination, spacecraft maneuvers, spacecraft navigation, gyroscope calibration, spacecraft rendezvous, and atmospheric drag model calculations for spacecraft lifetime prediction are among the topics covered.
STRONG ORACLE OPTIMALITY OF FOLDED CONCAVE PENALIZED ESTIMATION.
Fan, Jianqing; Xue, Lingzhou; Zou, Hui
2014-06-01
Folded concave penalization methods have been shown to enjoy the strong oracle property for high-dimensional sparse estimation. However, a folded concave penalization problem usually has multiple local solutions and the oracle property is established only for one of the unknown local solutions. A challenging fundamental issue still remains that it is not clear whether the local optimum computed by a given optimization algorithm possesses those nice theoretical properties. To close this important theoretical gap in over a decade, we provide a unified theory to show explicitly how to obtain the oracle solution via the local linear approximation algorithm. For a folded concave penalized estimation problem, we show that as long as the problem is localizable and the oracle estimator is well behaved, we can obtain the oracle estimator by using the one-step local linear approximation. In addition, once the oracle estimator is obtained, the local linear approximation algorithm converges, namely it produces the same estimator in the next iteration. The general theory is demonstrated by using four classical sparse estimation problems, i.e., sparse linear regression, sparse logistic regression, sparse precision matrix estimation and sparse quantile regression.
STRONG ORACLE OPTIMALITY OF FOLDED CONCAVE PENALIZED ESTIMATION
Fan, Jianqing; Xue, Lingzhou; Zou, Hui
2014-01-01
Folded concave penalization methods have been shown to enjoy the strong oracle property for high-dimensional sparse estimation. However, a folded concave penalization problem usually has multiple local solutions and the oracle property is established only for one of the unknown local solutions. A challenging fundamental issue still remains that it is not clear whether the local optimum computed by a given optimization algorithm possesses those nice theoretical properties. To close this important theoretical gap in over a decade, we provide a unified theory to show explicitly how to obtain the oracle solution via the local linear approximation algorithm. For a folded concave penalized estimation problem, we show that as long as the problem is localizable and the oracle estimator is well behaved, we can obtain the oracle estimator by using the one-step local linear approximation. In addition, once the oracle estimator is obtained, the local linear approximation algorithm converges, namely it produces the same estimator in the next iteration. The general theory is demonstrated by using four classical sparse estimation problems, i.e., sparse linear regression, sparse logistic regression, sparse precision matrix estimation and sparse quantile regression. PMID:25598560
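The one-step local linear approximation described in both abstracts can be sketched compactly: starting from an initial estimate, replace the folded concave penalty by its tangent (the SCAD derivative evaluated at the initial coefficients) and solve one weighted lasso. The sketch below reuses scikit-learn's Lasso via a column-rescaling trick; it is an illustrative reading of the procedure, not the authors' code, and the tuning values are arbitrary.

```python
import numpy as np
from sklearn.linear_model import Lasso

def scad_derivative(beta_abs, lam, a=3.7):
    """Derivative of the SCAD penalty, used as adaptive weights in LLA."""
    return np.where(beta_abs <= lam, lam,
                    np.maximum(a * lam - beta_abs, 0.0) / (a - 1.0))

def one_step_lla(X, y, lam, beta_init):
    """One-step local linear approximation for SCAD-penalized least squares.

    Solves min_b 1/(2n)||y - Xb||^2 + sum_j w_j |b_j| with w_j = SCAD'(|beta_init_j|)
    by rescaling columns so an ordinary lasso solver can be reused.
    """
    w = np.maximum(scad_derivative(np.abs(beta_init), lam), 1e-3 * lam)  # floor keeps rescaling well conditioned
    X_tilde = X / w                           # column j divided by w_j
    fit = Lasso(alpha=1.0, fit_intercept=False, max_iter=50000).fit(X_tilde, y)
    return fit.coef_ / w                      # undo the rescaling

# Tiny synthetic example: sparse truth, lasso initializer, then one LLA step
rng = np.random.default_rng(5)
n, p = 200, 20
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:3] = [3.0, -2.0, 1.5]
y = X @ beta_true + rng.normal(scale=0.5, size=n)

lam = 0.1
beta_lasso = Lasso(alpha=lam, fit_intercept=False).fit(X, y).coef_
beta_scad = one_step_lla(X, y, lam, beta_lasso)
print("nonzeros after one LLA step:", np.flatnonzero(np.round(beta_scad, 3)))
```

Coefficients that the initializer already finds large receive essentially no penalty in the LLA step, while small coefficients keep the full lasso penalty, which is how the procedure approaches the oracle solution.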
Development of Additional Hazard Assessment Models
1977-03-01
globules, their trajectory (the distance from the spill point to the impact point on the river bed), and the time required for sinking. Established theories...chemicals, the dissolution rate is estimated by using eddy diffusivity surface renewal theories. The validity of predictions of these theories has been...theories and experimental data on aeration of rivers. * Describe dispersion in rivers with stationary area source and sources moving with the stream
Probabilistic Estimation of Rare Random Collisions in 3 Space
2009-03-01
extended Poisson process as a feature of probability theory. With the bulk of research in extended Poisson processes going into parameter estimation, the...application of extended Poisson processes to spatial processes is largely untouched. Faddy performed a short study of spatial data, but overtly...the theory of extended Poisson processes. To date, the processes are limited in that the rates only depend on the number of arrivals at some time
2003-04-01
generally considered to be passive data. Instead the genetic material should be capable of being algorithmic information, that is, program code or...
NASA Astrophysics Data System (ADS)
Tikhonov, D. A.; Sobolev, E. V.
2011-04-01
A method of integral equations of the theory of liquids in the reference interaction site model (RISM) approximation is used to estimate the Gibbs energy averaged over equilibrium trajectories computed by molecular mechanics. The peptide oxytocin is selected as the object of interest. The Gibbs energy is calculated using all chemical potential formulas introduced in the RISM approach for the excess chemical potential of solvation and is compared with estimates by the generalized Born model. Some formulas are shown to give the wrong sign of the Gibbs energy change when the peptide passes from the gas phase into the aqueous environment; the other formulas give overestimated Gibbs energy changes with the right sign. Note that allowance for the repulsive correction in the approximate analytical expressions for the Gibbs energy derived by thermodynamic perturbation theory is not a remedy.
Crop area estimation based on remotely-sensed data with an accurate but costly subsample
NASA Technical Reports Server (NTRS)
Gunst, R. F.
1983-01-01
Alternatives to sampling-theory stratified and regression estimators of crop production and timber biomass were examined. An alternative estimator which is viewed as especially promising is the errors-in-variables regression estimator. Investigations established the need for caution with this estimator when the ratio of two error variances is not precisely known.
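The caution in the last sentence can be made explicit with a small sketch of one errors-in-variables estimator, a Deming-type fit whose slope depends directly on the assumed ratio of the two error variances (shown as an illustration, not the study's exact estimator):

```python
import numpy as np

def deming_fit(x, y, delta=1.0):
    """Errors-in-variables (Deming) regression slope and intercept.

    `delta` is the assumed ratio var(error in y) / var(error in x); results
    are sensitive to this ratio, which is the caution noted in the abstract.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxx = np.var(x, ddof=1)
    syy = np.var(y, ddof=1)
    sxy = np.cov(x, y, ddof=1)[0, 1]
    term = syy - delta * sxx
    slope = (term + np.sqrt(term ** 2 + 4 * delta * sxy ** 2)) / (2 * sxy)
    intercept = y.mean() - slope * x.mean()
    return slope, intercept

# Synthetic ground data (x) vs. remotely sensed estimates (y), both with error
rng = np.random.default_rng(6)
truth = rng.uniform(10, 50, 80)
x = truth + rng.normal(0, 2.0, 80)
y = 1.3 * truth + 5 + rng.normal(0, 2.0, 80)
print("slope, intercept with delta = 1:", deming_fit(x, y, delta=1.0))
```

Re-running the fit with different values of delta shows how strongly the slope moves when the variance ratio is misspecified.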
Correlations in polymer blends: Simulations, perturbation theory, and coarse-grained theory
NASA Astrophysics Data System (ADS)
Chung, Jun Kyung
A thermodynamic perturbation theory of symmetric polymer blends is developed that properly accounts for the correlation in the spatial arrangement of monomers. By expanding the free energy of mixing in powers of a small parameter α which controls the incompatibility of two monomer species, we show that the perturbation theory has the form of the original Flory-Huggins theory, to first order in α. However, the lattice coordination number in the original theory is replaced by an effective coordination number. A random walk model for the effective coordination number is found to describe Monte Carlo simulation data very well. We also propose a way to estimate the Flory-Huggins χ parameter by extrapolating the perturbation theory to the limit of a hypothetical system of infinitely long chains. The first order perturbation theory yields an accurate estimation of χ to first order in α. Going to second order, however, turns out to be more involved and an unambiguous determination of the coefficient of the α² term is not possible at the moment. Lastly, we test the predictions of a renormalized one-loop theory of fluctuations using two coarse-grained models of symmetric polymer blends at the critical composition. It is found that the theory accurately describes the correlation effect for relatively small values of χN. In addition, the universality assumption of coarse-grained models is examined and we find results that are supportive of it.
Application of signal detection theory to optics. [image evaluation and restoration
NASA Technical Reports Server (NTRS)
Helstrom, C. W.
1973-01-01
Basic quantum detection and estimation theory, applications to optics, photon counting, and filtering theory are studied. Recent work on the restoration of degraded optical images received at photoelectrically emissive surfaces is also reported, the data used by the method are the numbers of electrons ejected from various parts of the surface.
The Integration of Psycholinguistic and Discourse Processing Theories of Reading Comprehension.
ERIC Educational Resources Information Center
Beebe, Mona J.
To assess the compatibility of miscue analysis and recall analysis as independent elements in a theory of reading comprehension, a study was performed that operationalized each theory and separated its components into measurable units to allow empirical testing. A cueing strategy model was estimated, but the discourse processing model was broken…
Generalized continued fractions and ergodic theory
NASA Astrophysics Data System (ADS)
Pustyl'nikov, L. D.
2003-02-01
In this paper a new theory of generalized continued fractions is constructed and applied to numbers, multidimensional vectors belonging to a real space, and infinite-dimensional vectors with integral coordinates. The theory is based on a concept generalizing the procedure for constructing the classical continued fractions and substantially using ergodic theory. One of the versions of the theory is related to differential equations. In the finite-dimensional case the constructions thus introduced are used to solve problems posed by Weyl in analysis and number theory concerning estimates of trigonometric sums and of the remainder in the distribution law for the fractional parts of the values of a polynomial, and also the problem of characterizing algebraic and transcendental numbers with the use of generalized continued fractions. Infinite-dimensional generalized continued fractions are applied to estimate sums of Legendre symbols and to obtain new results in the classical problem of the distribution of quadratic residues and non-residues modulo a prime. In the course of constructing these continued fractions, an investigation is carried out of the ergodic properties of a class of infinite-dimensional dynamical systems which are also of independent interest.
Probability theory, not the very guide of life.
Juslin, Peter; Nilsson, Håkan; Winman, Anders
2009-10-01
Probability theory has long been taken as the self-evident norm against which to evaluate inductive reasoning, and classical demonstrations of violations of this norm include the conjunction error and base-rate neglect. Many of these phenomena require multiplicative probability integration, whereas people seem more inclined to linear additive integration, in part, at least, because of well-known capacity constraints on controlled thought. In this article, the authors show with computer simulations that when based on approximate knowledge of probabilities, as is routinely the case in natural environments, linear additive integration can yield as accurate estimates, and as good average decision returns, as estimates based on probability theory. It is proposed that in natural environments people have little opportunity or incentive to induce the normative rules of probability theory and, given their cognitive constraints, linear additive integration may often offer superior bounded rationality.
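The article's central claim is easy to reproduce in miniature: when the component probabilities are known only with noise, a simple linear additive combination can estimate a conjunction about as accurately as the normative multiplicative rule. The sketch below is an illustration of that point with arbitrary additive weights, not the authors' simulation.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100000

# True component probabilities of two independent events
pa = rng.uniform(0.05, 0.95, n)
pb = rng.uniform(0.05, 0.95, n)
true_conj = pa * pb

# Noisy knowledge of the components (additive noise, clipped to [0, 1])
pa_hat = np.clip(pa + rng.normal(0, 0.15, n), 0, 1)
pb_hat = np.clip(pb + rng.normal(0, 0.15, n), 0, 1)

mult = pa_hat * pb_hat                                    # normative multiplicative rule
add = np.clip(0.5 * pa_hat + 0.5 * pb_hat - 0.25, 0, 1)  # a linear additive rule (illustrative weights)

rmse = lambda est: np.sqrt(np.mean((est - true_conj) ** 2))
print("RMSE, multiplicative rule :", round(rmse(mult), 3))
print("RMSE, linear additive rule:", round(rmse(add), 3))
```

With noisy inputs the two rules come out close in accuracy, which is the bounded-rationality point the article makes.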
Dual-process theory and signal-detection theory of recognition memory.
Wixted, John T
2007-01-01
Two influential models of recognition memory, the unequal-variance signal-detection model and a dual-process threshold/detection model, accurately describe the receiver operating characteristic, but only the latter model can provide estimates of recollection and familiarity. Such estimates often accord with those provided by the remember-know procedure, and both methods are now widely used in the neuroscience literature to identify the brain correlates of recollection and familiarity. However, in recent years, a substantial literature has accumulated directly contrasting the signal-detection model against the threshold/detection model, and that literature is almost unanimous in its endorsement of signal-detection theory. A dual-process version of signal-detection theory implies that individual recognition decisions are not process pure, and it suggests new ways to investigate the brain correlates of recognition memory. ((c) 2007 APA, all rights reserved).
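A short sketch of the unequal-variance signal-detection model discussed here: sweeping a criterion across two Gaussian strength distributions with different variances generates the predicted ROC, whose z-transformed slope equals the ratio of new-item to old-item standard deviations (below 1 for the typical recognition finding). Parameter values are illustrative.

```python
import numpy as np
from scipy.stats import norm

def uvsd_roc(d_prime=1.5, sigma_old=1.25, criteria=np.linspace(-2.0, 3.5, 12)):
    """Hit and false-alarm rates under the unequal-variance signal-detection model.

    New-item strengths ~ N(0, 1); old-item strengths ~ N(d_prime, sigma_old^2).
    """
    hits = 1.0 - norm.cdf((criteria - d_prime) / sigma_old)
    fas = 1.0 - norm.cdf(criteria)
    return fas, hits

fas, hits = uvsd_roc()
zslope = np.polyfit(norm.ppf(fas), norm.ppf(hits), 1)[0]
print("z-ROC slope ≈", round(zslope, 2), "(an equal-variance model would give 1.0)")
```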
Naive Probability: Model-Based Estimates of Unique Events.
Khemlani, Sangeet S; Lotstein, Max; Johnson-Laird, Philip N
2015-08-01
We describe a dual-process theory of how individuals estimate the probabilities of unique events, such as Hillary Clinton becoming U.S. President. It postulates that uncertainty is a guide to improbability. In its computer implementation, an intuitive system 1 simulates evidence in mental models and forms analog non-numerical representations of the magnitude of degrees of belief. This system has minimal computational power and combines evidence using a small repertoire of primitive operations. It resolves the uncertainty of divergent evidence for single events, for conjunctions of events, and for inclusive disjunctions of events, by taking a primitive average of non-numerical probabilities. It computes conditional probabilities in a tractable way, treating the given event as evidence that may be relevant to the probability of the dependent event. A deliberative system 2 maps the resulting representations into numerical probabilities. With access to working memory, it carries out arithmetical operations in combining numerical estimates. Experiments corroborated the theory's predictions. Participants concurred in estimates of real possibilities. They violated the complete joint probability distribution in the predicted ways, when they made estimates about conjunctions: P(A), P(B), P(A and B), disjunctions: P(A), P(B), P(A or B or both), and conditional probabilities P(A), P(B), P(B|A). They were faster to estimate the probabilities of compound propositions when they had already estimated the probabilities of each of their components. We discuss the implications of these results for theories of probabilistic reasoning. © 2014 Cognitive Science Society, Inc.
NASA Astrophysics Data System (ADS)
Takagi, Hideo D.; Swaddle, Thomas W.
1996-01-01
The outer-sphere contribution to the volume of activation of homogeneous electron exchange reactions is estimated for selected solvents on the basis of the mean spherical approximation (MSA), and the calculated values are compared with those estimated by the Strank-Hush-Marcus (SHM) theory and with activation volumes obtained experimentally for the electron exchange reaction between tris(hexafluoroacetylacetonato)ruthenium(III) and -(II) in acetone, acetonitrile, methanol and chloroform. The MSA treatment, which recognizes the molecular nature of the solvent, does not improve significantly upon the continuous-dielectric SHM theory, which represents the experimental data adequately for the more polar solvents.
Coello Pérez, Eduardo A.; Papenbrock, Thomas F.
2015-07-27
In this paper, we present a model-independent approach to electric quadrupole transitions of deformed nuclei. Based on an effective theory for axially symmetric systems, the leading interactions with electromagnetic fields enter as minimal couplings to gauge potentials, while subleading corrections employ gauge-invariant nonminimal couplings. This approach yields transition operators that are consistent with the Hamiltonian, and the power counting of the effective theory provides us with theoretical uncertainty estimates. We successfully test the effective theory in homonuclear molecules that exhibit a large separation of scales. For ground-state band transitions of rotational nuclei, the effective theory describes data well within theoretical uncertainties at leading order. To probe the theory at subleading order, data with higher precision would be valuable. For transitional nuclei, next-to-leading-order calculations and the high-precision data are consistent within the theoretical uncertainty estimates. In addition, we study the faint interband transitions within the effective theory and focus on the E2 transitions from the 0₂⁺ band (the “β band”) to the ground-state band. Here the predictions from the effective theory are consistent with data for several nuclei, thereby proposing a solution to a long-standing challenge.
Glueball spectra from a matrix model of pure Yang-Mills theory
NASA Astrophysics Data System (ADS)
Acharyya, Nirmalendu; Balachandran, A. P.; Pandey, Mahul; Sanyal, Sambuddha; Vaidya, Sachindeo
2018-05-01
We present variational estimates for the low-lying energies of a simple matrix model that approximates SU(3) Yang-Mills theory on a three-sphere of radius R. By fixing the ground state energy, we obtain the (integrated) renormalization group (RG) equation for the Yang-Mills coupling g as a function of R. This RG equation allows us to estimate the mass of other glueball states, which we find to be in excellent agreement with lattice simulations.
Flight Mechanics/Estimation Theory Symposium, 1992
NASA Technical Reports Server (NTRS)
Stengle, Thomas H. (Editor)
1993-01-01
This conference publication includes 40 papers and abstracts presented at the Flight Mechanics/Estimation Theory Symposium on May 5-7, 1992. Sponsored by the Flight Dynamics Division of Goddard Space Flight Center, this symposium featured technical papers on a wide range of issues related to orbit-attitude prediction, determination, and control; attitude sensor calibration; attitude determination error analysis; attitude dynamics; and orbit decay and maneuver strategy. Government, industry, and the academic community participated in the preparation and presentation of these papers.
Flight Mechanics/Estimation Theory Symposium 1996
NASA Technical Reports Server (NTRS)
Greatorex, Scott (Editor)
1996-01-01
This conference publication includes 34 papers and abstracts presented at the Flight Mechanics/ Estimation Theory Symposium on May 14-16, 1996. Sponsored by the Flight Dynamics Division of Goddard Space Flight Center, this symposium featured technical papers on a wide range of issues related to orbit-attitude prediction, determination, and control; attitude sensor calibration; attitude determination error analysis; attitude dynamics; and orbit decay and maneuver strategy. Government, industry, and the academic community participated in the preparation and presentation of these papers.
Flight Mechanics/Estimation Theory Symposium, 1994
NASA Technical Reports Server (NTRS)
Hartman, Kathy R. (Editor)
1994-01-01
This conference publication includes 41 papers and abstracts presented at the Flight Mechanics/Estimation Theory Symposium on May 17-19, 1994. Sponsored by the Flight Dynamics Division of Goddard Space Flight Center, this symposium featured technical papers on a wide range of issues related to orbit-attitude prediction, determination and control; attitude sensor calibration; attitude determination error analysis; attitude dynamics; and orbit decay and maneuver strategy. Government, industry, and the academic community participated in the preparation and presentation of these papers.
Flight Mechanics/Estimation Theory Symposium, 1990
NASA Technical Reports Server (NTRS)
Stengle, Thomas (Editor)
1990-01-01
This conference publication includes 32 papers and abstracts presented at the Flight Mechanics/Estimation Theory Symposium on May 22-25, 1990. Sponsored by the Flight Dynamics Division of Goddard Space Flight Center, this symposium features technical papers on a wide range of issues related to orbit-attitude prediction, determination and control; attitude sensor calibration; attitude determination error analysis; attitude dynamics; and orbit decay and maneuver strategy. Government, industry, and the academic community participated in the preparation and presentation of these papers.
Flight Mechanics/Estimation Theory Symposium 1995
NASA Technical Reports Server (NTRS)
Hartman, Kathy R. (Editor)
1995-01-01
This conference publication includes 41 papers and abstracts presented at the Flight Mechanics/ Estimation Theory Symposium on May 16-18, 1995. Sponsored by the Flight Dynamics Division of Goddard Space Flight Center, this symposium featured technical papers on a wide range of issues related to orbit-attitude prediction, determination, and control; attitude sensor calibration; attitude determination error analysis; attitude dynamics; and orbit decay and maneuver strategy. Government, industry, and the academic community participated in the preparation and presentation of these papers.
Estimating Hydraulic Parameters When Poroelastic Effects Are Significant
Berg, S.J.; Hsieh, P.A.; Illman, W.A.
2011-01-01
For almost 80 years, deformation-induced head changes caused by poroelastic effects have been observed during pumping tests in multilayered aquifer-aquitard systems. As water in the aquifer is released from compressive storage during pumping, the aquifer is deformed both in the horizontal and vertical directions. This deformation in the pumped aquifer causes deformation in the adjacent layers, resulting in changes in pore pressure that may produce drawdown curves that differ significantly from those predicted by traditional groundwater theory. Although these deformation-induced head changes have been analyzed in several studies by poroelasticity theory, there are at present no practical guidelines for the interpretation of pumping test data influenced by these effects. To investigate the impact that poroelastic effects during pumping tests have on the estimation of hydraulic parameters, we generate synthetic data for three different aquifer-aquitard settings using a poroelasticity model, and then analyze the synthetic data using type curves and parameter estimation techniques, both of which are based on traditional groundwater theory and do not account for poroelastic effects. Results show that even when poroelastic effects result in significant deformation-induced head changes, it is possible to obtain reasonable estimates of hydraulic parameters using methods based on traditional groundwater theory, as long as pumping is sufficiently long so that deformation-induced effects have largely dissipated. © 2011 The Author(s). Journal compilation © 2011 National Ground Water Association.
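The conventional analysis that the synthetic data are interpreted with can be sketched in a few lines: fit the Theis solution s = Q W(u)/(4πT), with u = r²S/(4Tt), to observed drawdowns to recover transmissivity T and storativity S. The sketch below fits synthetic data and only illustrates that traditional (non-poroelastic) workflow; the pumping rate, distance, and parameter values are made up.

```python
import numpy as np
from scipy.special import exp1
from scipy.optimize import curve_fit

Q, r = 1.0e-2, 30.0   # pumping rate (m^3/s) and observation distance (m), illustrative

def theis_drawdown(t, T, S):
    """Theis solution: s = Q/(4*pi*T) * W(u), with u = r^2*S/(4*T*t) and W = exp1."""
    u = r ** 2 * S / (4.0 * T * t)
    return Q / (4.0 * np.pi * T) * exp1(u)

# Synthetic "observed" drawdowns from known parameters plus a little noise
rng = np.random.default_rng(8)
t_obs = np.logspace(1, 5, 30)                     # 10 s to about 1 day
s_obs = theis_drawdown(t_obs, 5.0e-3, 2.0e-4) + rng.normal(0, 0.002, t_obs.size)

(T_fit, S_fit), _ = curve_fit(theis_drawdown, t_obs, s_obs,
                              p0=(1.0e-3, 1.0e-4),
                              bounds=([1e-6, 1e-7], [1.0, 1e-1]))
print(f"estimated T = {T_fit:.2e} m^2/s, S = {S_fit:.2e}")
```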
Theory of Partitioning of Disease Prevalence and Mortality in Observational Data
Akushevich, I.; Yashkin, A.; Kravchenko, J.; Fang, F.; Arbeev, K.; Sloan, F.; Yashin, AI
2017-01-01
In this study, we present a new theory of partitioning of disease prevalence and incidence-based mortality and demonstrate how this theory practically works for analyses of Medicare data. In the theory, the prevalence of a disease and incidence-based mortality are modeled in terms of disease incidence and survival after diagnosis supplemented by information on disease prevalence at the initial age and year available in a dataset. Partitioning of the trends of prevalence and mortality is calculated with minimal assumptions. The resulting expressions for the components of the trends are given by continuous functions of data. The estimator is consistent and stable. The developed methodology is applied for data on type 2 diabetes using individual records from a nationally representative 5% sample of Medicare beneficiaries age 65+. Numerical estimates show excellent concordance between empirical estimates and theoretical predictions. Evaluated partitioning model showed that both prevalence and mortality increase with time. The primary driving factors of the observed prevalence increase are improved survival and increased prevalence at age 65. The increase in diabetes-related mortality is driven by increased prevalence and unobserved trends in time-periods and age-groups outside of the range of the data used in the study. Finally, the properties of the new estimator, possible statistical and systematical uncertainties, and future practical applications of this methodology in epidemiology, demography, public health and health forecasting are discussed. PMID:28130147
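As a rough guide to the kind of relation the partitioning rests on, prevalence can be written schematically in terms of incidence and post-diagnosis survival plus the prevalence carried in at the initial age; the paper's exact estimator differs in detail, and the notation below is mine:

\[
\mathrm{Prev}(a,t) \;\approx\; \mathrm{Prev}\big(a_0,\, t-(a-a_0)\big)\, S(a \mid a_0) \;+\; \int_{a_0}^{a} I\big(x,\, t-(a-x)\big)\, S(a \mid x)\, dx ,
\]

where I(x, τ) is the incidence rate at age x in calendar year τ, S(a | x) is survival to age a after diagnosis at age x, and a_0 is the initial age available in the dataset (age 65 for Medicare).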
Estimating the time evolution of NMR systems via a quantum-speed-limit-like expression
NASA Astrophysics Data System (ADS)
Villamizar, D. V.; Duzzioni, E. I.; Leal, A. C. S.; Auccaise, R.
2018-05-01
Finding the solutions of the equations that describe the dynamics of a given physical system is crucial in order to obtain important information about its evolution. However, by using estimation theory, it is possible to obtain, under certain limitations, some information on its dynamics. The quantum-speed-limit (QSL) theory was originally used to estimate the shortest time in which a Hamiltonian drives an initial state to a final one for a given fidelity. Using the QSL theory in a slightly different way, we are able to estimate the running time of a given quantum process. For that purpose, we impose the saturation of the Anandan-Aharonov bound in a rotating frame of reference where the state of the system travels slower than in the original frame (laboratory frame). Through this procedure it is possible to estimate the actual evolution time in the laboratory frame of reference with good accuracy when compared to previous methods. Our method is tested successfully to predict the time spent in the evolution of nuclear spins 1/2 and 3/2 in NMR systems. We find that the estimated time according to our method is better than previous approaches by up to four orders of magnitude. One disadvantage of our method is that we need to solve a number of transcendental equations, which increases with the system dimension and parameter discretization used to solve such equations numerically.
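For reference, one common statement of the Anandan-Aharonov bound whose saturation is imposed above is (the paper may use an equivalent form):

\[
\tau \;\ge\; \frac{\hbar\,\arccos\!\big(|\langle \psi(0)|\psi(\tau)\rangle|\big)}{\Delta E},
\qquad
\Delta E=\sqrt{\langle H^{2}\rangle-\langle H\rangle^{2}},
\]

with equality when the state evolves along a geodesic in projective Hilbert space; imposing saturation in a slowly rotating frame is what yields the running-time estimate in the laboratory frame.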
Theory of nanobubble formation and induced force in nanochannels
NASA Astrophysics Data System (ADS)
Arai, Noriyoshi; Koishi, Takahiro; Ebisuzaki, Toshikazu
2017-10-01
This paper presents a fundamental theory of nanobubble formation and induced force in confined nanochannels. It is shown that nanobubble formation between hydrophobic plates can be predicted from their surface tension and geometry, with estimated values for the surface free energy and the force acting on the plates in good agreement with the results of molecular dynamics simulation and experimentation. When a bubble forms between two plates, it exerts a vertical attractive force on the plates and, when the plates are laterally shifted, a horizontal restoring force. The net force exerted on the plates does not depend on the distance between them. The short-range force between hydrophobic surfaces due to hydrophobic interaction appears to correspond to the force estimated by our theory. We compared experimental and theoretical values for the binding energy of a molecular motor system to validate our theory. The tendency of the binding energy to increase with protein size is consistent with the theory.
Statistical Indexes for Monitoring Item Behavior under Computer Adaptive Testing Environment.
ERIC Educational Resources Information Center
Zhu, Renbang; Yu, Feng; Liu, Su
A computerized adaptive test (CAT) administration usually requires a large supply of items with accurately estimated psychometric properties, such as item response theory (IRT) parameter estimates, to ensure the precision of examinee ability estimation. However, an estimated IRT model of a given item in any given pool does not always correctly…
ERIC Educational Resources Information Center
Zhang, Jinming; Lu, Ting
2007-01-01
In practical applications of item response theory (IRT), item parameters are usually estimated first from a calibration sample. After treating these estimates as fixed and known, ability parameters are then estimated. However, the statistical inferences based on the estimated abilities can be misleading if the uncertainty of the item parameter…
An Evaluation of Empirical Bayes's Estimation of Value-Added Teacher Performance Measures
ERIC Educational Resources Information Center
Guarino, Cassandra M.; Maxfield, Michelle; Reckase, Mark D.; Thompson, Paul N.; Wooldridge, Jeffrey M.
2015-01-01
Empirical Bayes's (EB) estimation has become a popular procedure used to calculate teacher value added, often as a way to make imprecise estimates more reliable. In this article, we review the theory of EB estimation and use simulated and real student achievement data to study the ability of EB estimators to properly rank teachers. We compare the…
Asteroid orbital error analysis: Theory and application
NASA Technical Reports Server (NTRS)
Muinonen, K.; Bowell, Edward
1992-01-01
We present a rigorous Bayesian theory for asteroid orbital error estimation in which the probability density of the orbital elements is derived from the noise statistics of the observations. For Gaussian noise in a linearized approximation the probability density is also Gaussian, and the errors of the orbital elements at a given epoch are fully described by the covariance matrix. The law of error propagation can then be applied to calculate past and future positional uncertainty ellipsoids (Cappellari et al. 1976, Yeomans et al. 1987, Whipple et al. 1991). To our knowledge, this is the first time a Bayesian approach has been formulated for orbital element estimation. In contrast to the classical Fisherian school of statistics, the Bayesian school allows a priori information to be formally present in the final estimation. However, Bayesian estimation does give the same results as Fisherian estimation when no a priori information is assumed (Lehtinen 1988, and references therein).
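The "law of error propagation" step mentioned above is, in the linearized Gaussian case, the usual covariance mapping (standard notation, not a formula quoted from the paper):

\[
C(t) \;=\; \Phi(t,t_0)\, C(t_0)\, \Phi(t,t_0)^{\mathsf T},
\]

where C(t_0) is the covariance matrix of the orbital elements at the epoch and Φ(t, t_0) is the matrix of partial derivatives mapping element errors at t_0 into errors (for example, in position) at t; the uncertainty ellipsoid at t is the corresponding confidence region of the Gaussian with covariance C(t).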
[Theory, method and application of method R on estimation of (co)variance components].
Liu, Wen-Zhong
2004-07-01
The theory, method, and application of Method R for estimating (co)variance components are reviewed so that the method can be used appropriately. Estimation requires R values, which are regressions of predicted random effects calculated from the complete dataset on predicted random effects calculated from random subsets of the same data. By using a multivariate iteration algorithm based on a transformation matrix, combined with the preconditioned conjugate gradient method to solve the mixed model equations, the computational efficiency of Method R is much improved. Method R is computationally inexpensive, and the sampling errors and approximate credible intervals of estimates can be obtained. Disadvantages of Method R include a larger sampling variance than other methods for the same data and biased estimates in small datasets. Method R is therefore best used as an alternative method for large datasets. It is necessary to study its theoretical properties and broaden its application range further.
Estimating pore-space gas hydrate saturations from well log acoustic data
NASA Astrophysics Data System (ADS)
Lee, Myung W.; Waite, William F.
2008-07-01
Relating pore-space gas hydrate saturation to sonic velocity data is important for remotely estimating gas hydrate concentration in sediment. In the present study, sonic velocities of gas hydrate-bearing sands are modeled using a three-phase Biot-type theory in which sand, gas hydrate, and pore fluid form three homogeneous, interwoven frameworks. This theory is developed using well log compressional and shear wave velocity data from the Mallik 5L-38 permafrost gas hydrate research well in Canada and applied to well log data from hydrate-bearing sands in the Alaskan permafrost, Gulf of Mexico, and northern Cascadia margin. Velocity-based gas hydrate saturation estimates are in good agreement with nuclear magnetic resonance and resistivity log estimates over the complete range of observed gas hydrate saturations.
Estimating pore-space gas hydrate saturations from well log acoustic data
Lee, Myung W.; Waite, William F.
2008-01-01
Relating pore-space gas hydrate saturation to sonic velocity data is important for remotely estimating gas hydrate concentration in sediment. In the present study, sonic velocities of gas hydrate–bearing sands are modeled using a three-phase Biot-type theory in which sand, gas hydrate, and pore fluid form three homogeneous, interwoven frameworks. This theory is developed using well log compressional and shear wave velocity data from the Mallik 5L-38 permafrost gas hydrate research well in Canada and applied to well log data from hydrate-bearing sands in the Alaskan permafrost, Gulf of Mexico, and northern Cascadia margin. Velocity-based gas hydrate saturation estimates are in good agreement with nuclear magnetic resonance and resistivity log estimates over the complete range of observed gas hydrate saturations.
Electrostatic Estimation of Intercalant Jump-Diffusion Barriers Using Finite-Size Ion Models.
Zimmermann, Nils E R; Hannah, Daniel C; Rong, Ziqin; Liu, Miao; Ceder, Gerbrand; Haranczyk, Maciej; Persson, Kristin A
2018-02-01
We report on a scheme for estimating intercalant jump-diffusion barriers that are typically obtained from demanding density functional theory-nudged elastic band calculations. The key idea is to relax a chain of states in the field of the electrostatic potential that is averaged over a spherical volume using different finite-size ion models. For magnesium migrating in typical intercalation materials such as transition-metal oxides, we find that the optimal model is a relatively large shell. This data-driven result parallels typical assumptions made in models based on Onsager's reaction field theory to quantitatively estimate electrostatic solvent effects. Because of its efficiency, our potential of electrostatics-finite ion size (PfEFIS) barrier estimation scheme will enable rapid identification of materials with good ionic mobility.
Time-dependence of graph theory metrics in functional connectivity analysis
Chiang, Sharon; Cassese, Alberto; Guindani, Michele; Vannucci, Marina; Yeh, Hsiang J.; Haneef, Zulfi; Stern, John M.
2016-01-01
Brain graphs provide a useful way to computationally model the network structure of the connectome, and this has led to increasing interest in the use of graph theory to quantitate and investigate the topological characteristics of the healthy brain and brain disorders on the network level. The majority of graph theory investigations of functional connectivity have relied on the assumption of temporal stationarity. However, recent evidence increasingly suggests that functional connectivity fluctuates over the length of the scan. In this study, we investigate the stationarity of brain network topology using a Bayesian hidden Markov model (HMM) approach that estimates the dynamic structure of graph theoretical measures of whole-brain functional connectivity. In addition to extracting the stationary distribution and transition probabilities of commonly employed graph theory measures, we propose two estimators of temporal stationarity: the S-index and N-index. These indexes can be used to quantify different aspects of the temporal stationarity of graph theory measures. We apply the method and proposed estimators to resting-state functional MRI data from healthy controls and patients with temporal lobe epilepsy. Our analysis shows that several graph theory measures, including small-world index, global integration measures, and betweenness centrality, may exhibit greater stationarity over time and therefore be more robust. Additionally, we demonstrate that accounting for subject-level differences in the level of temporal stationarity of network topology may increase discriminatory power in discriminating between disease states. Our results confirm and extend findings from other studies regarding the dynamic nature of functional connectivity, and suggest that using statistical models which explicitly account for the dynamic nature of functional connectivity in graph theory analyses may improve the sensitivity of investigations and consistency across investigations. PMID:26518632
Time-dependence of graph theory metrics in functional connectivity analysis.
Chiang, Sharon; Cassese, Alberto; Guindani, Michele; Vannucci, Marina; Yeh, Hsiang J; Haneef, Zulfi; Stern, John M
2016-01-15
Brain graphs provide a useful way to computationally model the network structure of the connectome, and this has led to increasing interest in the use of graph theory to quantitate and investigate the topological characteristics of the healthy brain and brain disorders on the network level. The majority of graph theory investigations of functional connectivity have relied on the assumption of temporal stationarity. However, recent evidence increasingly suggests that functional connectivity fluctuates over the length of the scan. In this study, we investigate the stationarity of brain network topology using a Bayesian hidden Markov model (HMM) approach that estimates the dynamic structure of graph theoretical measures of whole-brain functional connectivity. In addition to extracting the stationary distribution and transition probabilities of commonly employed graph theory measures, we propose two estimators of temporal stationarity: the S-index and N-index. These indexes can be used to quantify different aspects of the temporal stationarity of graph theory measures. We apply the method and proposed estimators to resting-state functional MRI data from healthy controls and patients with temporal lobe epilepsy. Our analysis shows that several graph theory measures, including small-world index, global integration measures, and betweenness centrality, may exhibit greater stationarity over time and therefore be more robust. Additionally, we demonstrate that accounting for subject-level differences in the level of temporal stationarity of network topology may increase discriminatory power in discriminating between disease states. Our results confirm and extend findings from other studies regarding the dynamic nature of functional connectivity, and suggest that using statistical models which explicitly account for the dynamic nature of functional connectivity in graph theory analyses may improve the sensitivity of investigations and consistency across investigations. Copyright © 2015 Elsevier Inc. All rights reserved.
The Efficacy of the Theory of Reasoned Action to Explain Gambling Behavior in College Students
ERIC Educational Resources Information Center
Thrasher, Robert G.; Andrew, Damon P. S.; Mahony, Daniel F.
2007-01-01
Shaffer and Hall (1997) have estimated college student gambling to be three times as high as that of their adult counterparts. Despite a considerable amount of research on gambling, researchers have struggled to develop a universal theory that explains gambling behavior. This study explored the potential of Ajzen and Fishbein's (1980) Theory of Reasoned…
The Supply and Demand for College Educated Labor.
ERIC Educational Resources Information Center
Nollen, Stanley D.
In this study a model for the supply of college educated labor is developed from human capital theory. A demand model is added, derived from neoclassical production function theory. Empirical estimates are made for white males and white females, using cross-sectional data on states of the U.S., 1960-70. In human capital theory, education is an…
Examination of Different Item Response Theory Models on Tests Composed of Testlets
ERIC Educational Resources Information Center
Kogar, Esin Yilmaz; Kelecioglu, Hülya
2017-01-01
The purpose of this research is to first estimate the item and ability parameters and the standard error values related to those parameters obtained from Unidimensional Item Response Theory (UIRT), bifactor (BIF) and Testlet Response Theory models (TRT) in the tests including testlets, when the number of testlets, number of independent items, and…
A Survey of Methods for Computing Best Estimates of Endoatmospheric and Exoatmospheric Trajectories
NASA Technical Reports Server (NTRS)
Bernard, William P.
2018-01-01
Beginning with the mathematical prediction of planetary orbits in the early seventeenth century up through the most recent developments in sensor fusion methods, many techniques have emerged that can be employed on the problem of endo- and exoatmospheric trajectory estimation. Although early methods were ad hoc, the twentieth century saw the emergence of many systematic approaches to estimation theory that produced a wealth of useful techniques. The broad genesis of estimation theory has resulted in an equally broad array of mathematical principles, methods and vocabulary. Among the fundamental ideas and methods that are briefly touched on are batch and sequential processing, smoothing, estimation, and prediction, sensor fusion, sensor fusion architectures, data association, Bayesian and non-Bayesian filtering, the family of Kalman filters, models of the dynamics of the phases of a rocket's flight, and asynchronous, delayed, and asequent data. Along the way, a few trajectory estimation issues are addressed and much of the vocabulary is defined.
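As a concrete anchor for the "family of Kalman filters" mentioned in the survey, here is a minimal linear Kalman filter for a one-dimensional constant-velocity trajectory model; the noise levels and measurement model are assumed for illustration and are not taken from the survey.

# Hedged sketch: one predict/update cycle of a linear Kalman filter for a
# 1-D constant-velocity trajectory model (state = [position, velocity]).
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])        # state transition
H = np.array([[1.0, 0.0]])                   # we observe position only
Q = 1e-3 * np.eye(2)                         # process-noise covariance (assumed)
R = np.array([[0.5]])                        # measurement-noise covariance (assumed)

def kalman_step(x, P, z):
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update
    y = z - H @ x_pred                       # innovation
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

x, P = np.array([0.0, 0.0]), np.eye(2)
rng = np.random.default_rng(1)
for t in range(1, 20):
    z = np.array([2.0 * t + rng.normal(0, 0.7)])   # noisy position-like measurement
    x, P = kalman_step(x, P, z)
print("estimated position, velocity:", x)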
Extending birthday paradox theory to estimate the number of tags in RFID systems.
Shakiba, Masoud; Singh, Mandeep Jit; Sundararajan, Elankovan; Zavvari, Azam; Islam, Mohammad Tariqul
2014-01-01
The main objective of Radio Frequency Identification systems is to provide fast identification for tagged objects. However, there is always a chance of collision, when tags transmit their data to the reader simultaneously. Collision is a time-consuming event that reduces the performance of RFID systems. Consequently, several anti-collision algorithms have been proposed in the literature. Dynamic Framed Slotted ALOHA (DFSA) is one of the most popular of these algorithms. DFSA dynamically modifies the frame size based on the number of tags. Since the real number of tags is unknown, it needs to be estimated. Therefore, an accurate tag estimation method has an important role in increasing the efficiency and overall performance of the tag identification process. In this paper, we propose a novel estimation technique for DFSA anti-collision algorithms that applies birthday paradox theory to estimate the number of tags accurately. The analytical discussion and simulation results prove that the proposed method increases the accuracy of tag estimation and, consequently, outperforms previous schemes.
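The following sketch shows the general shape of slot-statistics tag estimation in DFSA: given the frame size and the observed counts of empty, singly occupied, and collision slots, pick the tag count whose expected slot counts (under uniform random slot choice) are closest. This illustrates the estimation step generically; it is not the paper's birthday-paradox formula.

# Hedged sketch: estimate the number of tags n in a DFSA frame of size L from
# observed empty/success/collision slot counts, by matching expected counts
# under uniform random slot selection.
import numpy as np

def expected_slots(n, L):
    p_empty = (1 - 1.0 / L) ** n
    p_single = n / L * (1 - 1.0 / L) ** (n - 1)
    p_coll = 1.0 - p_empty - p_single
    return L * np.array([p_empty, p_single, p_coll])

def estimate_tags(observed, L, n_max=1000):
    """observed = (empty, success, collision) counts; pick n minimizing the distance."""
    obs = np.asarray(observed, dtype=float)
    errs = [np.sum((expected_slots(n, L) - obs) ** 2) for n in range(1, n_max + 1)]
    return int(np.argmin(errs)) + 1

# Example: frame of 64 slots with 10 empty, 30 singly occupied, 24 collision slots.
print(estimate_tags((10, 30, 24), L=64))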
Extending Birthday Paradox Theory to Estimate the Number of Tags in RFID Systems
Shakiba, Masoud; Singh, Mandeep Jit; Sundararajan, Elankovan; Zavvari, Azam; Islam, Mohammad Tariqul
2014-01-01
The main objective of Radio Frequency Identification systems is to provide fast identification for tagged objects. However, there is always a chance of collision, when tags transmit their data to the reader simultaneously. Collision is a time-consuming event that reduces the performance of RFID systems. Consequently, several anti-collision algorithms have been proposed in the literature. Dynamic Framed Slotted ALOHA (DFSA) is one of the most popular of these algorithms. DFSA dynamically modifies the frame size based on the number of tags. Since the real number of tags is unknown, it needs to be estimated. Therefore, an accurate tag estimation method has an important role in increasing the efficiency and overall performance of the tag identification process. In this paper, we propose a novel estimation technique for DFSA anti-collision algorithms that applies birthday paradox theory to estimate the number of tags accurately. The analytical discussion and simulation results prove that the proposed method increases the accuracy of tag estimation and, consequently, outperforms previous schemes. PMID:24752285
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dey, Rik, E-mail: rikdey@utexas.edu; Pramanik, Tanmoy; Roy, Anupam
We have studied the angle-dependent magnetoresistance of a Bi2Te3 thin film with fields up to 9 T over temperatures of 2–20 K. The perpendicular-field magnetoresistance has been explained by the Hikami-Larkin-Nagaoka theory alone in a system with strong spin-orbit coupling, from which we have estimated the mean free path, the phase coherence length, and the spin-orbit relaxation time. We have obtained the out-of-plane spin-orbit relaxation time to be small and the in-plane spin-orbit relaxation time to be comparable to the momentum relaxation time. These estimates of the charge and spin transport parameters are useful for spintronics applications. For parallel-field magnetoresistance, we have confirmed the presence of the Zeeman effect, which is otherwise suppressed in perpendicular-field magnetoresistance due to strong spin-orbit coupling. The parallel-field data have been explained using contributions from both the Maekawa-Fukuyama localization theory for non-interacting electrons and the Lee-Ramakrishnan theory of electron-electron interactions. The estimated Zeeman g-factor and the strength of the Coulomb screening parameter agree well with the theory. Finally, the anisotropy in magnetoresistance with respect to angle has been described by the Hikami-Larkin-Nagaoka theory. This anisotropy can be used in anisotropic magnetic sensor applications.
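For reference, the Hikami-Larkin-Nagaoka magnetoconductance correction in the single-channel form commonly fitted to perpendicular-field data is shown below; the paper may use a variant, and the parallel-field Maekawa-Fukuyama and Zeeman terms are not shown:

\[
\Delta\sigma(B) \;=\; \alpha\,\frac{e^{2}}{2\pi^{2}\hbar}\left[\psi\!\left(\frac{1}{2}+\frac{B_{\phi}}{B}\right)-\ln\frac{B_{\phi}}{B}\right],
\qquad
B_{\phi}=\frac{\hbar}{4e\,\ell_{\phi}^{2}},
\]

where ψ is the digamma function, ℓ_φ is the phase-coherence length, and α ≈ −1/2 per coherent channel in the strong spin-orbit (weak-antilocalization) limit.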
Klijs, Bart; Mackenbach, Johan P; Kunst, Anton E
2011-04-01
Projections of future trends in the burden of disability could be guided by models linking disability to life expectancy, such as the dynamic equilibrium theory. This article tests the key assumption of this theory that severe disability is associated with proximity to death, whereas mild disability is not. Using data from the GLOBE study (Gezondheid en Levensomstandigheden Bevolking Eindhoven en omstreken), the association of three levels of self-reported disabilities in activities of daily living with age and proximity to death was studied using logistic regression models. Regression estimates were used to estimate the number of life years with disability for life spans of 75 and 85 years. Odds ratios of 0.976 (not significant) for mild disability, 1.137 for moderate disability, and 1.231 for severe disability showed a stronger effect of proximity to death for more severe levels of disability. A 10-year increase of life span was estimated to result in a substantial expansion of mild disability (4.6 years) compared with a small expansion of moderate (0.7 years) and severe (0.9 years) disability. These findings support the theory of a dynamic equilibrium. Projections of the future burden of disability could be substantially improved by connecting to this theory and incorporating information on proximity to death. Copyright © 2011 Elsevier Inc. All rights reserved.
Flight Mechanics/Estimation Theory Symposium 1988
NASA Technical Reports Server (NTRS)
Stengle, Thomas (Editor)
1988-01-01
This conference publication includes 28 papers and abstracts presented at the Flight Mechanics/Estimation Theory Symposium on May 10 to 11, 1988. Sponsored by the Flight Dynamics Division of Goddard Space Flight Center, this symposium featured technical papers on a wide range of issues related to orbit-attitude prediction, determination and control; attitude sensor calibration; attitude determination error analysis; attitude dynamics; and orbit decay and maneuver strategy. Government, industry, and the academic community participated in the preparation and presentation of these papers.
1994-02-15
O. Faugeras. Three dimensional vision, a geometric viewpoint. MIT Press, 1993. [19] O. D. Faugeras and S. Maybank. Motion from point matches: multiplicity of solutions. Int. J. of Computer Vision, 1990. [20] O. D. Faugeras, Q. T. Luong, and S. J. Maybank. Camera self-calibration: theory and... Kalman filter-based algorithms for estimating depth from image sequences. Int. J. of Computer Vision, 1989. [41] S. Maybank. Theory of
ERIC Educational Resources Information Center
Kim, Sooyeon; Moses, Tim
2016-01-01
The purpose of this study is to evaluate the extent to which item response theory (IRT) proficiency estimation methods are robust to the presence of aberrant responses under the "GRE"® General Test multistage adaptive testing (MST) design. To that end, a wide range of atypical response behaviors affecting as much as 10% of the test items…
Designing Estimator/Predictor Digital Phase-Locked Loops
NASA Technical Reports Server (NTRS)
Statman, J. I.; Hurd, W. J.
1988-01-01
Signal delays in equipment compensated automatically. New approach to design of digital phase-locked loop (DPLL) incorporates concepts from estimation theory and involves decomposition of closed-loop transfer function into estimator and predictor. Estimator provides recursive estimates of phase, frequency, and higher order derivatives of phase with respect to time, while predictor compensates for delay, called "transport lag," caused by PLL equipment and by DPLL computations.
An estimator-predictor approach to PLL loop filter design
NASA Technical Reports Server (NTRS)
Statman, Joseph I.; Hurd, William J.
1990-01-01
The design of digital phase locked loops (DPLL) using estimation theory concepts in the selection of a loop filter is presented. The key concept, that the DPLL closed-loop transfer function is decomposed into an estimator and a predictor, is discussed. The estimator provides recursive estimates of phase, frequency, and higher-order derivatives, and the predictor compensates for the transport lag inherent in the loop.
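A minimal sketch of the estimator/predictor idea described in these two abstracts: an alpha-beta style recursive estimator tracks phase and frequency from the reconstructed phase measurement, and a predictor advances the commanded phase by the known transport lag. Gains, the lag, and the update interval are assumed values, not the papers' design.

# Hedged sketch of an estimator/predictor digital PLL.
from collections import deque

T = 1.0e-3                 # loop update interval [s] (assumed)
lag = 3                    # transport lag, in updates (assumed)
alpha, beta = 0.1, 0.02    # estimator (alpha-beta) gains (assumed)

true_freq = 5.0            # rad/s
true_phase = 0.0
phase_est, freq_est = 0.0, 0.0
pipeline = deque([0.0] * lag)   # commands in flight through the transport lag

for k in range(3000):
    true_phase += true_freq * T
    applied = pipeline.popleft()            # NCO phase actually in effect now
    err = true_phase - applied              # phase detector output
    meas = applied + err                    # reconstructed phase measurement
    # Estimator: alpha-beta (phase/frequency) recursion on the measurement
    pred = phase_est + freq_est * T
    resid = meas - pred
    phase_est = pred + alpha * resid
    freq_est += (beta / T) * resid
    # Predictor: command leads the current estimate by the transport lag
    pipeline.append(phase_est + lag * T * freq_est)

print(f"frequency estimate = {freq_est:.2f} rad/s (true 5.0)")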
Nonlinear Statistical Estimation with Numerical Maximum Likelihood
1974-10-01
probably most directly attributable to the speed, precision, and compactness of the linear programming algorithm exercised; the mutual primal-dual... discriminant analysis is to classify the individual as a member of π1 or π2 according to the relative... [Contents fragment: Introduction to the Dissertation; Introduction to Statistical Estimation Theory; Choice of Estimator... Density Functions; Choice of Estimator]
A Short Note on Estimating the Testlet Model with Different Estimators in Mplus
ERIC Educational Resources Information Center
Luo, Yong
2018-01-01
Mplus is a powerful latent variable modeling software program that has become an increasingly popular choice for fitting complex item response theory models. In this short note, we demonstrate that the two-parameter logistic testlet model can be estimated as a constrained bifactor model in Mplus with three estimators encompassing limited- and…
Standard Errors of Estimated Latent Variable Scores with Estimated Structural Parameters
ERIC Educational Resources Information Center
Hoshino, Takahiro; Shigemasu, Kazuo
2008-01-01
The authors propose a concise formula to evaluate the standard error of the estimated latent variable score when the true values of the structural parameters are not known and must be estimated. The formula can be applied to factor scores in factor analysis or ability parameters in item response theory, without bootstrap or Markov chain Monte…
Using IRT Trait Estimates versus Summated Scores in Predicting Outcomes
ERIC Educational Resources Information Center
Xu, Ting; Stone, Clement A.
2012-01-01
It has been argued that item response theory trait estimates should be used in analyses rather than number right (NR) or summated scale (SS) scores. Thissen and Orlando postulated that IRT scaling tends to produce trait estimates that are linearly related to the underlying trait being measured. Therefore, IRT trait estimates can be more useful…
John B. Loomis; George Peterson; Patricia A. Champ; Thomas C. Brown; Beatrice Lucero
1998-01-01
Estimating empirical measures of an individual's willingness to accept that are consistent with conventional economic theory has proven difficult. The method of paired comparison offers a promising approach to estimate willingness to accept. This method involves having individuals make binary choices between receiving a particular good or a sum of money....
ERIC Educational Resources Information Center
Lee, Soo; Suh, Youngsuk
2018-01-01
Lord's Wald test for differential item functioning (DIF) has not been studied extensively in the context of the multidimensional item response theory (MIRT) framework. In this article, Lord's Wald test was implemented using two estimation approaches, marginal maximum likelihood estimation and Bayesian Markov chain Monte Carlo estimation, to detect…
Decoherence estimation in quantum theory and beyond
NASA Astrophysics Data System (ADS)
Pfister, Corsin
The quantum physics literature provides many different characterizations of decoherence. Most of them have in common that they describe decoherence as a kind of influence on a quantum system upon interacting with another system. In the spirit of quantum information theory, we adopt a particular viewpoint on decoherence which describes it as the loss of information into a system that is possibly controlled by an adversary. We use a quantitative framework for decoherence that builds on operational characterizations of the min-entropy that have been developed in the quantum information literature. It characterizes decoherence as an influence on quantum channels that reduces their suitability for a variety of quantifiable tasks such as the distribution of secret cryptographic keys of a certain length or the distribution of a certain number of maximally entangled qubit pairs. This allows for a quantitative and operational characterization of decoherence via operational characterizations of the min-entropy. In this thesis, we present a series of results about the estimation of the min-entropy, subdivided into three parts. The first part concerns the estimation of a quantum adversary's uncertainty about classical information, expressed by the smooth min-entropy, as it is done in protocols for quantum key distribution (QKD). We analyze this form of min-entropy estimation in detail and find that some of the more recently suggested QKD protocols have previously unnoticed security loopholes. We show that the specifics of the sifting subroutine of a QKD protocol are crucial for security by pointing out mistakes in the security analysis in the literature and by presenting eavesdropping attacks on those problematic protocols. We provide solutions to the identified problems and present a formalized analysis of the min-entropy estimate that incorporates the sifting stage of QKD protocols. In the second part, we extend ideas from QKD to a protocol that allows one to estimate an adversary's uncertainty about quantum information, expressed by the fully quantum smooth min-entropy. Roughly speaking, we show that a protocol that resembles the parallel execution of two QKD protocols can be used to lower bound the min-entropy of some unmeasured qubits. We explain how this result may influence the ongoing search for protocols for entanglement distribution. The third part is dedicated to the development of a framework that allows the estimation of decoherence even in experiments that cannot be correctly described by quantum theory. Inspired by an equivalent formulation of the min-entropy that relates it to the fidelity with a maximally entangled state, we define a decoherence quantity for a very general class of probabilistic theories that reduces to the min-entropy in the special case of quantum theory. This entails a definition of maximal entanglement for generalized probabilistic theories. Using techniques from semidefinite and linear programming, we show how bounds on this quantity can be estimated through Bell-type experiments. This allows one to test models for decoherence that cannot be described by quantum theory. As an example application, we devise an experimental test of a model for gravitational decoherence that has been suggested in the literature.
Generalized Redistribute-to-the-Right Algorithm: Application to the Analysis of Censored Cost Data
CHEN, SHUAI; ZHAO, HONGWEI
2013-01-01
Medical cost estimation is a challenging task when censoring of data is present. Although researchers have proposed methods for estimating mean costs, these are often derived from theory and are not always easy to understand. We provide an alternative method, based on a replace-from-the-right algorithm, for estimating mean costs more efficiently. We show that our estimator is equivalent to an existing one that is based on the inverse probability weighting principle and semiparametric efficiency theory. We also propose an alternative method for estimating the survival function of costs, based on the redistribute-to-the-right algorithm, which was originally used for explaining the Kaplan–Meier estimator. We show that this second proposed estimator is equivalent to a simple weighted survival estimator of costs. Finally, we develop a more efficient survival estimator of costs, using the same redistribute-to-the-right principle. This estimator is naturally monotone, more efficient than some existing survival estimators, and has quite a small bias in many realistic settings. We conduct numerical studies to examine the finite sample properties of the survival estimators for costs, and show that our new estimator has small mean squared errors when the sample size is not too large. We apply both existing and new estimators to a data example from a randomized cardiovascular clinical trial. PMID:24403869
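The "simple weighted" estimator that the abstract says the proposal is equivalent to has, as I understand it, the familiar inverse-probability-weighted form; the sketch below implements that generic IPW mean-cost estimator on made-up data and is not the authors' algorithm.

# Hedged sketch of an inverse-probability-weighted (IPW) mean-cost estimator:
#   mean cost = (1/n) * sum_i delta_i * M_i / K_hat(T_i),
# where delta_i = 1 if subject i is uncensored, M_i is accumulated cost, T_i the
# follow-up time, and K_hat the Kaplan-Meier curve of the censoring distribution.
import numpy as np

def km_censoring_survival(times, delta):
    """Kaplan-Meier estimate of the censoring survival K(t), treating censoring
    (delta == 0) as the event. Ties and left-continuity are kept simple here."""
    order = np.argsort(times)
    t_sorted, cens_event = times[order], (delta[order] == 0).astype(float)
    at_risk = len(times) - np.arange(len(times))
    surv = np.cumprod(1.0 - cens_event / at_risk)
    def K(t):
        idx = np.searchsorted(t_sorted, t, side="right") - 1
        return surv[idx] if idx >= 0 else 1.0
    return K

# Illustrative data: follow-up time, event indicator (1 = death observed), cost.
T = np.array([2.0, 3.5, 5.0, 6.0, 8.0, 9.0])
delta = np.array([1, 0, 1, 1, 0, 1])
M = np.array([10.0, 7.0, 20.0, 25.0, 18.0, 40.0])

K = km_censoring_survival(T, delta)
weights = delta / np.array([K(t) for t in T])
print("IPW estimate of mean cost:", np.mean(weights * M))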
Fetterly, Kenneth A; Favazza, Christopher P
2016-08-07
Channelized Hotelling model observer (CHO) methods were developed to assess performance of an x-ray angiography system. The analytical methods included correction for known bias error due to finite sampling. Detectability indices (d′) corresponding to disk-shaped objects with diameters in the range 0.5-4 mm were calculated. Application of the CHO for variable detector target dose (DTD) in the range 6-240 nGy frame⁻¹ resulted in d′ estimates which were as much as 2.9× greater than expected of a quantum limited system. Over-estimation of d′ was presumed to be a result of bias error due to temporally variable non-stationary noise. Statistical theory which allows for independent contributions of 'signal' from a test object (o) and temporally variable non-stationary noise (ns) was developed. The theory demonstrates that the biased d′ is the sum of the detectability indices associated with the test object, d′_o, and non-stationary noise (d′_ns). Given the nature of the imaging system and the experimental methods, d′_o cannot be directly determined independent of d′_ns. However, methods to estimate d′_ns independent of d′_o were developed. In accordance with the theory, d′_ns was subtracted from experimental estimates of d′, providing an unbiased estimate of d′_o. Estimates of d′_o exhibited trends consistent with expectations of an angiography system that is quantum limited for high DTD and compromised by detector electronic readout noise for low DTD conditions. Results suggest that these methods provide d′_o estimates which are accurate and precise for [Formula: see text]. Further, results demonstrated that the source of bias was detector electronic readout noise. In summary, this work presents theory and methods to test for the presence of bias in Hotelling model observers due to temporally variable non-stationary noise and correct this bias when the temporally variable non-stationary noise is independent and additive with respect to the test object signal.
Equal Area Logistic Estimation for Item Response Theory
NASA Astrophysics Data System (ADS)
Lo, Shih-Ching; Wang, Kuo-Chang; Chang, Hsin-Li
2009-08-01
Item response theory (IRT) models use logistic functions exclusively as item response functions (IRFs). Applications of IRT models require obtaining the set of values for logistic function parameters that best fit an empirical data set. However, success in obtaining such a set of values does not guarantee that the constructs they represent actually exist, for the adequacy of a model is not sustained by the possibility of estimating parameters. In this study, an equal-area-based two-parameter logistic model estimation algorithm is proposed. Two theorems are given to prove that the results of the algorithm are equivalent to the results of fitting the data with a logistic model. Numerical results are presented to show the stability and accuracy of the algorithm.
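For orientation, the two-parameter logistic IRF whose parameters are being estimated is

\[
P(\theta; a, b) \;=\; \frac{1}{1+e^{-a(\theta-b)}},
\]

and the "equal area" idea, as I read it, is to choose a and b so that the areas under the fitted IRF and under the empirical response curve agree over subintervals of the observed ability range (for example, the lower and upper halves), rather than by maximizing a likelihood; the paper's two theorems establishing equivalence with the logistic fit are not reproduced here.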
Value-at-risk estimation with wavelet-based extreme value theory: Evidence from emerging markets
NASA Astrophysics Data System (ADS)
Cifter, Atilla
2011-06-01
This paper introduces wavelet-based extreme value theory (EVT) for univariate value-at-risk estimation. Wavelets and EVT are combined for volatility forecasting to estimate a hybrid model. In the first stage, wavelets are used as a threshold in the generalized Pareto distribution, and in the second stage, EVT is applied with a wavelet-based threshold. This new model is applied to two major emerging stock markets: the Istanbul Stock Exchange (ISE) and the Budapest Stock Exchange (BUX). The relative performance of wavelet-based EVT is benchmarked against the Riskmetrics-EWMA, ARMA-GARCH, generalized Pareto distribution, and conditional generalized Pareto distribution models. The empirical results show that the wavelet-based extreme value theory increases the predictive performance of financial forecasting according to the number of violations and tail-loss tests. The superior forecasting performance of the wavelet-based EVT model is also consistent with Basel II requirements, and this new model can be used by financial institutions as well.
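The peaks-over-threshold step underlying the EVT stage can be sketched as follows: fit a generalized Pareto distribution to loss exceedances over a threshold and invert the tail formula for VaR. Here the threshold is a plain empirical quantile; the wavelet-based threshold that distinguishes the paper's hybrid model is not reproduced, and the loss series is synthetic.

# Hedged sketch of the peaks-over-threshold (POT) value-at-risk estimator.
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(0)
losses = rng.standard_t(df=4, size=2000) * 0.01      # illustrative daily losses

u = np.quantile(losses, 0.95)                        # threshold (assumed choice)
exceed = losses[losses > u] - u
xi, loc, sigma = genpareto.fit(exceed, floc=0.0)     # shape xi, scale sigma

def var_pot(p, n=len(losses), n_u=len(exceed)):
    """VaR at confidence level p from the POT tail estimator."""
    return u + (sigma / xi) * ((n / n_u * (1 - p)) ** (-xi) - 1.0)

print(f"99% VaR estimate: {var_pot(0.99):.4f}")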
Turbulent Reconnection Rates from Cluster Observations in the Magnetosheath
NASA Technical Reports Server (NTRS)
Wendel, Deirdre
2011-01-01
The role of turbulence in producing fast reconnection rates is an important unresolved question. Scant in situ analyses exist. We apply multiple spacecraft techniques to a case of nonlinear turbulent reconnection in the magnetosheath to test various theoretical results for turbulent reconnection rates. To date, in situ estimates of the contribution of turbulence to reconnection rates have been calculated from an effective electric field derived through linear wave theory. However, estimates of reconnection rates based on fully nonlinear turbulence theories and simulations exist that are amenable to multiple spacecraft analyses. Here we present the linear and nonlinear theories and apply some of the nonlinear rates to Cluster observations of reconnecting, turbulent current sheets in the magnetosheath. We compare the results to the net reconnection rate found from the inflow speed. Ultimately, we intend to test and compare linear and nonlinear estimates of the turbulent contribution to reconnection rates and to measure the relative contributions of turbulence and the Hall effect.
Turbulent Reconnection Rates from Cluster Observations in the Magnetosheath
NASA Technical Reports Server (NTRS)
Wendel, Deirdre
2011-01-01
The role of turbulence in producing fast reconnection rates is an important unresolved question. Scant in situ analyses exist. We apply multiple spacecraft techniques to a case of nonlinear turbulent reconnection in the magnetosheath to test various theoretical results for turbulent reconnection rates. To date, in situ estimates of the contribution of turbulence to reconnection rates have been calculated from an effective electric field derived through linear wave theory. However, estimates of reconnection rates based on fully nonlinear turbulence theories and simulations exist that are amenable to multiple spacecraft analyses. Here we present the linear and nonlinear theories and apply some of the nonlinear rates to Cluster observations of reconnecting, turbulent current sheets in the magnetosheath. We compare the results to the net reconnection rate found from the inflow speed. Ultimately, we intend to test and compare linear and nonlinear estimates of the turbulent contribution to reconnection rates and to measure the relative contributions of turbulence and the Hall effect.
Monte Carlo Perturbation Theory Estimates of Sensitivities to System Dimensions
Burke, Timothy P.; Kiedrowski, Brian C.
2017-12-11
Here, Monte Carlo methods are developed using adjoint-based perturbation theory and the differential operator method to compute the sensitivities of the k-eigenvalue, linear functions of the flux (reaction rates), and bilinear functions of the forward and adjoint flux (kinetics parameters) to system dimensions for uniform expansions or contractions. The calculation of sensitivities to system dimensions requires computing scattering and fission sources at material interfaces using collisions occurring at the interface—which is a set of events with infinitesimal probability. Kernel density estimators are used to estimate the source at interfaces using collisions occurring near the interface. The methods for computing sensitivities of linear and bilinear ratios are derived using the differential operator method and adjoint-based perturbation theory and are shown to be equivalent to methods previously developed using a collision history–based approach. The methods for determining sensitivities to system dimensions are tested on a series of fast, intermediate, and thermal critical benchmarks as well as a pressurized water reactor benchmark problem with iterated fission probability used for adjoint-weighting. The estimators are shown to agree within 5% and 3σ of reference solutions obtained using direct perturbations with central differences for the majority of test problems.
Script-theory virtual case: A novel tool for education and research.
Hayward, Jake; Cheung, Amandy; Velji, Alkarim; Altarejos, Jenny; Gill, Peter; Scarfe, Andrew; Lewis, Melanie
2016-11-01
Context/Setting: The script theory of diagnostic reasoning proposes that clinicians evaluate cases in the context of an "illness script," iteratively testing internal hypotheses against new information, eventually reaching a diagnosis. We present a novel tool for teaching diagnostic reasoning to undergraduate medical students based on an adaptation of script theory. We developed a virtual patient case that used clinically authentic audio and video, interactive three-dimensional (3D) body images, and a simulated electronic medical record. Next, we used interactive slide bars to record respondents' likelihood estimates of diagnostic possibilities at various stages of the case. Responses were dynamically compared to data from expert clinicians and peers. Comparative frequency distributions were presented to the learner, and final diagnostic likelihood estimates were analyzed. Detailed student feedback was collected. Over two academic years, 322 students participated. Student diagnostic likelihood estimates were similar year to year, but were consistently different from expert clinician estimates. Student feedback was overwhelmingly positive: students found the case novel, innovative, clinically authentic, and a valuable learning experience. We demonstrate the successful implementation of a novel approach to teaching diagnostic reasoning. Future study may delineate reasoning processes associated with differences between novice and expert responses.
A random walk rule for phase I clinical trials.
Durham, S D; Flournoy, N; Rosenberger, W F
1997-06-01
We describe a family of random walk rules for the sequential allocation of dose levels to patients in a dose-response study, or phase I clinical trial. Patients are sequentially assigned the next higher, same, or next lower dose level according to some probability distribution, which may be determined by ethical considerations as well as the patient's response. It is shown that one can choose these probabilities in order to center dose level assignments unimodally around any target quantile of interest. Estimation of the quantile is discussed; the maximum likelihood estimator and its variance are derived under a two-parameter logistic distribution, and the maximum likelihood estimator is compared with other nonparametric estimators. Random walk rules have clear advantages: they are simple to implement, and finite and asymptotic distribution theory is completely worked out. For a specific random walk rule, we compute finite and asymptotic properties and give examples of its use in planning studies. Having the finite distribution theory available and tractable obviates the need for elaborate simulation studies to analyze the properties of the design. The small sample properties of our rule, as determined by exact theory, compare favorably to those of the continual reassessment method, determined by simulation.
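One member of this family of designs, as commonly described, is the biased-coin up-and-down rule targeting a toxicity quantile Γ ≤ 0.5: step down after an observed toxicity, and after a non-toxicity step up with probability Γ/(1−Γ), otherwise stay. The sketch below simulates allocations under assumed dose-toxicity probabilities; it is illustrative rather than a reproduction of the paper's rules or estimators.

# Hedged sketch of a biased-coin up-and-down (random walk) dose-allocation rule.
import numpy as np

rng = np.random.default_rng(42)
tox_prob = np.array([0.05, 0.10, 0.20, 0.33, 0.50, 0.70])   # per dose level (assumed)
gamma = 0.33                                                 # target toxicity quantile
b = gamma / (1.0 - gamma)                                    # biased-coin probability

level, visits = 0, np.zeros(len(tox_prob), dtype=int)
for patient in range(200):
    visits[level] += 1
    toxicity = rng.random() < tox_prob[level]
    if toxicity:
        level = max(level - 1, 0)                            # step down after toxicity
    elif rng.random() < b:
        level = min(level + 1, len(tox_prob) - 1)            # step up with probability b
    # otherwise stay at the same level

print("allocation counts per dose level:", visits)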
Study on paddy rice yield estimation based on multisource data and the Grey system theory
NASA Astrophysics Data System (ADS)
Deng, Wensheng; Wang, Wei; Liu, Hai; Li, Chen; Ge, Yimin; Zheng, Xianghua
2009-10-01
Paddy rice is an important crop. In studies of paddy rice yield estimation, scholars have usually taken only remote sensing data or only meteorological data as the influencing factors; here we combine remote sensing and meteorological data to bring the monitoring result closer to reality. Although grey system theory has been used in many fields, it has rarely been applied to paddy rice yield estimation. This study introduces it to paddy rice yield estimation and builds a yield estimation model, which can address the small-sample problem that deterministic models cannot solve. Several regions in the Jianghan Plain are selected as the study area. The data include multi-temporal remote sensing images, meteorological data, and statistical data. The remote sensing data are the 16-day composite images (250-m spatial resolution) of MODIS. The meteorological data include monthly average temperature, sunshine duration, and rainfall amount. The statistical data are the long-term paddy rice yields of the study area. First, the paddy rice planting area is extracted from the multi-temporal MODIS images with the help of GIS and RS. Then, taking the paddy rice yield as the reference sequence and the MODIS and meteorological data as the comparative sequences, the grey relational coefficients are computed and the yield estimation factors are selected based on grey system theory. Finally, using these factors, the yield estimation model is established and tested. The results indicate that the method is feasible and the conclusions are credible. It can provide a scientific method and a reference for regional paddy rice yield estimation by remote sensing.
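The grey relational screening step described above can be sketched as follows, with the normalized yield series as the reference sequence and each candidate factor (for example, a MODIS vegetation index or a meteorological series) as a comparative sequence; the coefficient uses the common resolution parameter ρ = 0.5. The toy series are illustrative, not the study's data, and the subsequent grey yield-estimation model is not shown.

# Hedged sketch of grey relational analysis for factor selection:
#   xi(k) = (d_min + rho * d_max) / (|x0(k) - xi(k)| + rho * d_max),
# with the grade of each factor given by the mean coefficient over k.
import numpy as np

def grey_relational_grades(reference, factors, rho=0.5):
    norm = lambda x: (x - x.min()) / np.ptp(x)
    x0 = norm(reference)
    diffs = np.array([np.abs(x0 - norm(x)) for x in factors])
    d_min, d_max = diffs.min(), diffs.max()          # global two-level min/max
    coef = (d_min + rho * d_max) / (diffs + rho * d_max)
    return coef.mean(axis=1)                         # grey relational grades

yield_series = np.array([6.1, 6.4, 6.0, 6.8, 7.0, 6.6])     # t/ha (illustrative)
ndvi = np.array([0.62, 0.66, 0.60, 0.71, 0.73, 0.68])
rainfall = np.array([120., 90., 150., 110., 100., 130.])     # mm
print(grey_relational_grades(yield_series, [ndvi, rainfall]))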
Item Response Theory and Health Outcomes Measurement in the 21st Century
Hays, Ron D.; Morales, Leo S.; Reise, Steve P.
2006-01-01
Item response theory (IRT) has a number of potential advantages over classical test theory in assessing self-reported health outcomes. IRT models yield invariant item and latent trait estimates (within a linear transformation), standard errors conditional on trait level, and trait estimates anchored to item content. IRT also facilitates evaluation of differential item functioning, inclusion of items with different response formats in the same scale, and assessment of person fit and is ideally suited for implementing computer adaptive testing. Finally, IRT methods can be helpful in developing better health outcome measures and in assessing change over time. These issues are reviewed, along with a discussion of some of the methodological and practical challenges in applying IRT methods. PMID:10982088
NASA Technical Reports Server (NTRS)
Payne, David G.; Gunther, Virginia A. L.
1988-01-01
Subjects performed short-term memory tasks, involving both spatial and verbal components, and a visual monitoring task involving either analog or digital display formats. These two tasks (memory vs. monitoring) were performed both singly and in conjunction. Contrary to expectations derived from multiple resource theories of attentional processes, there was no evidence that when the two tasks involved the same cognitive codes (i.e., either both spatial or both verbal/linguistic) there was more of a dual task performance decrement than when the two tasks employed different cognitive codes/processes. These results are discussed in terms of their implications for theories of attentional processes and also for research in mental state estimation.
A general theory of intertemporal decision-making and the perception of time.
Namboodiri, Vijay M K; Mihalas, Stefan; Marton, Tanya M; Hussain Shuler, Marshall G
2014-01-01
Animals and humans make decisions based on their expected outcomes. Since relevant outcomes are often delayed, perceiving delays and choosing between earlier vs. later rewards (intertemporal decision-making) is an essential component of animal behavior. The myriad observations made in experiments studying intertemporal decision-making and time perception have not yet been rationalized within a single theory. Here we present a theory, Training-Integrated Maximized Estimation of Reinforcement Rate (TIMERR), that explains a wide variety of behavioral observations made in intertemporal decision-making and the perception of time. Our theory postulates that animals make intertemporal choices to optimize expected reward rates over a limited temporal window which includes a past integration interval, over which experienced reward rate is estimated, as well as the expected delay to future reward. Using this theory, we derive mathematical expressions for both the subjective value of a delayed reward and the subjective representation of the delay. A unique contribution of our work is in finding that the past integration interval directly determines the steepness of temporal discounting and the non-linearity of time perception. In so doing, our theory provides a single framework to understand both intertemporal decision-making and time perception.
NASA Technical Reports Server (NTRS)
Zeng, X. C.; Stroud, D.
1989-01-01
The previously developed Ginzburg-Landau theory for calculating the crystal-melt interfacial tension of bcc elements is extended to treat the classical one-component plasma (OCP), the charged fermion system, and the Bose crystal. For the OCP, a direct application of the theory of Shih et al. (1987) yields for the surface tension 0.0012 Z²e²/a³, where Ze is the ionic charge and a is the radius of the ionic sphere. The Bose crystal-melt interface is treated by a quantum extension of the classical density-functional theory, using the Feynman formalism to estimate the relevant correlation functions. The theory is applied to the metastable He-4 solid-superfluid interface at T = 0, with a resulting surface tension of 0.085 erg/sq cm, in reasonable agreement with the value extrapolated from the measured surface tension of the bcc solid in the range 1.46-1.76 K. These results suggest that the density-functional approach is a satisfactory mean-field theory for estimating the equilibrium properties of liquid-solid interfaces, given knowledge of the uniform phases.
Estimating cosmic velocity fields from density fields and tidal tensors
NASA Astrophysics Data System (ADS)
Kitaura, Francisco-Shu; Angulo, Raul E.; Hoffman, Yehuda; Gottlöber, Stefan
2012-10-01
In this work we investigate the non-linear and non-local relation between cosmological density and peculiar velocity fields. Our goal is to provide an algorithm for the reconstruction of the non-linear velocity field from the fully non-linear density. We find that including the gravitational tidal field tensor using second-order Lagrangian perturbation theory based upon an estimate of the linear component of the non-linear density field significantly improves the estimate of the cosmic flow in comparison to linear theory not only in the low-density, but also and more dramatically in the high-density regions. In particular we test two estimates of the linear component: the lognormal model and the iterative Lagrangian linearization. The present approach relies on a rigorous higher order Lagrangian perturbation theory analysis which incorporates a non-local relation. It does not require additional fitting from simulations, being in this sense parameter free; it is independent of statistical-geometrical optimization, and it is straightforward and efficient to compute. The method is demonstrated to yield an unbiased estimator of the velocity field on scales ≳5 h⁻¹ Mpc with closely Gaussian distributed errors. Moreover, the statistics of the divergence of the peculiar velocity field are extremely well recovered, showing good agreement with the true statistics from N-body simulations. The typical errors of about 10 km s⁻¹ (1σ confidence intervals) are reduced by more than 80 per cent with respect to linear theory in the scale range between 5 and 10 h⁻¹ Mpc in high-density regions (δ > 2). We also find that iterative Lagrangian linearization is significantly superior in the low-density regime with respect to the lognormal model.
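For comparison with the full method, the linear-theory step alone (without the second-order Lagrangian tidal-tensor correction that the paper adds) can be computed with FFTs from a gridded overdensity field, since in linear theory v(k) = i a H f δ(k) k / k² up to sign convention. Grid size, box size, and the prefactor aHf below are assumed for illustration.

# Hedged sketch: linear-theory peculiar velocity from a gridded density field.
import numpy as np

def linear_velocity(delta, box_size, aHf):
    """Return (vx, vy, vz) on the grid from the overdensity field delta."""
    n = delta.shape[0]
    delta_k = np.fft.rfftn(delta)
    kfreq = 2.0 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    kz_freq = 2.0 * np.pi * np.fft.rfftfreq(n, d=box_size / n)
    kx, ky, kz = np.meshgrid(kfreq, kfreq, kz_freq, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                        # avoid division by zero at k = 0
    v = []
    for ki in (kx, ky, kz):
        vk = 1j * aHf * delta_k * ki / k2
        vk[0, 0, 0] = 0.0
        v.append(np.fft.irfftn(vk, s=delta.shape))
    return v

rng = np.random.default_rng(0)
delta = rng.normal(0.0, 0.1, size=(64, 64, 64))                  # toy overdensity field
vx, vy, vz = linear_velocity(delta, box_size=200.0, aHf=50.0)    # units assumed
print(vx.std(), vy.std(), vz.std())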
Theory of partitioning of disease prevalence and mortality in observational data.
Akushevich, I; Yashkin, A P; Kravchenko, J; Fang, F; Arbeev, K; Sloan, F; Yashin, A I
2017-04-01
In this study, we present a new theory of partitioning of disease prevalence and incidence-based mortality and demonstrate how this theory practically works for analyses of Medicare data. In the theory, the prevalence of a disease and incidence-based mortality are modeled in terms of disease incidence and survival after diagnosis supplemented by information on disease prevalence at the initial age and year available in a dataset. Partitioning of the trends of prevalence and mortality is calculated with minimal assumptions. The resulting expressions for the components of the trends are given by continuous functions of data. The estimator is consistent and stable. The developed methodology is applied for data on type 2 diabetes using individual records from a nationally representative 5% sample of Medicare beneficiaries age 65+. Numerical estimates show excellent concordance between empirical estimates and theoretical predictions. Evaluated partitioning model showed that both prevalence and mortality increase with time. The primary driving factors of the observed prevalence increase are improved survival and increased prevalence at age 65. The increase in diabetes-related mortality is driven by increased prevalence and unobserved trends in time-periods and age-groups outside of the range of the data used in the study. Finally, the properties of the new estimator, possible statistical and systematical uncertainties, and future practical applications of this methodology in epidemiology, demography, public health and health forecasting are discussed. Copyright © 2017 Elsevier Inc. All rights reserved.
Unbiased Estimates of Variance Components with Bootstrap Procedures
ERIC Educational Resources Information Center
Brennan, Robert L.
2007-01-01
This article provides general procedures for obtaining unbiased estimates of variance components for any random-model balanced design under any bootstrap sampling plan, with the focus on designs of the type typically used in generalizability theory. The results reported here are particularly helpful when the bootstrap is used to estimate standard…
ERIC Educational Resources Information Center
Casabianca, Jodi M.; Lewis, Charles
2015-01-01
Loglinear smoothing (LLS) estimates the latent trait distribution while making fewer assumptions about its form and maintaining parsimony, thus leading to more precise item response theory (IRT) item parameter estimates than standard marginal maximum likelihood (MML). This article provides the expectation-maximization algorithm for MML estimation…
Using Magnitude Estimation Scaling in Business Communication Research.
ERIC Educational Resources Information Center
Sturges, David L.
1990-01-01
Critically analyzes magnitude estimation scaling for its potential use in business communication research. Finds that the 12-15 percent increase in explained variance by magnitude estimation over categorical scaling methods may be useful in theory building but may not be sufficient to justify its added expense in applied business communication…
ERIC Educational Resources Information Center
Guarino, Cassandra M.; Maxfield, Michelle; Reckase, Mark D.; Thompson, Paul; Wooldridge, Jeffrey M.
2014-01-01
Empirical Bayes' (EB) estimation is a widely used procedure to calculate teacher value-added. It is primarily viewed as a way to make imprecise estimates more reliable. In this paper we review the theory of EB estimation and use simulated data to study its ability to properly rank teachers. We compare the performance of EB estimators with that of…
ERIC Educational Resources Information Center
Finch, Holmes; Edwards, Julianne M.
2016-01-01
Standard approaches for estimating item response theory (IRT) model parameters generally work under the assumption that the latent trait being measured by a set of items follows the normal distribution. Estimation of IRT parameters in the presence of nonnormal latent traits has been shown to generate biased person and item parameter estimates. A…
Testing simulation and structural models with applications to energy demand
NASA Astrophysics Data System (ADS)
Wolff, Hendrik
2007-12-01
This dissertation deals with energy demand and consists of two parts. Part one proposes a unified econometric framework for modeling energy demand and examples illustrate the benefits of the technique by estimating the elasticity of substitution between energy and capital. Part two assesses the energy conservation policy of Daylight Saving Time and empirically tests the performance of electricity simulation. In particular, the chapter "Imposing Monotonicity and Curvature on Flexible Functional Forms" proposes an estimator for inference using structural models derived from economic theory. This is motivated by the fact that in many areas of economic analysis theory restricts the shape as well as other characteristics of functions used to represent economic constructs. Specific contributions are (a) to increase the computational speed and tractability of imposing regularity conditions, (b) to provide regularity preserving point estimates, (c) to avoid biases existent in previous applications, and (d) to illustrate the benefits of our approach via numerical simulation results. The chapter "Can We Close the Gap between the Empirical Model and Economic Theory" discusses the more fundamental question of whether the imposition of a particular theory to a dataset is justified. I propose a hypothesis test to examine whether the estimated empirical model is consistent with the assumed economic theory. Although the proposed methodology could be applied to a wide set of economic models, this is particularly relevant for estimating policy parameters that affect energy markets. This is demonstrated by estimating the Slutsky matrix and the elasticity of substitution between energy and capital, which are crucial parameters used in computable general equilibrium models analyzing energy demand and the impacts of environmental regulations. Using the Berndt and Wood dataset, I find that capital and energy are complements and that the data are significantly consistent with duality theory. Both results would not necessarily be achieved using standard econometric methods. The final chapter "Daylight Time and Energy" uses a quasi-experiment to evaluate a popular energy conservation policy: we challenge the conventional wisdom that extending Daylight Saving Time (DST) reduces energy demand. Using detailed panel data on half-hourly electricity consumption, prices, and weather conditions from four Australian states we employ a novel 'triple-difference' technique to test the electricity-saving hypothesis. We show that the extension failed to reduce electricity demand and instead increased electricity prices. We also apply the most sophisticated electricity simulation model available in the literature to the Australian data. We find that prior simulation models significantly overstate electricity savings. Our results suggest that extending DST will fail as an instrument to save energy resources.
NASA Astrophysics Data System (ADS)
Pazderin, A. V.; Sof'in, V. V.; Samoylenko, V. O.
2015-11-01
Efforts aimed at improving energy efficiency in all branches of the fuel and energy complex shall be commenced with setting up a high-tech automated system for monitoring and accounting energy resources. Malfunctions and failures in the measurement and information parts of this system may distort commercial measurements of energy resources and lead to financial risks for power supplying organizations. In addition, measurement errors may be connected with intentional distortion of measurements for reducing payment for using energy resources on the consumer's side, which leads to commercial loss of energy resource. The article presents a universal mathematical method for verifying the validity of measurement information in networks for transporting energy resources, such as electricity and heat, petroleum, gas, etc., based on the state estimation theory. The energy resource transportation network is represented by a graph the nodes of which correspond to producers and consumers, and its branches stand for transportation mains (power lines, pipelines, and heat network elements). The main idea of state estimation is connected with obtaining the calculated analogs of energy resources for all available measurements. Unlike "raw" measurements, which contain inaccuracies, the calculated flows of energy resources, called estimates, will fully satisfy the suitability condition for all state equations describing the energy resource transportation network. The state equations written in terms of calculated estimates will be already free from residuals. The difference between a measurement and its calculated analog (estimate) is called in the estimation theory an estimation remainder. The obtained large values of estimation remainders are an indicator of high errors of particular energy resource measurements. By using the presented method it is possible to improve the validity of energy resource measurements, to estimate the transportation network observability, to eliminate the energy resource flows measurement imbalances, and to filter invalid measurements at the data acquisition and processing stage in performing monitoring of an automated energy resource monitoring and accounting system.
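As a minimal illustration of the state-estimation idea described above, the sketch below applies weighted least squares to a hypothetical three-node transport network. The topology, meter readings, and accuracies are invented for illustration and are not taken from the article; the point is only to show how estimates satisfy the flow-balance equations and how large estimation remainders flag invalid measurements.

```python
import numpy as np

# State vector: flows on the two branches, x = [f_AB, f_AC].
# Measurements (all hypothetical): a meter on each branch, plus the
# injection at node A and the withdrawals at nodes B and C.
H = np.array([
    [1.0, 0.0],   # branch meter A->B
    [0.0, 1.0],   # branch meter A->C
    [1.0, 1.0],   # injection at A = f_AB + f_AC
    [1.0, 0.0],   # withdrawal at B = f_AB
    [0.0, 1.0],   # withdrawal at C = f_AC
])
z = np.array([10.2, 5.1, 15.0, 9.9, 8.0])    # last meter is grossly in error
sigma = np.array([0.2, 0.2, 0.3, 0.2, 0.2])  # assumed meter accuracies
W = np.diag(1.0 / sigma**2)

# Weighted least-squares state estimate: the "calculated analogs" of the
# measurements satisfy the flow-balance equations exactly.
x_hat = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)
z_hat = H @ x_hat                 # estimates of every measurement
r = z - z_hat                     # estimation remainders (residuals)
print("estimated flows:", x_hat)
print("normalized remainders:", np.round(r / sigma, 2))
# The withdrawal meter at C stands out with a large normalized remainder,
# flagging it as an invalid measurement.
```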
Angular coherence in ultrasound imaging: Theory and applications
Li, You Leo; Dahl, Jeremy J.
2017-01-01
The popularity of plane-wave transmits at multiple transmit angles for synthetic transmit aperture (or coherent compounding) has spawned a number of adaptations and new developments of ultrasonic imaging. However, the coherence properties of backscattered signals with plane-wave transmits at different angles are unknown and may impact a subset of these techniques. To provide a framework for the analysis of the coherence properties of such signals, this article introduces the angular coherence theory in medical ultrasound imaging. The theory indicates that the correlation function of such signals forms a Fourier transform pair with the autocorrelation function of the receive aperture function. This conclusion can be considered an extended form of the van Cittert–Zernike theorem. The theory is validated with simulation and experimental results obtained on speckle targets. On the basis of the angular coherence of the backscattered wave, a new short-lag angular coherence beamformer is proposed and compared with an existing spatial-coherence-based beamformer. An application of the theory in phase shift estimation and speed-of-sound estimation is also presented. PMID:28372139
Clayson, Peter E; Miller, Gregory A
2017-01-01
Failing to consider psychometric issues related to reliability and validity, differential deficits, and statistical power potentially undermines the conclusions of a study. In research using event-related brain potentials (ERPs), numerous contextual factors (population sampled, task, data recording, analysis pipeline, etc.) can impact the reliability of ERP scores. The present review considers the contextual factors that influence ERP score reliability and the downstream effects that reliability has on statistical analyses. Given the context-dependent nature of ERPs, it is recommended that ERP score reliability be formally assessed on a study-by-study basis. Recommended guidelines for ERP studies include 1) reporting the threshold of acceptable reliability and reliability estimates for observed scores, 2) specifying the approach used to estimate reliability, and 3) justifying how trial-count minima were chosen. A reliability threshold for internal consistency of at least 0.70 is recommended, and a threshold of 0.80 is preferred. The review also advocates the use of generalizability theory for estimating score dependability (the generalizability theory analog to reliability) as an improvement on classical test theory reliability estimates, suggesting that the latter is less well suited to ERP research. To facilitate the calculation and reporting of dependability estimates, an open-source Matlab program, the ERP Reliability Analysis Toolbox, is presented. Copyright © 2016 Elsevier B.V. All rights reserved.
Near-Infrared (0.67-4.7 microns) Optical Constants Estimated for Montmorillonite
NASA Technical Reports Server (NTRS)
Roush, T. L.
2005-01-01
Various models of the reflectance from particulate surfaces are used for interpretation of remote sensing data of solar system objects. These models rely upon the real (n) and imaginary (k) refractive indices of the materials. Such values are limited for commonly encountered silicates at visual and near-infrared wavelengths (lambda, 0.4-5 microns). Availability of optical constants for candidate materials allows more thorough modeling of the observations obtained by Earth-based telescopes and spacecraft. Two approaches for determining the absorption coefficient (alpha=2pik/lambda) from reflectance measurements of particulates have been described; one relies upon Kubelka-Munk theory and the other Hapke theory. Both have been applied to estimate alpha and k for various materials. Neither enables determination of the wavelength dependence of n, n=f(lambda). Thus, a mechanism providing this ability is desirable. Using Hapke-theory to estimate k from reflectance measurements requires two additional quantities be known or assumed: 1) n=f(lambda) and 2) d, the sample particle diameter. Typically n is assumed constant (c) or modestly varying with lambda; referred to here as n(sub 0). Assuming n(sub 0), at each lambda an estimate of k is used to calculate the reflectance and is iteratively adjusted until the difference between the model and measured reflectance is minimized. The estimated k's (k(sub 1)) are the final results, and this concludes the typical analysis.
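The iterative per-wavelength adjustment of k described above can be sketched as follows. The reflectance model here is a deliberately simple stand-in (a normal-incidence Fresnel surface term plus a single-pass internal transmission term), not Hapke's equations, and the assumed n, particle diameter, and measured reflectances are hypothetical; only the root-finding loop structure is the point.

```python
import numpy as np
from scipy.optimize import brentq

# Stand-in reflectance model: NOT Hapke's equations, just a monotone toy
# relation between absorption and reflectance so the iterative adjustment
# of k can be shown end to end.
def model_reflectance(k, n0, d_um, wavelength_um):
    alpha = 2.0 * np.pi * k / wavelength_um   # relation quoted in the abstract
    internal = np.exp(-alpha * d_um)          # single-pass internal transmission
    surface = ((n0 - 1.0) / (n0 + 1.0)) ** 2  # normal-incidence Fresnel term
    return surface + (1.0 - surface) * internal

def estimate_k(r_measured, n0=1.5, d_um=50.0, wavelength_um=2.0):
    # Adjust k until the modeled reflectance matches the measurement.
    f = lambda k: model_reflectance(k, n0, d_um, wavelength_um) - r_measured
    return brentq(f, 1e-8, 1.0)

# Hypothetical reflectance spectrum at three wavelengths (microns).
for lam, r in [(1.0, 0.55), (2.0, 0.40), (3.0, 0.25)]:
    k = estimate_k(r, wavelength_um=lam)
    print(f"lambda = {lam} um  ->  k ~ {k:.4e}")
```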
Experimental Characterization of Supercavitating Fins Piercing a Ventilated Supercavity
2013-08-05
[Fragment from the report's list of figures: estimated lift and drag for a flat plate hydrofoil vs. angle of attack and cavitation number using Wu's free streamline theory (Wu, 1955); comparison of theory and measured lift coefficients, 2 inch chord, γ = 0°, large cavitator; comparison of theory and measured lift coefficients, 2 inch chord, γ = 45°, small cavitator; comparison of theory and measured drag coefficients, 2 inch chord, γ = 0°, large cavitator.]
An information theory framework for dynamic functional domain connectivity.
Vergara, Victor M; Miller, Robyn; Calhoun, Vince
2017-06-01
Dynamic functional network connectivity (dFNC) analyzes the time evolution of coherent activity in the brain. In this technique dynamic changes are considered for the whole brain. This paper proposes an information theory framework to measure information flowing among subsets of functional networks called functional domains. Our method aims at estimating bits of information contained and shared among domains. The succession of dynamic functional states is estimated at the domain level. Information quantity is based on the probabilities of observing each dynamic state. A mutual information measurement is then obtained from probabilities across domains. We named this value the cross domain mutual information (CDMI). Strong CDMIs were observed in relation to the subcortical domain. Domains related to sensory input, motor control and the cerebellum form another CDMI cluster. Information flow among other domains was seldom found. Other methods of dynamic connectivity focus on whole-brain dFNC matrices. In the current framework, information theory is applied to states estimated from pairs of multi-network functional domains. In this context, we apply information theory to measure information flow across functional domains. Identified CDMI clusters point to known information pathways in the basal ganglia and also among areas of sensory input, patterns found in static functional connectivity. In contrast, CDMI across brain areas of higher-level cognitive processing follows a different pattern that indicates scarce information sharing. These findings show that employing information theory to formally measure information flow through brain domains reveals additional features of functional connectivity. Copyright © 2017 Elsevier B.V. All rights reserved.
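A minimal sketch of the mutual-information step is shown below, assuming the dynamics of two functional domains have already been reduced to discrete state sequences; the sequences here are synthetic, and the estimator is a plain plug-in computation rather than the authors' exact pipeline.

```python
import numpy as np

def mutual_information(states_a, states_b):
    """Mutual information (in bits) between two discrete state sequences."""
    a = np.asarray(states_a)
    b = np.asarray(states_b)
    joint = np.zeros((a.max() + 1, b.max() + 1))
    for i, j in zip(a, b):
        joint[i, j] += 1
    joint /= joint.sum()                    # joint state probabilities
    pa = joint.sum(axis=1, keepdims=True)   # marginal for domain A
    pb = joint.sum(axis=0, keepdims=True)   # marginal for domain B
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (pa @ pb)[nz])).sum())

# Synthetic example: domain B partly follows domain A's dynamic states.
rng = np.random.default_rng(0)
a = rng.integers(0, 4, size=500)
b = np.where(rng.random(500) < 0.7, a, rng.integers(0, 4, size=500))
print(f"cross-domain mutual information ~ {mutual_information(a, b):.3f} bits")
```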
Aligning the Measurement of Microbial Diversity with Macroecological Theory
Stegen, James C.; Hurlbert, Allen H.; Bond-Lamberty, Ben; ...
2016-09-23
The number of microbial operational taxonomic units (OTUs) within a community is akin to species richness within plant/animal (‘macrobial’) systems. A large literature documents OTU richness patterns, drawing comparisons to macrobial theory. There is, however, an unrecognized fundamental disconnect between OTU richness and macrobial theory: OTU richness is commonly estimated on a per-individual basis, while macrobial richness is estimated per-area. Furthermore, the range or extent of sampled environmental conditions can strongly influence a study’s outcomes and conclusions, but this is not commonly addressed when studying OTU richness. Here we (i) propose a new sampling approach that estimates OTU richness per-mass of soil, which results in strong support for species energy theory, (ii) use data reduction to show how support for niche conservatism emerges when sampling across a restricted range of environmental conditions, and (iii) show how additional insights into drivers of OTU richness can be generated by combining different sampling methods while simultaneously considering patterns that emerge by restricting the range of environmental conditions. We propose that a more rigorous connection between microbial ecology and macrobial theory can be facilitated by exploring how changes in OTU richness units and environmental extent influence outcomes of data analysis. While fundamental differences between microbial and macrobial systems persist (e.g., species concepts), we suggest that closer attention to units and scale provides tangible and immediate improvements to our understanding of the processes governing OTU richness and how those processes relate to drivers of macrobial species richness.
An analysis of possible applications of fuzzy set theory to the actuarial credibility theory
NASA Technical Reports Server (NTRS)
Ostaszewski, Krzysztof; Karwowski, Waldemar
1992-01-01
In this work, we review the basic concepts of actuarial credibility theory from the point of view of introducing applications of the fuzzy set-theoretic method. We show how the concept of actuarial credibility can be modeled through the fuzzy set membership functions and how fuzzy set methods, especially fuzzy pattern recognition, can provide an alternative tool for estimating credibility.
Maximum likelihood techniques applied to quasi-elastic light scattering
NASA Technical Reports Server (NTRS)
Edwards, Robert V.
1992-01-01
An automatic procedure is needed for reliably estimating the quality of particle-size measurements from QELS (Quasi-Elastic Light Scattering). Obtaining the measurement itself, before any error estimates can be made, is a problem because it comes from a very indirect measurement of a signal derived from the motion of particles in the system and requires the solution of an inverse problem. The eigenvalue structure of the transform that generates the signal is such that an arbitrarily small amount of noise can obliterate parts of any practical inversion spectrum. This project uses Maximum Likelihood Estimation (MLE) as a framework to generate a theory and a functioning set of software to oversee the measurement process and extract the particle size information, while at the same time providing error estimates for those measurements. The theory involved verifying a correct form of the covariance matrix for the noise on the measurement and then estimating particle size parameters using a modified histogram approach.
Combining statistical inference and decisions in ecology
Williams, Perry J.; Hooten, Mevin B.
2016-01-01
Statistical decision theory (SDT) is a sub-field of decision theory that formally incorporates statistical investigation into a decision-theoretic framework to account for uncertainties in a decision problem. SDT provides a unifying analysis of three types of information: statistical results from a data set, knowledge of the consequences of potential choices (i.e., loss), and prior beliefs about a system. SDT links the theoretical development of a large body of statistical methods including point estimation, hypothesis testing, and confidence interval estimation. The theory and application of SDT have mainly been developed and published in the fields of mathematics, statistics, operations research, and other decision sciences, but have had limited exposure in ecology. Thus, we provide an introduction to SDT for ecologists and describe its utility for linking the conventionally separate tasks of statistical investigation and decision making in a single framework. We describe the basic framework of both Bayesian and frequentist SDT, its traditional use in statistics, and discuss its application to decision problems that occur in ecology. We demonstrate SDT with two types of decisions: Bayesian point estimation, and an applied management problem of selecting a prescribed fire rotation for managing a grassland bird species. Central to SDT, and decision theory in general, are loss functions. Thus, we also provide basic guidance and references for constructing loss functions for an SDT problem.
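The role of the loss function can be illustrated with a small sketch: for Bayesian point estimation, the action minimizing posterior expected loss is the posterior mean under squared-error loss and the posterior median under absolute-error loss. The posterior samples and candidate actions below are synthetic and are not taken from the grassland-bird example.

```python
import numpy as np

# Posterior samples for a parameter (synthetic, right-skewed so that the
# choice of loss function visibly matters).
rng = np.random.default_rng(1)
posterior = rng.gamma(shape=2.0, scale=1.5, size=20_000)

# Bayes estimate = action minimizing posterior expected loss:
#   squared-error loss  -> posterior mean
#   absolute-error loss -> posterior median
print("posterior mean   (squared-error loss):", np.round(posterior.mean(), 3))
print("posterior median (absolute-error loss):", np.round(np.median(posterior), 3))

# The same machinery applies to applied decisions (e.g., choosing among
# management actions): evaluate the expected loss of each candidate action
# and pick the minimizer.
def expected_loss(action, samples, loss):
    return loss(action, samples).mean()

candidates = np.linspace(0.0, 10.0, 101)
squared_error = lambda a, theta: (a - theta) ** 2
best = candidates[np.argmin([expected_loss(a, posterior, squared_error)
                             for a in candidates])]
print("action minimizing expected squared-error loss:", best)
```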
To compute lightness, illumination is not estimated, it is held constant.
Gilchrist, Alan L
2018-05-03
The light reaching the eye from a surface does not indicate the black-gray-white shade of a surface (called lightness) because the effects of illumination level are confounded with the reflectance of the surface. Rotating a gray paper relative to a light source alters its luminance (intensity of light reaching the eye) but the lightness of the paper remains relatively constant. Recent publications have argued, as had Helmholtz (1866/1924), that the visual system unconsciously estimates the direction and intensity of the light source. We report experiments in which this theory was pitted against an alternative theory according to which illumination level and surface reflectance are disentangled by comparing only those surfaces that are equally illuminated, in other words, by holding illumination level constant. A 3-dimensional scene was created within which the rotation of a target surface would be expected to become darker gray according to the lighting estimation theory, but lighter gray according to the equi-illumination comparison theory, with results clearly favoring the latter. In a further experiment cues held to indicate light source direction (cast shadows, attached shadows, and glossy highlights) were completely eliminated and yet this had no effect on the results. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Five Methods for Estimating Angoff Cut Scores with IRT
ERIC Educational Resources Information Center
Wyse, Adam E.
2017-01-01
This article illustrates five different methods for estimating Angoff cut scores using item response theory (IRT) models. These include maximum likelihood (ML), expected a priori (EAP), modal a priori (MAP), and weighted maximum likelihood (WML) estimators, as well as the most commonly used approach based on translating ratings through the test…
Estimation of Effect Size from a Series of Experiments Involving Paired Comparisons.
ERIC Educational Resources Information Center
Gibbons, Robert D.; And Others
1993-01-01
A distribution theory is derived for a G. V. Glass-type (1976) estimator of effect size from studies involving paired comparisons. The possibility of combining effect sizes from studies involving a mixture of related and unrelated samples is also explored. Resulting estimates are illustrated using data from previous psychiatric research. (SLD)
A Nonparametric Approach to Estimate Classification Accuracy and Consistency
ERIC Educational Resources Information Center
Lathrop, Quinn N.; Cheng, Ying
2014-01-01
When cut scores for classifications occur on the total score scale, popular methods for estimating classification accuracy (CA) and classification consistency (CC) require assumptions about a parametric form of the test scores or about a parametric response model, such as item response theory (IRT). This article develops an approach to estimate CA…
High Temperature Tensile Properties of Unidirectional Hi-Nicalon/Celsian Composites In Air
NASA Technical Reports Server (NTRS)
Gyekenyesi, John Z.; Bansal, Narottam P.
2000-01-01
High temperature tensile properties of unidirectional BN/SiC-coated Hi-Nicalon SiC fiber reinforced celsian matrix composites have been measured from room temperature to 1200 C (2190 F) in air. Young's modulus, the first matrix cracking stress, and the ultimate strength decreased from room temperature to 1200 C (2190 F). The applicability of various micromechanical models, in predicting room temperature values of various mechanical properties for this CMC, has also been investigated. The simple rule of mixtures produced an accurate estimate of the primary composite modulus. The first matrix cracking stress estimated from ACK theory was in good agreement with the experimental value. The modified fiber bundle failure theory of Evans gave a good estimate of the ultimate strength.
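For reference, the rule-of-mixtures estimate of the longitudinal modulus is just a volume-fraction-weighted average of the constituent moduli. The values in the sketch below are placeholders, not the measured Hi-Nicalon/celsian properties.

```python
# Rule-of-mixtures estimate of the longitudinal modulus of a
# unidirectional composite (all inputs hypothetical).
E_fiber = 270.0   # GPa, placeholder fiber modulus
E_matrix = 70.0   # GPa, placeholder matrix modulus
V_fiber = 0.40    # fiber volume fraction

E_composite = V_fiber * E_fiber + (1.0 - V_fiber) * E_matrix
print(f"Rule-of-mixtures modulus: {E_composite:.1f} GPa")
```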
Minimum Expected Risk Estimation for Near-neighbor Classification
2006-04-01
We consider the problems of class probability estimation and classification when using near-neighbor classifiers, such as k-nearest neighbors (kNN) … estimate for weighted kNN classifiers with different prior information, for a broad class of risk functions. Theory and simulations show how significant … the difference is compared to the standard maximum likelihood weighted kNN estimates. Comparisons are made with uniform weights, symmetric weights
Attitude estimation of earth orbiting satellites by decomposed linear recursive filters
NASA Technical Reports Server (NTRS)
Kou, S. R.
1975-01-01
Attitude estimation of earth-orbiting satellites (including the Large Space Telescope) subjected to environmental disturbances and noise was investigated. Modern control and estimation theory is used as a tool to design an efficient estimator for attitude estimation. Decomposed linear recursive filters for both continuous-time systems and discrete-time systems are derived. Using this accurate estimate of spacecraft attitude, a state-variable feedback controller may be designed to meet stringent system performance requirements.
Lee, Sanghun; Park, Sung Soo
2011-11-03
Dielectric constants of electrolytic organic solvents are calculated employing nonpolarizable Molecular Dynamics simulation with the Electronic Continuum (MDEC) model and Density Functional Theory. The molecular polarizabilities are obtained at the B3LYP/6-311++G(d,p) level of theory to estimate high-frequency refractive indices, while the densities and dipole moment fluctuations are computed using nonpolarizable MD simulations. The dielectric constants reproduced from these procedures provide a reliable approach for estimating the experimental data. In addition, two representative solvents with similar molecular weights but different dielectric properties, ethyl methyl carbonate and propylene carbonate, are compared using MD simulations, and distinctly different dielectric behaviors are observed at short as well as long times.
Qualitative methods in quantum theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Migdal, A.B.
The author feels that the solution of most problems in theoretical physics begins with the application of qualitative methods - dimensional estimates and estimates made from simple models, the investigation of limiting cases, the use of the analytic properties of physical quantities, etc. This book proceeds in this spirit, rather than in a formal, mathematical way with no traces of the sweat involved in the original work left to show. The chapters are entitled Dimensional and model approximations, Various types of perturbation theory, The quasi-classical approximation, Analytic properties of physical quantities, Methods in the many-body problem, and Qualitative methods in quantum field theory. Each chapter begins with a detailed introduction, in which the physical meaning of the results obtained in that chapter is explained in a simple way. 61 figures. (RWR)
Martin, Thomas E.; Riordan, Margaret M.; Repin, Rimi; Mouton, James C.; Blake, William M.
2017-01-01
Aim: Adult survival is central to theories explaining latitudinal gradients in life history strategies. Life history theory predicts higher adult survival in tropical than north temperate regions given lower fecundity and parental effort. Early studies were consistent with this prediction, but standard-effort netting studies in recent decades suggested that apparent survival rates in temperate and tropical regions strongly overlap. Such results do not fit with life history theory. Targeted marking and resighting of breeding adults yielded higher survival estimates in the tropics, but this approach is thought to overestimate survival because it does not sample social and age classes with lower survival. We compared the effect of field methods on tropical survival estimates and their relationships with life history traits. Location: Sabah, Malaysian Borneo. Time period: 2008–2016. Major taxon: Passeriformes. Methods: We used standard-effort netting and resighted individuals of all social and age classes of 18 tropical songbird species over 8 years. We compared apparent survival estimates between these two field methods with differing analytical approaches. Results: Estimated detection and apparent survival probabilities from standard-effort netting were similar to those from other tropical studies that used standard-effort netting. Resighting data verified that a high proportion of individuals that were never recaptured in standard-effort netting remained in the study area, and many were observed breeding. Across all analytical approaches, addition of resighting yielded substantially higher survival estimates than did standard-effort netting alone. These apparent survival estimates were higher than for temperate zone species, consistent with latitudinal differences in life histories. Moreover, apparent survival estimates from addition of resighting, but not from standard-effort netting alone, were correlated with parental effort as measured by egg temperature across species. Main conclusions: Inclusion of resighting showed that standard-effort netting alone can negatively bias apparent survival estimates and obscure life history relationships across latitudes and among tropical species.
Decision analysis with cumulative prospect theory.
Bayoumi, A M; Redelmeier, D A
2000-01-01
Individuals sometimes express preferences that do not follow expected utility theory. Cumulative prospect theory adjusts for some phenomena by using decision weights rather than probabilities when analyzing a decision tree. The authors examined how probability transformations from cumulative prospect theory might alter a decision analysis of a prophylactic therapy in AIDS, eliciting utilities from patients with HIV infection (n = 75) and calculating expected outcomes using an established Markov model. They next focused on transformations of three sets of probabilities: 1) the probabilities used in calculating standard-gamble utility scores; 2) the probabilities of being in discrete Markov states; 3) the probabilities of transitioning between Markov states. The same prophylaxis strategy yielded the highest quality-adjusted survival under all transformations. For the average patient, prophylaxis appeared relatively less advantageous when standard-gamble utilities were transformed. Prophylaxis appeared relatively more advantageous when state probabilities were transformed and relatively less advantageous when transition probabilities were transformed. Transforming standard-gamble and transition probabilities simultaneously decreased the gain from prophylaxis by almost half. Sensitivity analysis indicated that even near-linear probability weighting transformations could substantially alter quality-adjusted survival estimates. The magnitude of benefit estimated in a decision-analytic model can change significantly after using cumulative prospect theory. Incorporating cumulative prospect theory into decision analysis can provide a form of sensitivity analysis and may help describe when people deviate from expected utility theory.
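As an illustration of the probability transformations involved, the sketch below evaluates one commonly cited weighting-function form (Tversky & Kahneman, 1992). The curvature parameter is an assumed value, and the abstract does not specify which weighting function the authors applied; the example only shows how even modest curvature reweights probabilities before they enter a decision tree.

```python
import numpy as np

def tk_weight(p, gamma=0.61):
    """Tversky-Kahneman (1992) probability weighting function.

    gamma=0.61 is an often-quoted estimate for gains; treat it as
    illustrative rather than as the value used in the study.
    """
    p = np.asarray(p, dtype=float)
    return p**gamma / (p**gamma + (1.0 - p)**gamma) ** (1.0 / gamma)

probs = np.array([0.01, 0.10, 0.50, 0.90, 0.99])
print(np.column_stack([probs, tk_weight(probs).round(3)]))
# Small probabilities are over-weighted and large probabilities
# under-weighted, which is what shifts the apparent value of a
# prophylaxis strategy when the transformation is applied.
```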
Flight Mechanics/Estimation Theory Symposium
NASA Technical Reports Server (NTRS)
Fuchs, A. J. (Editor)
1980-01-01
Methods of determining satellite orbit and attitude parameters are considered. The Goddard Trajectory Determination System, the Global Positioning System, and the Tracking and Data Relay Satellites are among the satellite navigation systems discussed. Satellite perturbation theory, orbit/attitude determination using landmark data, and star measurements are also covered.
ERIC Educational Resources Information Center
EASTCONN Regional Educational Services Center, North Windham, CT.
This secondary carpentry program is designed for grades 10, 11, and 12. Sophomores learn applicable trade procedures and practices, use of tools and materials, products, and devices common to the trade. Juniors receive work experience and a continuing theory program. Seniors are given advanced theory, cost estimation, materials listing, job…
NASA Technical Reports Server (NTRS)
Carlson, Harry W.; Mann, Michael J.
1992-01-01
A survey of research on drag-due-to-lift minimization at supersonic speeds, including a study of the effectiveness of current design and analysis methods was conducted. The results show that a linearized theory analysis with estimated attainable thrust and vortex force effects can predict with reasonable accuracy the lifting efficiency of flat wings. Significantly better wing performance can be achieved through the use of twist and camber. Although linearized theory methods tend to overestimate the amount of twist and camber required for a given application and provide an overly optimistic performance prediction, these deficiencies can be overcome by implementation of recently developed empirical corrections. Numerous examples of the correlation of experiment and theory are presented to demonstrate the applicability and limitations of linearized theory methods with and without empirical corrections. The use of an Euler code for the estimation of aerodynamic characteristics of a twisted and cambered wing and its application to design by iteration are discussed.
The maximum entropy method of moments and Bayesian probability theory
NASA Astrophysics Data System (ADS)
Bretthorst, G. Larry
2013-08-01
The problem of density estimation occurs in many disciplines. For example, in MRI it is often necessary to classify the types of tissues in an image. To perform this classification one must first identify the characteristics of the tissues to be classified. These characteristics might be the intensity of a T1 weighted image, and in MRI many other types of characteristic weightings (classifiers) may be generated. In a given tissue type there is no single intensity that characterizes the tissue; rather, there is a distribution of intensities. Often these distributions can be characterized by a Gaussian, but just as often they are much more complicated. Either way, estimating the distribution of intensities is an inference problem. In the case of a Gaussian distribution, one must estimate the mean and standard deviation. However, in the non-Gaussian case the shape of the density function itself must be inferred. Three common techniques for estimating density functions are binned histograms [1, 2], kernel density estimation [3, 4], and the maximum entropy method of moments [5, 6]. In the introduction, the maximum entropy method of moments will be reviewed. Some of its problems and conditions under which it fails will be discussed. Then in later sections, the functional form of the maximum entropy method of moments probability distribution will be incorporated into Bayesian probability theory. It will be shown that Bayesian probability theory solves all of the problems with the maximum entropy method of moments. One gets posterior probabilities for the Lagrange multipliers, and, finally, one can put error bars on the resulting estimated density function.
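A minimal sketch of the moment-matching step in the maximum entropy method of moments: with constraints on the first two moments, the density takes the form p(x) proportional to exp(lambda1*x + lambda2*x^2), and the Lagrange multipliers are solved for numerically on a grid. The target moments below are made up, and this sketch does not include the Bayesian extension the abstract describes.

```python
import numpy as np
from scipy.optimize import root

x = np.linspace(-6.0, 6.0, 2001)
dx = x[1] - x[0]
target = np.array([0.5, 1.2])           # hypothetical first and second moments

def density(lmbda):
    # Maximum entropy form for two moment constraints.
    logp = lmbda[0] * x + lmbda[1] * x**2
    logp -= logp.max()                   # numerical stability
    p = np.exp(logp)
    return p / (p.sum() * dx)            # normalize on the grid

def moment_gap(lmbda):
    p = density(lmbda)
    m1 = (x * p).sum() * dx
    m2 = (x**2 * p).sum() * dx
    return np.array([m1, m2]) - target

sol = root(moment_gap, x0=[0.0, -0.5])   # start near a broad Gaussian
print("Lagrange multipliers:", sol.x.round(4))
# With only first- and second-moment constraints the solution is a
# Gaussian; adding higher moments bends the estimate away from it.
```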
Fundamentals of health risk assessment. Use, derivation, validity and limitations of safety indices.
Putzrath, R M; Wilson, J D
1999-04-01
We investigated the way results of human health risk assessments are used, and the theory used to describe those methods, sometimes called the "NAS paradigm." Contrary to a key tenet of that theory, current methods have strictly limited utility. The characterizations now considered standard, Safety Indices such as "Acceptable Daily Intake," "Reference Dose," and so on, usefully inform only decisions that require a choice between two policy alternatives (e.g., approve a food additive or not), decided solely on the basis of a finding of safety. Risk is characterized as the quotient of one of these Safety Indices divided by an estimate of exposure: a quotient greater than one implies that the situation may be considered safe. Such decisions are very widespread, both in the U.S. federal government and elsewhere. No current method is universal; different policies lead to different practices, for example, in California's "Proposition 65," where statutory provisions specify some practices. Further, an important kind of human health risk assessment is not recognized by this theory: this kind characterizes risk as likelihood of harm, given estimates of exposure consequent to various decision choices. Likelihood estimates are necessary whenever decision makers have many possible decision choices and must weigh more than two societal values, such as in EPA's implementation of "conventional air pollutants." These estimates can not be derived using current methods; different methods are needed. Our analysis suggests changes needed in both the theory and practice of human health risk assessment, and how what is done is depicted.
SU-E-QI-08: Fourier Properties of Cone Beam CT Projection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bai, T; UT Southwestern Medical Center, Dallas, TX; Yan, H
Purpose: To explore the Fourier properties of cone beam CT (CBCT) projections and apply the property to directly estimate the noise level of CBCT projections without any prior information. Methods: By utilizing the properties of Bessel functions, we derive the Fourier properties of the CBCT projections for an arbitrary point object. It is found that there exists a double-wedge shaped region in the Fourier space where the intensity is approximately zero. We further derive the Fourier properties of independent noise added to CBCT projections. The expectation of the squared modulus at any point of the Fourier space is constant and approximately equals the noise energy. We further validate the theory in numerical simulations for both a delta function object and a NCAT phantom with different levels of noise added. Results: Our simulation confirmed the existence of the double-wedge shaped region in the Fourier domain for the x-ray projection image. The boundary locations of this region agree well with theoretical predictions. In the experiments of estimating noise level, the mean relative error between the theoretical estimate and the ground truth values is 2.697%. Conclusion: A novel theory on the Fourier properties of CBCT projections has been discovered. Accurate noise level estimation can be achieved by applying this theory directly to the measured CBCT projections. This work was supported in part by NIH (1R01CA154747-01), NSFC (No. 61172163), the Research Fund for the Doctoral Program of Higher Education of China (No. 20110201110011) and the China Scholarship Council.
The estimation of genetic divergence
NASA Technical Reports Server (NTRS)
Holmquist, R.; Conroy, T.
1981-01-01
Consideration is given to the criticism of Nei and Tateno (1978) of the REH (random evolutionary hits) theory of genetic divergence in nucleic acids and proteins, and to their proposed alternative estimator of total fixed mutations designated X2. It is argued that the assumption of nonuniform amino acid or nucleotide substitution will necessarily increase REH estimates relative to those made for a model where each locus has an equal likelihood of fixing mutations, thus the resulting value will not be an overestimation. The relative values of X2 and measures calculated on the basis of the PAM and REH theories for the number of nucleotide substitutions necessary to explain a given number of observed amino acid differences between two homologous proteins are compared, and the smaller values of X2 are attributed to (1) a mathematical model based on the incorrect assumption that an entire structural gene is free to fix mutations and (2) the assumptions of different numbers of variable codons for the X2 and REH calculations. Results of a repeat of the computer simulations of Nei and Tateno are presented which, in contrast to the original results, confirm the REH theory. It is pointed out that while a negative correlation is observed between estimations of the fixation intensity per varion and the number of varions for a given pair of sequences, the correlation between the two fixation intensities and varion numbers of two different pairs of sequences need not be negative. Finally, REH theory is used to resolve a paradox concerning the high rate of covarion turnover and the nature of general function sites as permanent covarions.
2013-01-29
[Fragmentary excerpt: the report draws on modern portfolio and control theory. The reformulation allows for possible changes in estimated quantities (e.g., due to market shifts in … Portfolio Theory (MPT). Final Report: NPS award N00244-11-1-0003. Extending CEM and Markov: Agent-Based Modeling Approach. Research conducted in the … integration and acquisition from a robust portfolio theory standpoint. Robust portfolio management methodologies have been widely used by financial …]
Democratization of Nanoscale Imaging and Sensing Tools Using Photonics
2015-06-12
[Fragmentary excerpt, including figure caption text: a representative angular scattering pattern recorded on the cell phone; (b) measured (black) and Mie-theory-fitted (red) angle-dependent scattering … sample onto the cell phone image sensor (Figure 3a). The one-dimensional radial scattering profile was then fitted with Mie theory to estimate the … quantitatively well understood, as the experimental measurements closely match the predictions of our theory and simulations. Furthermore, the signal …]
The Einstein-Hilbert gravitation with minimum length
NASA Astrophysics Data System (ADS)
Louzada, H. L. C.
2018-05-01
We study Einstein-Hilbert gravitation with a deformed Heisenberg algebra leading to a minimum length, with the aim of finding and estimating the corrections in this theory and clarifying whether or not it is possible to obtain, by means of the minimum length, a theory in D=4 that is causal, unitary, and provides a massive graviton. To this end, we calculate and analyze the dispersion relations of the theory.
Tsubakita, Takashi; Shimazaki, Kazuyo; Ito, Hiroshi; Kawazoe, Nobuo
2017-10-30
The Utrecht Work Engagement Scale for Students has been used internationally to assess students' academic engagement, but it has not been analyzed via item response theory. The purpose of this study was to conduct an item response theory analysis of the Japanese version of the Utrecht Work Engagement Scale for Students translated by the authors. Using a two-parameter model and Samejima's graded response model, difficulty and discrimination parameters were estimated after confirming the factor structure of the scale. The 14 items on the scale were analyzed with a sample of 3214 university and college students majoring in medical science, nursing, or natural science in Japan. The preliminary parameter estimation was conducted with the two-parameter model and indicated that three items should be removed because of outlier parameters. Final parameter estimation was conducted using the remaining 11 items and indicated that all difficulty and discrimination parameters were acceptable. The test information curve suggested that the scale better assesses higher engagement than average engagement. The estimated parameters provide a basis for future comparative studies. The results also suggested that a 7-point Likert scale is too broad; thus, the scale should be modified to use fewer response categories.
NASA Astrophysics Data System (ADS)
Ghanbarian, Behzad; Berg, Carl F.
2017-09-01
Accurate quantification of the formation resistivity factor F (also called formation factor) provides useful insight into connectivity and pore space topology in fully saturated porous media. In particular the formation factor has been extensively used to estimate permeability in reservoir rocks. One of the widely applied models to estimate F is Archie's law (F = ϕ^(-m), in which ϕ is total porosity and m is the cementation exponent), which is known to be valid in rocks with negligible clay content, such as clean sandstones. In this study we compare formation factors determined by percolation and effective-medium theories as well as Archie's law with numerical simulations of electrical resistivity on digital rock models. These digital models represent Bentheimer and Fontainebleau sandstones and are derived either by reconstruction or directly from micro-tomographic images. Results show that the universal quadratic power law from percolation theory accurately estimates the calculated formation factor values in network models over the entire range of porosity. However, it crosses over to the linear scaling from the effective-medium approximation at a porosity of 0.75 in grid models. We also show that the effect of critical porosity, disregarded in Archie's law, is nontrivial, and that the Archie model inaccurately estimates the formation factor in low-porosity homogeneous sandstones.
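The contrast between the two scalings can be sketched numerically. The cementation exponent, critical porosity, and prefactor below are illustrative assumptions rather than values fitted to the Bentheimer or Fontainebleau models.

```python
import numpy as np

def archie_F(phi, m=2.0):
    # Archie's law: F = phi**(-m); cementation exponent m assumed.
    return phi ** (-m)

def percolation_F(phi, phi_c=0.03, a=1.0):
    # Universal quadratic scaling from percolation theory:
    # F proportional to (phi - phi_c)**(-2); prefactor and critical
    # porosity are assumed here.
    return a * (phi - phi_c) ** (-2.0)

for phi in (0.05, 0.10, 0.20, 0.30):
    print(f"phi={phi:.2f}  Archie F={archie_F(phi):8.1f}  "
          f"percolation F={percolation_F(phi):8.1f}")
# At low porosity the critical porosity neglected by Archie's law makes
# the two predictions diverge, which is where Archie's law is reported
# to be least accurate.
```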
Item Response Theory Modeling of the Philadelphia Naming Test.
Fergadiotis, Gerasimos; Kellough, Stacey; Hula, William D
2015-06-01
In this study, we investigated the fit of the Philadelphia Naming Test (PNT; Roach, Schwartz, Martin, Grewal, & Brecher, 1996) to an item-response-theory measurement model, estimated the precision of the resulting scores and item parameters, and provided a theoretical rationale for the interpretation of PNT overall scores by relating explanatory variables to item difficulty. This article describes the statistical model underlying the computer adaptive PNT presented in a companion article (Hula, Kellough, & Fergadiotis, 2015). Using archival data, we evaluated the fit of the PNT to 1- and 2-parameter logistic models and examined the precision of the resulting parameter estimates. We regressed the item difficulty estimates on three predictor variables: word length, age of acquisition, and contextual diversity. The 2-parameter logistic model demonstrated marginally better fit, but the fit of the 1-parameter logistic model was adequate. Precision was excellent for both person ability and item difficulty estimates. Word length, age of acquisition, and contextual diversity all independently contributed to variance in item difficulty. Item-response-theory methods can be productively used to analyze and quantify anomia severity in aphasia. Regression of item difficulty on lexical variables supported the validity of the PNT and interpretation of anomia severity scores in the context of current word-finding models.
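For concreteness, the sketch below evaluates the 2-parameter logistic item response function underlying the fitted model; the item parameters are hypothetical, not PNT estimates.

```python
import numpy as np

def p_correct_2pl(theta, a, b):
    """2-parameter logistic item response function."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# Two hypothetical naming items: an easy item and a hard item.
theta = np.linspace(-3, 3, 7)            # naming ability (logits)
for a, b, label in [(1.0, -1.0, "easy item"), (1.2, 1.5, "hard item")]:
    probs = p_correct_2pl(theta, a, b).round(2)
    print(label, dict(zip(theta.round(1), probs)))
# Constraining all discriminations a to be equal recovers the
# 1-parameter logistic model referred to in the abstract.
```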
Neurology objective structured clinical examination reliability using generalizability theory
Park, Yoon Soo; Lukas, Rimas V.; Brorson, James R.
2015-01-01
Objectives: This study examines factors affecting reliability, or consistency of assessment scores, from an objective structured clinical examination (OSCE) in neurology through generalizability theory (G theory). Methods: Data include assessments from a multistation OSCE taken by 194 medical students at the completion of a neurology clerkship. Facets evaluated in this study include cases, domains, and items. Domains refer to areas of skill (or constructs) that the OSCE measures. G theory is used to estimate variance components associated with each facet, derive reliability, and project the number of cases required to obtain a reliable (consistent, precise) score. Results: Reliability using G theory is moderate (Φ coefficient = 0.61, G coefficient = 0.64). Performance is similar across cases but differs by the particular domain, such that the majority of variance is attributed to the domain. Projections in reliability estimates reveal that students need to participate in 3 OSCE cases in order to increase reliability beyond the 0.70 threshold. Conclusions: This novel use of G theory in evaluating an OSCE in neurology provides meaningful measurement characteristics of the assessment. Differing from prior work in other medical specialties, the cases students were randomly assigned did not influence their OSCE score; rather, scores varied in expected fashion by domain assessed. PMID:26432851
Neurology objective structured clinical examination reliability using generalizability theory.
Blood, Angela D; Park, Yoon Soo; Lukas, Rimas V; Brorson, James R
2015-11-03
This study examines factors affecting reliability, or consistency of assessment scores, from an objective structured clinical examination (OSCE) in neurology through generalizability theory (G theory). Data include assessments from a multistation OSCE taken by 194 medical students at the completion of a neurology clerkship. Facets evaluated in this study include cases, domains, and items. Domains refer to areas of skill (or constructs) that the OSCE measures. G theory is used to estimate variance components associated with each facet, derive reliability, and project the number of cases required to obtain a reliable (consistent, precise) score. Reliability using G theory is moderate (Φ coefficient = 0.61, G coefficient = 0.64). Performance is similar across cases but differs by the particular domain, such that the majority of variance is attributed to the domain. Projections in reliability estimates reveal that students need to participate in 3 OSCE cases in order to increase reliability beyond the 0.70 threshold. This novel use of G theory in evaluating an OSCE in neurology provides meaningful measurement characteristics of the assessment. Differing from prior work in other medical specialties, the cases students were randomly assigned did not influence their OSCE score; rather, scores varied in expected fashion by domain assessed. © 2015 American Academy of Neurology.
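A simplified sketch of the generalizability-theory computation in a persons-crossed-with-cases design is shown below. The variance components are made-up numbers, not the study's estimates; they serve only to show how the G and Φ coefficients and the decision-study projection over the number of cases are obtained.

```python
# Simplified p x c (persons crossed with cases) generalizability sketch.
# Variance components are hypothetical, not the study's estimates.
var_p  = 0.30   # persons (universe-score variance)
var_c  = 0.05   # cases
var_pc = 0.30   # person-by-case interaction plus residual error

def g_coefficient(n_cases):
    # Relative (norm-referenced) reliability.
    return var_p / (var_p + var_pc / n_cases)

def phi_coefficient(n_cases):
    # Absolute (criterion-referenced) dependability.
    return var_p / (var_p + (var_c + var_pc) / n_cases)

for n in (1, 2, 3, 5, 8):
    print(f"{n} cases:  G = {g_coefficient(n):.2f}   Phi = {phi_coefficient(n):.2f}")
# The decision-study projection shows how many cases are needed to push
# dependability past a chosen threshold such as 0.70.
```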
Testing students' e-learning via Facebook through Bayesian structural equation modeling.
Salarzadeh Jenatabadi, Hashem; Moghavvemi, Sedigheh; Wan Mohamed Radzi, Che Wan Jasimah Bt; Babashamsi, Parastoo; Arashi, Mohammad
2017-01-01
Learning is an intentional activity, with several factors affecting students' intention to use new learning technology. Researchers have investigated technology acceptance in different contexts by developing various theories/models and testing them by a number of means. Although most theories/models developed have been examined through regression or structural equation modeling, Bayesian analysis offers more accurate data analysis results. To address this gap, the unified theory of acceptance and technology use in the context of e-learning via Facebook are re-examined in this study using Bayesian analysis. The data (S1 Data) were collected from 170 students enrolled in a business statistics course at University of Malaya, Malaysia, and tested with the maximum likelihood and Bayesian approaches. The difference between the two methods' results indicates that performance expectancy and hedonic motivation are the strongest factors influencing the intention to use e-learning via Facebook. The Bayesian estimation model exhibited better data fit than the maximum likelihood estimator model. The results of the Bayesian and maximum likelihood estimator approaches are compared and the reasons for the result discrepancy are deliberated.
Testing students’ e-learning via Facebook through Bayesian structural equation modeling
Moghavvemi, Sedigheh; Wan Mohamed Radzi, Che Wan Jasimah Bt; Babashamsi, Parastoo; Arashi, Mohammad
2017-01-01
Learning is an intentional activity, with several factors affecting students’ intention to use new learning technology. Researchers have investigated technology acceptance in different contexts by developing various theories/models and testing them by a number of means. Although most theories/models developed have been examined through regression or structural equation modeling, Bayesian analysis offers more accurate data analysis results. To address this gap, the unified theory of acceptance and technology use in the context of e-learning via Facebook are re-examined in this study using Bayesian analysis. The data (S1 Data) were collected from 170 students enrolled in a business statistics course at University of Malaya, Malaysia, and tested with the maximum likelihood and Bayesian approaches. The difference between the two methods’ results indicates that performance expectancy and hedonic motivation are the strongest factors influencing the intention to use e-learning via Facebook. The Bayesian estimation model exhibited better data fit than the maximum likelihood estimator model. The results of the Bayesian and maximum likelihood estimator approaches are compared and the reasons for the result discrepancy are deliberated. PMID:28886019
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang, Y. M.; College of Physics and Technology, Guangxi Normal University, Guilin, GuangXi; Chen, L. Y.
2014-05-07
A remarkable magnetostriction λ111 as large as 6700 ppm was found at 70 K in PrFe1.9 alloy. This value is even larger than the theoretical maximum of 5600 ppm estimated by Stevens' equivalent operator method. The temperature dependence of λ111 for PrFe1.9 and TbFe2 alloys follows the single-ion theory well, which yields giant estimated λ111 values of about 8000 and 4200 ppm for PrFe1.9 and TbFe2 alloys, respectively, at 0 K. The easy magnetization direction of PrFe1.9 changes from [111] to [100] as temperature decreases, which leads to the abnormal decrease of the magnetostriction λ. The rare earth sublattice moment increases sharply in PrFe1.9 alloy with decreasing temperature, resulting in the remarkably large estimated value of λ111 at 0 K according to the single-ion theory.
Estimating seat belt effectiveness using matched-pair cohort methods.
Cummings, Peter; Wells, James D; Rivara, Frederick P
2003-01-01
Using US data for 1986-1998 fatal crashes, we employed matched-pair analysis methods to estimate that the relative risk of death among belted compared with unbelted occupants was 0.39 (95% confidence interval (CI) 0.37-0.41). This differs from relative risk estimates of about 0.55 in studies that used crash data collected prior to 1986. Using 1975-1998 data, we examined and rejected three theories that might explain the difference between our estimate and older estimates: (1) differences in the analysis methods; (2) changes related to car model year; (3) changes in crash characteristics over time. A fourth theory, that the introduction of seat belt laws would induce some survivors to claim belt use when they were not restrained, could explain part of the difference in our estimate and older estimates; but even in states without seat belt laws, from 1986 through 1998, the relative risk estimate was 0.45 (95% CI 0.39-0.52). All of the difference between our estimate and older estimates could be explained by some misclassification of seat belt use. Relative risk estimates would move away from 1, toward their true value, if misclassification of both the belted and unbelted decreased over time, or if the degree of misclassification remained constant, as the prevalence of belt use increased. We conclude that estimates of seat belt effects based upon data prior to 1986 may be biased toward 1 by misclassification.
ERIC Educational Resources Information Center
Finch, Holmes
2010-01-01
The accuracy of item parameter estimates in the multidimensional item response theory (MIRT) model context is one that has not been researched in great detail. This study examines the ability of two confirmatory factor analysis models specifically for dichotomous data to properly estimate item parameters using common formulae for converting factor…
Estimating Marginal Returns to Education. NBER Working Paper No. 16474
ERIC Educational Resources Information Center
Carneiro, Pedro; Heckman, James J.; Vytlacil, Edward J.
2010-01-01
This paper estimates the marginal returns to college for individuals induced to enroll in college by different marginal policy changes. The recent instrumental variables literature seeks to estimate this parameter, but in general it does so only under strong assumptions that are tested and found wanting. We show how to utilize economic theory and…
Vehicle Sprung Mass Estimation for Rough Terrain
2011-03-01
… distributions are greater than zero. The multivariate polynomials are functions of the Legendre polynomials (Poularikas, 1999) … developed methods based on polynomial chaos theory and on the maximum likelihood approach to estimate the most likely value of the vehicle sprung mass. The polynomial chaos estimator is compared to benchmark algorithms including recursive least squares, recursive total least squares, extended …
Robustness of Value-Added Analysis of School Effectiveness. Research Report. ETS RR-08-22
ERIC Educational Resources Information Center
Braun, Henry; Qu, Yanxuan
2008-01-01
This paper reports on a study conducted to investigate the consistency of the results between 2 approaches to estimating school effectiveness through value-added modeling. Estimates of school effects from the layered model employing item response theory (IRT) scaled data are compared to estimates derived from a discrete growth model based on the…
Uncertainties in the Item Parameter Estimates and Robust Automated Test Assembly
ERIC Educational Resources Information Center
Veldkamp, Bernard P.; Matteucci, Mariagiulia; de Jong, Martijn G.
2013-01-01
Item response theory parameters have to be estimated, and because of the estimation process, they do have uncertainty in them. In most large-scale testing programs, the parameters are stored in item banks, and automated test assembly algorithms are applied to assemble operational test forms. These algorithms treat item parameters as fixed values,…
An Analysis of Variance Approach for the Estimation of Response Time Distributions in Tests
ERIC Educational Resources Information Center
Attali, Yigal
2010-01-01
Generalizability theory and analysis of variance methods are employed, together with the concept of objective time pressure, to estimate response time distributions and the degree of time pressure in timed tests. By estimating response time variance components due to person, item, and their interaction, and fixed effects due to item types and…
A Note on the Reliability Coefficients for Item Response Model-Based Ability Estimates
ERIC Educational Resources Information Center
Kim, Seonghoon
2012-01-01
Assuming item parameters on a test are known constants, the reliability coefficient for item response theory (IRT) ability estimates is defined for a population of examinees in two different ways: as (a) the product-moment correlation between ability estimates on two parallel forms of a test and (b) the squared correlation between the true…
ERIC Educational Resources Information Center
Wang, Wen-Chung
2004-01-01
The Pearson correlation is used to depict effect sizes in the context of item response theory. A multidimensional Rasch model is used to directly estimate the correlation between latent traits. Monte Carlo simulations were conducted to investigate whether the population correlation could be accurately estimated and whether the bootstrap method…
Explicating Individual Training Decisions
ERIC Educational Resources Information Center
Walter, Marcel; Mueller, Normann
2015-01-01
In this paper, we explicate individual training decisions. For this purpose, we propose a framework based on instrumentality theory, a psychological theory of motivation that has frequently been applied to individual occupational behavior. To test this framework, we employ novel German individual data and estimate the effect of subjective expected…
DOT National Transportation Integrated Search
2009-10-19
We used signal detection theory to examine if grade crossing warning devices were effective because they increased drivers' sensitivity to a train's approach or because they encouraged drivers to stop. We estimated d' and a for eight warning devices ...
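As a rough illustration of the signal detection framework mentioned above, the snippet below computes d' from hit and false-alarm counts using the standard equal-variance Gaussian formula; the counts and the log-linear correction are illustrative assumptions, not values from the study.

    from scipy.stats import norm

    def d_prime(hits, misses, false_alarms, correct_rejections):
        """d' = z(hit rate) - z(false-alarm rate), with a log-linear correction
        (add 0.5 to each cell) so the z-transform stays finite at rates of 0 or 1."""
        hit_rate = (hits + 0.5) / (hits + misses + 1.0)
        fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
        return norm.ppf(hit_rate) - norm.ppf(fa_rate)

    # Hypothetical counts for one warning device.
    print(d_prime(hits=80, misses=20, false_alarms=10, correct_rejections=90))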
A parameter estimation algorithm for spatial sine testing - Theory and evaluation
NASA Technical Reports Server (NTRS)
Rost, R. W.; Deblauwe, F.
1992-01-01
This paper presents the theory and an evaluation of a spatial sine testing parameter estimation algorithm that directly uses the measured forced mode of vibration and the measured force vector. The parameter estimation algorithm uses an ARMA model, and a recursive QR algorithm is applied for data reduction. In this first evaluation, the algorithm has been applied to a frequency response matrix (which is a particular set of forced modes of vibration) using a sliding frequency window. The objective of the sliding frequency window is to execute the analysis simultaneously with the data acquisition. Since the pole values and the modal density are obtained from this analysis during the acquisition, the analysis information can be used to help determine the forcing vectors during the experimental data acquisition.
NASA Astrophysics Data System (ADS)
Jiao, J.; Trautz, A.; Zhang, Y.; Illangasekera, T.
2017-12-01
Subsurface flow and transport characterization under data-sparse conditions is addressed by a new and computationally efficient inverse theory that simultaneously estimates parameters, state variables, and boundary conditions. Uncertainty in static data can be accounted for while parameter structure can be complex due to process uncertainty. The approach has been successfully extended to inverting transient and unsaturated flows as well as contaminant source identification under unknown initial and boundary conditions. In one example, by sampling numerical experiments simulating two-dimensional steady-state flow in which a tracer migrates, a sequential inversion scheme first estimates the flow field and permeability structure before the evolution of the tracer plume and dispersivities are jointly estimated. Compared to traditional inversion techniques, the theory does not use forward simulations to assess model-data misfits, so knowledge of the difficult-to-determine site boundary condition is not required. To test the general applicability of the theory, data generated during high-precision intermediate-scale experiments (i.e., a scale intermediate between the field and column scales) in large synthetic aquifers can be used. The design of such experiments is not trivial, as laboratory conditions have to be selected to mimic natural systems in order to provide useful data, thus requiring a variety of sensors and data collection strategies. This paper presents the design of such an experiment in a synthetic, multi-layered aquifer with dimensions of 242.7 x 119.3 x 7.7 cm. Different experimental scenarios that will generate data to validate the theory are presented.
NASA Astrophysics Data System (ADS)
Nagaoka, Hiroshi
We study the problem of minimizing a quadratic quantity defined for two given Hermitian matrices X, Y and a positive-definite Hermitian matrix. This problem reduces to the simultaneous diagonalization of X and Y when XY = YX. We derive a lower bound for the quantity and, in some special cases, solve the problem by showing that the lower bound is achievable. This problem is closely related to the simultaneous measurement of non-commuting quantum mechanical observables and has an application in the theory of quantum state estimation.
Inclusion of Theta(12) dependence in the Coulomb-dipole theory of the ionization threshold
NASA Technical Reports Server (NTRS)
Srivastava, M. K.; Temkin, A.
1991-01-01
The Coulomb-dipole (CD) theory of the electron-atom impact-ionization threshold law is extended to include the full electronic repulsion. It is found that the threshold law is altered to a form that contrasts with the previous angle-independent model. A second energy regime is also identified, wherein the 'threshold' law reverts to its angle-independent form. In the final part of the paper the dipole parameter is estimated to be about 28. This yields numerical estimates of E(a) of about 0.0003 and E(b) of about 0.25 eV.
Parameter estimation in nonlinear distributed systems - Approximation theory and convergence results
NASA Technical Reports Server (NTRS)
Banks, H. T.; Reich, Simeon; Rosen, I. G.
1988-01-01
An abstract approximation framework and convergence theory is described for Galerkin approximations applied to inverse problems involving nonlinear distributed parameter systems. Parameter estimation problems are considered and formulated as the minimization of a least-squares-like performance index over a compact admissible parameter set subject to state constraints given by an inhomogeneous nonlinear distributed system. The theory applies to systems whose dynamics can be described by either time-independent or nonstationary strongly maximal monotonic operators defined on a reflexive Banach space which is densely and continuously embedded in a Hilbert space. It is demonstrated that if readily verifiable conditions on the system's dependence on the unknown parameters are satisfied, and the usual Galerkin approximation assumption holds, then solutions to the approximating problems exist and approximate a solution to the original infinite-dimensional identification problem.
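Schematically, and with generic notation that is not taken from the paper, the identification problem described above can be written as

    \min_{q \in Q_{\mathrm{ad}}} \; J(q) = \sum_{i=1}^{m} \bigl\| \mathcal{C}\, u(t_i; q) - z_i \bigr\|^{2}
    \quad \text{subject to} \quad \dot{u}(t) + A(q)\, u(t) = f(t), \qquad u(0) = u_0 ,

where the z_i are observations, \mathcal{C} is an observation operator, Q_ad is the compact admissible parameter set, and A(q) is the (possibly nonstationary) monotone operator governing the dynamics; the Galerkin step replaces u by an approximation u^N in a finite-dimensional subspace, producing approximating problems whose solutions converge to a solution of the infinite-dimensional problem.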
Kinetic theory for DNA melting with vibrational entropy
NASA Astrophysics Data System (ADS)
Sensale, Sebastian; Peng, Zhangli; Chang, Hsueh-Chia
2017-10-01
By treating DNA as a vibrating nonlinear lattice, an activated kinetic theory for DNA melting is developed to capture the breakage of the hydrogen bonds and subsequent softening of torsional and bending vibration modes. With a coarse-grained lattice model, we identify a key bending mode with GHz frequency that replaces the hydrogen vibration modes as the dominant out-of-phase phonon vibration at the transition state. By associating its bending modulus to a universal in-phase bending vibration modulus at equilibrium, we can hence estimate the entropic change in the out-of-phase vibration from near-equilibrium all-atom simulations. This and estimates of torsional and bending entropy changes lead to the first predictive and sequence-dependent theory with good quantitative agreement with experimental data for the activation energy of melting of short DNA molecules without intermediate hairpin structures.
Structure and energetics of Cr(CO)6 and Cr(CO)5
NASA Technical Reports Server (NTRS)
Barnes, Leslie A.; Liu, Bowen; Lindh, Roland
1993-01-01
The geometric structures and energetics of Cr(CO)6 and Cr(CO)5 are determined at the modified coupled-pair functional, single and double excitation coupled-cluster (CCSD), and CCSD(T) levels of theory. For Cr(CO)6, the structure and force constants for the totally symmetric representation are in good agreement with experimental data once basis set constants are taken into account. In the largest basis set at the CCSD(T) level of theory, the total binding energy of Cr(CO)6 is estimated at around 140 kcal/mol, or about 86 percent of the experimental value. In contrast, the first bond energy of Cr(CO)6 is very well described at the CCSD(T) level of theory, with the best estimated value of 38 kcal/mol being within the experimental uncertainty.
NASA Astrophysics Data System (ADS)
Tadano, Terumasa; Tsuneyuki, Shinji
2015-12-01
We present a first-principles approach for analyzing anharmonic properties of lattice vibrations in solids. We first extract harmonic and anharmonic force constants from accurate first-principles calculations based on density functional theory. Using the many-body perturbation theory of phonons, we then estimate the phonon scattering probability due to anharmonic phonon-phonon interactions. We show the validity of the approach by computing the lattice thermal conductivity of Si, a typical covalent semiconductor, and selected thermoelectric materials PbTe and Bi2Te3 based on the Boltzmann transport equation. We also show that the phonon lifetime and the lattice thermal conductivity of the high-temperature phase of SrTiO3 can be estimated by employing the perturbation theory on top of the solution of the self-consistent phonon equation.
M-Estimation for Discrete Data. Asymptotic Distribution Theory and Implications.
1985-10-01
... outlying data points, can be specified in a direct way since the influence function of an M-estimator is proportional to its score function; see Hampel ... consistently estimates θ when the model is correct. Suppose now that θ ∈ R¹. The influence function at F of an M-estimator for θ has the form ... This is assuming, of course, that the estimator is asymptotically normal at F_θ. The truncation point c(F) determines the bounds ...
M-Estimation for Discrete Data: Asymptotic Distribution Theory and Implications.
1985-11-01
... the influence function of an M-estimator is proportional to its score function; see Hampel (1974) or Huber (1981) for details. Surprisingly, M... consistently estimates θ when the model is correct. Suppose now that θ ∈ R. The influence function at F of an M-estimator for θ has the form ... variance and the bound on the influence function at F. This is assuming, of course, that the estimator is asymptotically normal at F_θ. ... The truncation ...
Modeling Incorrect Responses to Multiple-Choice Items with Multilinear Formula Score Theory.
ERIC Educational Resources Information Center
Drasgow, Fritz; And Others
This paper addresses the information revealed in incorrect option selection on multiple choice items. Multilinear Formula Scoring (MFS), a theory providing methods for solving psychological measurement problems of long standing, is first used to estimate option characteristic curves for the Armed Services Vocational Aptitude Battery Arithmetic…
ERIC Educational Resources Information Center
Grochowalski, Joseph H.
2015-01-01
Component Universe Score Profile analysis (CUSP) is introduced in this paper as a psychometric alternative to multivariate profile analysis. The theoretical foundations of CUSP analysis are reviewed, which include multivariate generalizability theory and constrained principal components analysis. Because CUSP is a combination of generalizability…
Estimating Solar Proton Flux at LEO From a Geomagnetic Cutoff Model
2015-07-14
... simple shadow cones (using nomenclature from Stormer theory of particle motion in a dipole magnetic field [6]) that result from particle trajectories ... basic Stormer theory [7]. However, in LEO the changes would be small relative to uncertainties in the model and therefore unnecessary. If the model were ...
The Long-Term Sustainability of Different Item Response Theory Scaling Methods
ERIC Educational Resources Information Center
Keller, Lisa A.; Keller, Robert R.
2011-01-01
This article investigates the accuracy of examinee classification into performance categories and the estimation of the theta parameter for several item response theory (IRT) scaling techniques when applied to six administrations of a test. Previous research has investigated only two administrations; however, many testing programs equate tests…
Bayesian Estimation of Multi-Unidimensional Graded Response IRT Models
ERIC Educational Resources Information Center
Kuo, Tzu-Chun
2015-01-01
Item response theory (IRT) has gained an increasing popularity in large-scale educational and psychological testing situations because of its theoretical advantages over classical test theory. Unidimensional graded response models (GRMs) are useful when polytomous response items are designed to measure a unified latent trait. They are limited in…
Cognitive Diagnostic Attribute-Level Discrimination Indices
ERIC Educational Resources Information Center
Henson, Robert; Roussos, Louis; Douglas, Jeff; He, Xuming
2008-01-01
Cognitive diagnostic models (CDMs) model the probability of correctly answering an item as a function of an examinee's attribute mastery pattern. Because estimation of the mastery pattern involves more than a continuous measure of ability, reliability concepts introduced by classical test theory and item response theory do not apply. The cognitive…
NASA Technical Reports Server (NTRS)
Rodriguez, G. (Editor)
1983-01-01
Two general themes in the control of large space structures are addressed: control theory for distributed parameter systems and distributed control for systems requiring spatially-distributed multipoint sensing and actuation. Topics include modeling and control, stabilization, and estimation and identification.
Classification Consistency and Accuracy for Complex Assessments Using Item Response Theory
ERIC Educational Resources Information Center
Lee, Won-Chan
2010-01-01
In this article, procedures are described for estimating single-administration classification consistency and accuracy indices for complex assessments using item response theory (IRT). This IRT approach was applied to real test data comprising dichotomous and polytomous items. Several different IRT model combinations were considered. Comparisons…
Correlation between UV and IR cutoffs in quantum field theory and large extra dimensions
NASA Astrophysics Data System (ADS)
Cortés, J. L.
1999-04-01
A recently conjectured relationship between UV and IR cutoffs in an effective field theory without quantum gravity is generalized in the presence of large extra dimensions. Estimates for the corrections to the usual calculation of observables within quantum field theory are used to put very stringent limits, in some cases, on the characteristic scale of the additional compactified dimensions. Implications for the cosmological constant problem are also discussed.
Sonar Performance Estimation Model with Seismo-Acoustic Effects on Underwater Sound Propagation
1989-06-27
... properties of the bottom sediments. Ray theory is highly satisfactory for predicting and explaining some electromagnetic phenomena, and it is very useful in ... erroneous transmission loss computations where acoustic interference occurs. However, his transmission loss calculations are made using ray theory, which is ... developed which treat some of these properties. Each model has its virtues and limitations. For high-frequency sound propagation, ray theory can ...
NASA Astrophysics Data System (ADS)
Iskandar, Ismed; Satria Gondokaryono, Yudi
2016-02-01
In reliability theory, the most important problem is to determine the reliability of a complex system from the reliability of its components. The weakness of most reliability theories is that systems are described and explained as simply functioning or failed. In many real situations, failures may arise from many causes, depending upon the age and the environment of the system and its components. Another problem in reliability theory is estimating the parameters of the assumed failure models. The estimation may be based on data collected from censored or uncensored life tests. In many reliability problems, the failure data are simply quantitatively inadequate, especially in engineering design and maintenance systems. Bayesian analyses are more beneficial than classical ones in such cases, because they allow us to combine past knowledge or experience, in the form of an a priori distribution, with life test data to make inferences about the parameter of interest. In this paper, we investigate the application of Bayesian estimation to competing risk systems. The cases are limited to models with independent causes of failure, using the Weibull distribution as our model. A simulation is conducted for this distribution with the objectives of verifying the models and the estimators and of investigating the performance of the estimators for varying sample sizes. The simulation data are analyzed using Bayesian and maximum likelihood analyses. The simulation results show that changing the true value of one parameter relative to another changes the standard deviation in the opposite direction. Given perfect information on the prior distribution, the Bayesian estimates are better than the maximum likelihood estimates. The sensitivity analyses show some sensitivity to shifts in the prior location, and also show the robustness of the Bayesian analysis within the range between the true value and the maximum likelihood estimate.
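To make the contrast between Bayesian and maximum likelihood estimation concrete, here is a toy sketch for a single Weibull failure mode; it is not the competing-risk model of the paper, and the prior, sample size, and true parameters are illustrative assumptions.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    shape_true, scale_true = 1.8, 100.0
    data = scale_true * rng.weibull(shape_true, size=30)   # simulated lifetimes

    # Maximum likelihood estimate (location fixed at zero).
    shape_mle, _, scale_mle = stats.weibull_min.fit(data, floc=0)

    # Grid-based Bayesian estimate of the scale, with the shape held at its MLE
    # and a flat prior on the scale over a broad range (illustrative choices).
    scales = np.linspace(50.0, 200.0, 1000)
    log_post = np.array([stats.weibull_min.logpdf(data, shape_mle, scale=s).sum()
                         for s in scales])                 # log-likelihood + flat log-prior
    post = np.exp(log_post - log_post.max())
    ds = scales[1] - scales[0]
    post /= post.sum() * ds                                # normalize on the grid
    scale_bayes = (scales * post).sum() * ds               # posterior mean

    print(f"MLE: shape={shape_mle:.2f}, scale={scale_mle:.1f}; "
          f"posterior-mean scale={scale_bayes:.1f}")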
Flynn, Terry N; Louviere, Jordan J; Marley, Anthony AJ; Coast, Joanna; Peters, Tim J
2008-01-01
Background Researchers are increasingly investigating the potential for ordinal tasks such as ranking and discrete choice experiments to estimate QALY health state values. However, the assumptions of random utility theory, which underpin the statistical models used to provide these estimates, have received insufficient attention. In particular, the assumptions made about the decisions between living states and the death state are not satisfied, at least for some people. Estimated values are likely to be incorrectly anchored with respect to death (zero) in such circumstances. Methods Data from the Investigating Choice Experiments for the preferences of older people CAPability instrument (ICECAP) valuation exercise were analysed. The values (previously anchored to the worst possible state) were rescaled using an ordinal model proposed previously to estimate QALY-like values. Bootstrapping was conducted to vary artificially the proportion of people who conformed to the conventional random utility model underpinning the analyses. Results Only 26% of respondents conformed unequivocally to the assumptions of conventional random utility theory. At least 14% of respondents unequivocally violated the assumptions. Varying the relative proportions of conforming respondents in sensitivity analyses led to large changes in the estimated QALY values, particularly for lower-valued states. As a result these values could be either positive (considered to be better than death) or negative (considered to be worse than death). Conclusion Use of a statistical model such as conditional (multinomial) regression to anchor quality of life values from ordinal data to death is inappropriate in the presence of respondents who do not conform to the assumptions of conventional random utility theory. This is clearest when estimating values for that group of respondents observed in valuation samples who refuse to consider any living state to be worse than death: in such circumstances the model cannot be estimated. Only a valuation task requiring respondents to make choices in which both length and quality of life vary can produce estimates that properly reflect the preferences of all respondents. PMID:18945358
Sved, John A; Cameron, Emilie C; Gilchrist, A Stuart
2013-01-01
There is a substantial literature on the use of linkage disequilibrium (LD) to estimate effective population size using unlinked loci. The Ne estimates are extremely sensitive to the sampling process, and there is currently no theory to cope with the possible biases. We derive formulae for the analysis of idealised populations mating at random with multi-allelic (microsatellite) loci. The 'Burrows composite index' is introduced in a novel way with a 'composite haplotype table'. We show that in a sample of diploid size S, the mean value of x² or r² from the composite haplotype table is biased by a factor of 1 - 1/(2S-1)², rather than the usual factor 1 + 1/(2S-1) for a conventional haplotype table. But analysis of population data using these formulae leads to Ne estimates that are unrealistically low. We provide theory and simulation to show that this bias towards low Ne estimates is due to null alleles, and introduce a randomised permutation correction to compensate for the bias. We also consider the effect of introducing a within-locus disequilibrium factor to r², and find that this factor leads to a bias in the Ne estimate. However this bias can be overcome using the same randomised permutation correction, to yield an altered r² with lower variance than the original r², and one that is also insensitive to null alleles. The resulting formulae are used to provide Ne estimates on 40 samples of the Queensland fruit fly, Bactrocera tryoni, from populations with widely divergent Ne expectations. Linkage relationships are known for most of the microsatellite loci in this species. We find that there is little difference in the estimated Ne values from using known unlinked loci as compared to using all loci, which is important for conservation studies where linkage relationships are unknown.
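As a rough numerical illustration of the bias factors quoted above, the snippet below corrects an observed mean r² for the composite-table bias and then inverts a commonly used drift approximation for unlinked loci; the drift formula and all numbers are assumptions for illustration, not results from the paper, and the null-allele permutation correction is not reproduced.

    def composite_bias_factor(S):
        """Bias factor for mean r^2 from a composite haplotype table (S diploids),
        as quoted in the abstract: 1 - 1/(2S-1)**2."""
        return 1.0 - 1.0 / (2 * S - 1) ** 2

    def ne_from_composite_r2(mean_r2, S):
        """Invert an assumed drift approximation E[r^2] ~ 1/(3*Ne) + 1/S.

        This approximation is an illustrative stand-in; published LD-Ne
        estimators use related but more refined expressions."""
        drift_r2 = mean_r2 / composite_bias_factor(S) - 1.0 / S
        return 1.0 / (3.0 * drift_r2)

    S = 50            # hypothetical diploid sample size
    mean_r2 = 0.024   # hypothetical mean composite r^2 over unlinked locus pairs
    print(ne_from_composite_r2(mean_r2, S))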
Combining statistical inference and decisions in ecology.
Williams, Perry J; Hooten, Mevin B
2016-09-01
Statistical decision theory (SDT) is a sub-field of decision theory that formally incorporates statistical investigation into a decision-theoretic framework to account for uncertainties in a decision problem. SDT provides a unifying analysis of three types of information: statistical results from a data set, knowledge of the consequences of potential choices (i.e., loss), and prior beliefs about a system. SDT links the theoretical development of a large body of statistical methods, including point estimation, hypothesis testing, and confidence interval estimation. The theory and application of SDT have mainly been developed and published in the fields of mathematics, statistics, operations research, and other decision sciences, but have had limited exposure in ecology. Thus, we provide an introduction to SDT for ecologists and describe its utility for linking the conventionally separate tasks of statistical investigation and decision making in a single framework. We describe the basic framework of both Bayesian and frequentist SDT, its traditional use in statistics, and discuss its application to decision problems that occur in ecology. We demonstrate SDT with two types of decisions: Bayesian point estimation and an applied management problem of selecting a prescribed fire rotation for managing a grassland bird species. Central to SDT, and decision theory in general, are loss functions. Thus, we also provide basic guidance and references for constructing loss functions for an SDT problem. © 2016 by the Ecological Society of America.
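To connect the pieces named above (a posterior distribution, a loss function, and a point estimate), here is a minimal sketch showing that the Bayes estimator depends on the loss: the posterior mean minimizes expected squared-error loss, while the posterior median minimizes expected absolute-error loss. The Beta-Binomial detection-probability example is an illustrative assumption, not the grassland-bird analysis from the paper.

    import numpy as np
    from scipy import stats

    # Hypothetical data: 7 detections in 20 surveys, with a flat Beta(1, 1) prior on p.
    successes, trials = 7, 20
    posterior = stats.beta(1 + successes, 1 + trials - successes)

    post_mean = posterior.mean()      # Bayes estimator under squared-error loss
    post_median = posterior.median()  # Bayes estimator under absolute-error loss

    # Numerical check that the mean does at least as well under squared-error loss.
    draws = posterior.rvs(100000, random_state=1)

    def expected_squared_loss(estimate):
        return np.mean((draws - estimate) ** 2)

    print(post_mean, post_median,
          expected_squared_loss(post_mean) <= expected_squared_loss(post_median))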
Error estimation in the neural network solution of ordinary differential equations.
Filici, Cristian
2010-06-01
In this article a method of error estimation for the neural approximation of the solution of an Ordinary Differential Equation is presented. Some examples of the application of the method support the theory presented. Copyright 2010. Published by Elsevier Ltd.
Bernard R. Parresol; Charles E. Thomas
1996-01-01
In the wood utilization industry, both stem profile and biomass are important quantities. The two have traditionally been estimated separately. The introduction of a density-integral method allows for coincident estimation of stem profile and biomass, based on the calculus of mass theory, and provides an alternative to weight-ratio methodology. In the initial...
A Two-Stage Approach to Missing Data: Theory and Application to Auxiliary Variables
ERIC Educational Resources Information Center
Savalei, Victoria; Bentler, Peter M.
2009-01-01
A well-known ad-hoc approach to conducting structural equation modeling with missing data is to obtain a saturated maximum likelihood (ML) estimate of the population covariance matrix and then to use this estimate in the complete data ML fitting function to obtain parameter estimates. This 2-stage (TS) approach is appealing because it minimizes a…
ERIC Educational Resources Information Center
Wu, Yi-Fang
2015-01-01
Item response theory (IRT) uses a family of statistical models for estimating stable characteristics of items and examinees and defining how these characteristics interact in describing item and test performance. With a focus on the three-parameter logistic IRT (Birnbaum, 1968; Lord, 1980) model, the current study examines the accuracy and…
NASA Astrophysics Data System (ADS)
Huang, Shih-Yu; Deng, Yi; Wang, Jingfeng
2017-09-01
The maximum-entropy-production (MEP) model of surface heat fluxes, based on contemporary non-equilibrium thermodynamics, information theory, and atmospheric turbulence theory, is used to re-estimate the global surface heat fluxes. The surface fluxes predicted by the MEP model automatically balance the surface energy budgets at all time and space scales without the explicit use of near-surface temperature and moisture gradients, wind speed, or surface roughness data. The new MEP-based global annual mean fluxes over the land surface, using input data of surface radiation and temperature from the National Aeronautics and Space Administration-Clouds and the Earth's Radiant Energy System (NASA CERES), supplemented by surface specific humidity data from the Modern-Era Retrospective Analysis for Research and Applications (MERRA), agree closely with previous estimates. The new estimate of ocean evaporation, not using the MERRA reanalysis data as model inputs, is lower than previous estimates, while the new estimate of ocean sensible heat flux is higher than previously reported. The MEP model also produces the first global map of ocean surface heat flux that is not available from existing global reanalysis products.
Local Composite Quantile Regression Smoothing for Harris Recurrent Markov Processes
Li, Degui; Li, Runze
2016-01-01
In this paper, we study the local polynomial composite quantile regression (CQR) smoothing method for the nonlinear and nonparametric models under the Harris recurrent Markov chain framework. The local polynomial CQR regression method is a robust alternative to the widely-used local polynomial method, and has been well studied in stationary time series. In this paper, we relax the stationarity restriction on the model, and allow that the regressors are generated by a general Harris recurrent Markov process which includes both the stationary (positive recurrent) and nonstationary (null recurrent) cases. Under some mild conditions, we establish the asymptotic theory for the proposed local polynomial CQR estimator of the mean regression function, and show that the convergence rate for the estimator in nonstationary case is slower than that in stationary case. Furthermore, a weighted type local polynomial CQR estimator is provided to improve the estimation efficiency, and a data-driven bandwidth selection is introduced to choose the optimal bandwidth involved in the nonparametric estimators. Finally, we give some numerical studies to examine the finite sample performance of the developed methodology and theory. PMID:27667894
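For reference, the local linear version of the CQR objective described above has the following standard form (the notation is generic rather than copied from the paper): with quantile levels \tau_k = k/(q+1) and check loss \rho_\tau(u) = u(\tau - \mathbf{1}\{u < 0\}), the estimator at a point x solves

    (\hat a_1, \dots, \hat a_q, \hat b) \;=\; \arg\min_{a_1,\dots,a_q,\, b}\;
    \sum_{k=1}^{q} \sum_{i=1}^{n} \rho_{\tau_k}\!\bigl( Y_i - a_k - b\,(X_i - x) \bigr)\,
    K\!\Bigl( \tfrac{X_i - x}{h} \Bigr),

and the mean regression function is then estimated by averaging the \hat a_k; the paper's contribution is the asymptotic theory for estimators of this type when the regressors form a Harris recurrent Markov chain.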
Performance and state-space analyses of systems using Petri nets
NASA Technical Reports Server (NTRS)
Watson, James Francis, III
1992-01-01
The goal of any modeling methodology is to develop a mathematical description of a system that is accurate in its representation and also permits analysis of structural and/or performance properties. Inherently, trade-offs exist between the level of detail in the model and the ease with which analysis can be performed. Petri nets (PN's), a highly graphical modeling methodology for Discrete Event Dynamic Systems, permit representation of shared resources, finite capacities, conflict, synchronization, concurrency, and timing between state changes. By restricting the state transition time delays to the family of exponential density functions, Markov chain analysis of performance problems is possible. One major drawback of PN's is the tendency for the state-space to grow rapidly (exponential complexity) compared to increases in the PN constructs. It is the state space, or the Markov chain obtained from it, that is needed in the solution of many problems. The theory of state-space size estimation for PN's is introduced. The problem of state-space size estimation is defined, its complexities are examined, and estimation algorithms are developed. Both top-down and bottom-up approaches are pursued, and the advantages and disadvantages of each are described. Additionally, the author's research in non-exponential transition modeling for PN's is discussed. An algorithm for approximating non-exponential transitions is developed. Since only basic PN constructs are used in the approximation, theory already developed for PN's remains applicable. Comparison to results from entropy theory shows the transition performance is close to the theoretic optimum. Inclusion of non-exponential transition approximations improves performance results at the expense of increased state-space size. The state-space size estimation theory provides insight and algorithms for evaluating this trade-off.
Different methodologies to quantify uncertainties of air emissions.
Romano, Daniela; Bernetti, Antonella; De Lauretis, Riccardo
2004-10-01
Characterization of the uncertainty associated with air emission estimates is of critical importance, especially in the compilation of air emission inventories. In this paper, two different theories are discussed and applied to evaluate air emissions uncertainty. In addition to numerical analysis, which is also recommended in the framework of the United Nations Convention on Climate Change guidelines with reference to Monte Carlo and Bootstrap simulation models, fuzzy analysis is also proposed. The methodologies are discussed and applied to an Italian example case study. Air concentration values are measured at two electric power plants: a coal plant consisting of two boilers and a fuel oil plant of four boilers; the pollutants considered are sulphur dioxide (SO2), nitrogen oxides (NOx), carbon monoxide (CO), and particulate matter (PM). Monte Carlo, Bootstrap, and fuzzy methods have been applied to estimate the uncertainty of these data. Regarding Monte Carlo, the most accurate results are obtained for Gaussian distributions; a good approximation is also observed for other distributions with almost regular features, either positively or negatively asymmetrical. Bootstrap, on the other hand, gives good uncertainty estimates for irregular and asymmetrical distributions. The logic of fuzzy analysis, in which data are represented as vague and indefinite in opposition to the traditional conception of neatness, certain classification, and exactness of the data, follows a different description. Beyond randomness (stochastic variability) alone, fuzzy theory deals with imprecision (vagueness) of data. The fuzzy variance of the data set was calculated; the results cannot be directly compared with empirical data, but the overall performance of the theory is analysed. Fuzzy theory may appear more suitable for qualitative reasoning than for quantitative estimation of uncertainty, but it is well suited when little information and few measurements are available and when the data distributions are not properly known.
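As a minimal illustration of the Bootstrap step mentioned above (the concentration values are invented, and the Monte Carlo and fuzzy analyses of the paper are not reproduced), the snippet resamples a small set of stack measurements to obtain a percentile interval for the mean concentration:

    import numpy as np

    rng = np.random.default_rng(42)
    # Hypothetical SO2 concentration measurements (mg/Nm3) from one boiler.
    conc = np.array([310.0, 295.0, 330.0, 305.0, 340.0, 298.0, 315.0, 322.0])

    # Nonparametric bootstrap of the mean: resample with replacement many times.
    boot_means = np.array([rng.choice(conc, size=conc.size, replace=True).mean()
                           for _ in range(10000)])
    low, high = np.percentile(boot_means, [2.5, 97.5])
    print(f"mean = {conc.mean():.1f}, 95% bootstrap interval = ({low:.1f}, {high:.1f})")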
Exact hierarchical clustering in one dimension. [in universe
NASA Technical Reports Server (NTRS)
Williams, B. G.; Heavens, A. F.; Peacock, J. A.; Shandarin, S. F.
1991-01-01
The present adhesion model-based one-dimensional simulations of gravitational clustering have yielded bound-object catalogs applicable in tests of analytical approaches to cosmological structure formation. Attention is given to Press-Schechter (1974) type functions, as well as to their density peak-theory modifications and the two-point correlation function estimated from peak theory. The extent to which individual collapsed-object locations can be predicted by linear theory is significant only for objects of near-characteristic nonlinear mass.
Improving Range Estimation of a 3-Dimensional Flash Ladar via Blind Deconvolution
2010-09-01
12 2.1.4 Optical Imaging as a Linear and Nonlinear System 15 2.1.5 Coherence Theory and Laser Light Statistics . . . 16 2.2 Deconvolution...rather than deconvolution. 2.1.5 Coherence Theory and Laser Light Statistics. Using [24] and [25], this section serves as background on coherence theory...the laser light incident on the detector surface. The image intensity related to different types of coherence is governed by the laser light’s spatial
Electromagnetic Dissociation Cross Sections using Weisskopf-Ewing Theory
NASA Technical Reports Server (NTRS)
Adamczyk, Anne M.; Norbury, John W.
2011-01-01
It is important that accurate estimates of crew exposure to radiation are obtained for future long-term space missions. Presently, several space radiation transport codes exist to predict the radiation environment, all of which take as input particle interaction cross sections that describe the nuclear interactions between the particles and the shielding material. The space radiation transport code HZETRN uses the nuclear fragmentation model NUCFRG2 to calculate Electromagnetic Dissociation (EMD) cross sections. Currently, NUCFRG2 employs energy independent branching ratios to calculate these cross sections. Using Weisskopf-Ewing (WE) theory to calculate branching ratios, however, is more advantageous than the method currently employed in NUCFRG2. The WE theory can calculate not only neutron and proton emission, as in the energy independent branching ratio formalism used in NUCFRG2, but also deuteron, triton, helion, and alpha particle emission. These particles can contribute significantly to total exposure estimates. In this work, photonuclear cross sections are calculated using WE theory and the energy independent branching ratios used in NUCFRG2 and then compared to experimental data. It is found that the WE theory gives comparable, but mainly better agreement with data than the energy independent branching ratio. Furthermore, EMD cross sections for single neutron, proton, and alpha particle removal are calculated using WE theory and an energy independent branching ratio used in NUCFRG2 and compared to experimental data.
Lensing-induced morphology changes in CMB temperature maps in modified gravity theories
DOE Office of Scientific and Technical Information (OSTI.GOV)
Munshi, D.; Coles, P.; Hu, B.
2016-04-01
Lensing of the Cosmic Microwave Background (CMB) changes the morphology of the pattern of temperature fluctuations, so topological descriptors such as Minkowski Functionals can probe the gravity model responsible for the lensing. We show how the recently introduced two-to-two and three-to-one kurt-spectra (and their associated correlation functions), which depend on the power spectrum of the lensing potential, can be used to probe modified gravity theories such as f(R) theories of gravity and quintessence models. We also investigate models based on effective field theory, which include the constant-Ω model and low-energy Hořava theories. Estimates of the cumulative signal-to-noise for detection of lensing-induced morphology changes reach O(10³) for the planned future CMB polarization mission COrE+. Assuming foreground removal is possible to ℓ_max = 3000, we show that many modified gravity theories can be rejected with a high level of significance, making this technique comparable in power to galaxy weak lensing or redshift surveys. These topological estimators are also useful in distinguishing lensing from other scattering secondaries at the level of the four-point function or trispectrum. Examples include the kinetic Sunyaev-Zel'dovich (kSZ) effect which shares, with lensing, a lack of spectral distortion. We also discuss the complication of foreground contamination from unsubtracted point sources.
Robust Fault Detection for Aircraft Using Mixed Structured Singular Value Theory and Fuzzy Logic
NASA Technical Reports Server (NTRS)
Collins, Emmanuel G.
2000-01-01
The purpose of fault detection is to identify when a fault or failure has occurred in a system such as an aircraft or expendable launch vehicle. The faults may occur in sensors, actuators, structural components, etc. One of the primary approaches to model-based fault detection relies on analytical redundancy. That is, the output of a computer-based model (actually a state estimator) is compared with the sensor measurements of the actual system to determine when a fault has occurred. Unfortunately, the state estimator is based on an idealized mathematical description of the underlying plant that is never totally accurate. As a result of these modeling errors, false alarms can occur. This research uses mixed structured singular value theory, a relatively recent and powerful robustness analysis tool, to develop robust estimators and demonstrates the use of these estimators in fault detection. To allow qualitative human experience to be effectively incorporated into the detection process, fuzzy logic is used to predict the seriousness of the fault that has occurred.
The Influence of Mean Trophic Level on Biomass and Production in Marine Ecosystems
NASA Astrophysics Data System (ADS)
Woodson, C. B.; Schramski, J.
2016-02-01
The oceans have faced rapid removal of top predators causing a reduction in the mean trophic level of many marine ecosystems due to fishing down the food web. However, estimating the pre-exploitation biomass of the ocean has been difficult. Historical population sizes have been estimated using population dynamics models, archaeological or historical records, fisheries data, living memory, ecological monitoring data, genetics, and metabolic theory. In this talk, we expand on the use of metabolic theory by including complex trophic webs to estimate pre-exploitation levels of marine biomass. Our results suggest that historical marine biomass could be as much as 10 times higher than current estimates and that the total carrying capacity of the ocean is sensitive to mean trophic level and trophic web complexity. We further show that the production levels needed to support the added biomass are possible due to biomass accumulation and predator-prey overlap in regions such as fronts. These results have important implications for marine biogeochemical cycling, fisheries management, and conservation efforts.
NASA Astrophysics Data System (ADS)
Herdeiro, Victor
2017-09-01
Herdeiro and Doyon [Phys. Rev. E 94, 043322 (2016), 10.1103/PhysRevE.94.043322] introduced a numerical recipe, dubbed uv sampler, offering precise estimations of the conformal field theory (CFT) data of the planar two-dimensional (2D) critical Ising model. It made use of scale invariance emerging at the critical point in order to sample finite sublattice marginals of the infinite plane Gibbs measure of the model by producing holographic boundary distributions. The main ingredient of the Markov chain Monte Carlo sampler is the invariance under dilation. This paper presents a generalization to higher dimensions with the critical 3D Ising model. This leads to numerical estimations of a subset of the CFT data—scaling weights and structure constants—through fitting of measured correlation functions. The results are shown to agree with the recent most precise estimations from numerical bootstrap methods [Kos, Poland, Simmons-Duffin, and Vichi, J. High Energy Phys. 08 (2016) 036, 10.1007/JHEP08(2016)036].
Improving Children’s Knowledge of Fraction Magnitudes
Fazio, Lisa K.; Kennedy, Casey A.; Siegler, Robert S.
2016-01-01
We examined whether playing a computerized fraction game, based on the integrated theory of numerical development and on the Common Core State Standards’ suggestions for teaching fractions, would improve children’s fraction magnitude understanding. Fourth and fifth-graders were given brief instruction about unit fractions and played Catch the Monster with Fractions, a game in which they estimated fraction locations on a number line and received feedback on the accuracy of their estimates. The intervention lasted less than 15 minutes. In our initial study, children showed large gains from pretest to posttest in their fraction number line estimates, magnitude comparisons, and recall accuracy. In a more rigorous second study, the experimental group showed similarly large improvements, whereas a control group showed no improvement from practicing fraction number line estimates without feedback. The results provide evidence for the effectiveness of interventions emphasizing fraction magnitudes and indicate how psychological theories and research can be used to evaluate specific recommendations of the Common Core State Standards. PMID:27768756
Control of AUVs using differential flatness theory and the derivative-free nonlinear Kalman Filter
NASA Astrophysics Data System (ADS)
Rigatos, Gerasimos; Raffo, Guilerme
2015-12-01
The paper proposes nonlinear control and filtering for Autonomous Underwater Vessels (AUVs) based on differential flatness theory and on the use of the Derivative-free nonlinear Kalman Filter. First, it is shown that the 6-DOF dynamic model of the AUV is a differentially flat one. This enables its transformation into the linear canonical (Brunovsky) form and facilitates the design of a state feedback controller. A problem that has to be dealt with is the uncertainty about the parameters of the AUV's dynamic model, as well as the external perturbations which affect its motion. To cope with this, it is proposed to use a disturbance observer which is based on the Derivative-free nonlinear Kalman Filter. The considered filtering method consists of the standard Kalman Filter recursion applied on the linearized model of the vessel and of an inverse transformation based on differential flatness theory, which makes it possible to obtain estimates of the state variables of the initial nonlinear model of the vessel. The Kalman Filter-based disturbance observer performs simultaneous estimation of the non-measurable state variables of the AUV and of the perturbation terms that affect its dynamics. By estimating such disturbances, their compensation is also achieved through suitable modification of the feedback control input. The efficiency of the proposed AUV control and estimation scheme is confirmed through simulation experiments.
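The disturbance-observer idea described above can be illustrated with a much simpler linear stand-in: a discrete-time Kalman filter whose state is augmented with an unknown, slowly varying disturbance, so the filter estimates the disturbance alongside the physical states. The double-integrator model and all noise levels below are illustrative assumptions, not the AUV model or the derivative-free filter of the paper.

    import numpy as np

    dt = 0.1
    # Augmented state [position, velocity, disturbance]; the disturbance enters the
    # velocity equation and is modeled as a (nearly constant) random walk.
    A = np.array([[1.0, dt, 0.0],
                  [0.0, 1.0, dt],
                  [0.0, 0.0, 1.0]])
    B = np.array([[0.0], [dt], [0.0]])
    H = np.array([[1.0, 0.0, 0.0]])          # only position is measured
    Q = np.diag([1e-5, 1e-4, 1e-3])          # process noise covariance
    R = np.array([[0.01]])                   # measurement noise covariance

    rng = np.random.default_rng(0)
    true_disturbance = 0.5                   # unknown constant to be recovered
    x_true = np.array([[0.0], [0.0], [true_disturbance]])
    x_hat = np.zeros((3, 1))
    P = np.eye(3)

    for _ in range(300):
        u = np.array([[0.2]])                # known control input
        x_true = A @ x_true + B @ u
        z = H @ x_true + rng.normal(0.0, 0.1, size=(1, 1))

        # Standard Kalman filter recursion: predict, then update.
        x_hat = A @ x_hat + B @ u
        P = A @ P @ A.T + Q
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        x_hat = x_hat + K @ (z - H @ x_hat)
        P = (np.eye(3) - K @ H) @ P

    print(f"estimated disturbance = {x_hat[2, 0]:.3f} (true value = {true_disturbance})")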
Propulsion of a fin whale (Balaenoptera physalus): why the fin whale is a fast swimmer.
Bose, N; Lien, J
1989-07-22
Measurements of an immature fin whale (Balaenoptera physalus), which died as a result of entrapment in fishing gear near Frenchmans Cove, Newfoundland (47 degrees 9' N, 55 degrees 25' W), were made to obtain estimates of volume and surface area of the animal. Detailed measurements of the flukes, both planform and sections, were also obtained. A strip theory was developed to calculate the hydrodynamic performance of the whale's flukes as an oscillating propeller. This method is based on linear, two-dimensional, small-amplitude, unsteady hydrofoil theory with correction factors used to account for the effects of finite span and finite amplitude motion. These correction factors were developed from theoretical results of large-amplitude heaving motion and unsteady lifting-surface theory. A model that makes an estimate of the effects of viscous flow on propeller performance was superimposed on the potential-flow results. This model estimates the drag of the hydrofoil sections by assuming that the drag is similar to that of a hydrofoil section in steady flow. The performance characteristics of the flukes of the fin whale were estimated by using this method. The effects of the different correction factors, and of the frictional drag of the fluke sections, are emphasized. Frictional effects in particular were found to reduce the hydrodynamic efficiency of the flukes significantly. The results are discussed and compared with the known characteristics of fin-whale swimming.
USDA-ARS?s Scientific Manuscript database
In this study density functional theory (DFT) was used to study the adsorption of guaiacol and its initial hydrodeoxygenation (HDO) reactions on Pt(111). Previously reported Brønsted–Evans–Polanyi (BEP) correlations for small open chain molecules are found to be inadequate in estimating the reaction...
Item Response Theory Modeling of the Philadelphia Naming Test
ERIC Educational Resources Information Center
Fergadiotis, Gerasimos; Kellough, Stacey; Hula, William D.
2015-01-01
Purpose: In this study, we investigated the fit of the Philadelphia Naming Test (PNT; Roach, Schwartz, Martin, Grewal, & Brecher, 1996) to an item-response-theory measurement model, estimated the precision of the resulting scores and item parameters, and provided a theoretical rationale for the interpretation of PNT overall scores by relating…
Receiver-Coupling Schemes Based On Optimal-Estimation Theory
NASA Technical Reports Server (NTRS)
Kumar, Rajendra
1992-01-01
Two schemes for reception of weak radio signals conveying digital data via phase modulation provide for mutual coupling of multiple receivers and coherent combination of the receiver outputs. In both schemes, optimal mutual-coupling weights are computed according to Kalman-filter theory, but the schemes differ in the manner in which the receiver outputs are transmitted and combined.
A Comparison of Linking and Concurrent Calibration under the Graded Response Model.
ERIC Educational Resources Information Center
Kim, Seock-Ho; Cohen, Allan S.
Applications of item response theory to practical testing problems including equating, differential item functioning, and computerized adaptive testing, require that item parameter estimates be placed onto a common metric. In this study, two methods for developing a common metric for the graded response model under item response theory were…
ERIC Educational Resources Information Center
Zhang, Bo
2010-01-01
This article investigates how measurement models and statistical procedures can be applied to estimate the accuracy of proficiency classification in language testing. The paper starts with a concise introduction of four measurement models: the classical test theory (CTT) model, the dichotomous item response theory (IRT) model, the testlet response…
THREE-PEE SAMPLING THEORY and program 'THRP' for computer generation of selection criteria
L. R. Grosenbaugh
1965-01-01
Theory necessary for sampling with probability proportional to prediction ('three-pee,' or '3P,' sampling) is first developed and then exemplified by numerical comparisons of several estimators. Program 'THRP' for computer generation of appropriate 3P-sample-selection criteria is described, and convenient random integer dispensers are...
Neurodevelopmental Correlates of Theory of Mind in Preschool Children
ERIC Educational Resources Information Center
Sabbagh, Mark A.; Bowman, Lindsay C.; Evraire, Lyndsay E.; Ito, Jennie M. B.
2009-01-01
Baseline electroencephalogram (EEG) data were collected from twenty-nine 4-year-old children who also completed batteries of representational theory-of-mind (RTM) tasks and executive functioning (EF) tasks. Neural sources of children's EEG alpha (6-9 Hz) were estimated and analyzed to determine whether individual differences in regional EEG alpha…
Olivieri, Alejandro C
2005-08-01
Sensitivity and selectivity are important figures of merit in multiway analysis, regularly employed for comparison of the analytical performance of methods and for experimental design and planning. They are especially interesting in the second-order advantage scenario, where the latter property allows for the analysis of samples with a complex background, permitting analyte determination even in the presence of unsuspected interferences. Since no general theory exists for estimating the multiway sensitivity, Monte Carlo numerical calculations have been developed for estimating variance inflation factors, as a convenient way of assessing both sensitivity and selectivity parameters for the popular parallel factor (PARAFAC) analysis and also for related multiway techniques. When the second-order advantage is achieved, the existing expressions derived from net analyte signal theory are only able to adequately cover cases where a single analyte is calibrated using second-order instrumental data. However, they fail for certain multianalyte cases, or when third-order data are employed, calling for an extension of net analyte theory. The results have strong implications in the planning of multiway analytical experiments.
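As a rough univariate stand-in for the Monte Carlo idea described above (the calibration model, noise level, and sample values are invented, and the multiway PARAFAC case is not reproduced), sensitivity can be gauged by propagating instrumental noise through the calibration and comparing the spread of the predicted concentrations with the injected noise level:

    import numpy as np

    rng = np.random.default_rng(7)

    # Hypothetical univariate calibration: signal = slope * concentration.
    slope = 2.5
    conc_cal = np.array([0.0, 1.0, 2.0, 4.0, 8.0])
    signal_cal = slope * conc_cal

    sigma_noise = 0.05        # assumed instrumental noise (signal units)
    true_conc_test = 3.0
    n_mc = 5000

    pred = np.empty(n_mc)
    for i in range(n_mc):
        noisy_cal = signal_cal + rng.normal(0.0, sigma_noise, size=conc_cal.size)
        # Least-squares slope through the origin for the noisy calibration data.
        slope_hat = (conc_cal @ noisy_cal) / (conc_cal @ conc_cal)
        noisy_test = slope * true_conc_test + rng.normal(0.0, sigma_noise)
        pred[i] = noisy_test / slope_hat

    # Sensitivity as the ratio of injected signal noise to induced concentration noise.
    print(f"Monte Carlo sensitivity estimate: {sigma_noise / pred.std():.2f}"
          f" (calibration slope = {slope})")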
Migration Systems in Europe: Evidence From Harmonized Flow Data
Kim, Keuntae; Raymer, James
2014-01-01
Empirical tests of migration systems theory require consistent and complete data on international migration flows. Publicly available data, however, represent an inconsistent and incomplete set of measurements obtained from a variety of national data collection systems. We overcome these obstacles by standardizing the available migration reports of sending and receiving countries in the European Union and Norway each year from 2003–2007 and by estimating the remaining missing flows. The resulting harmonized estimates are then used to test migration systems theory. First, locating thresholds in the size of flows over time, we identify three migration systems within the European Union and Norway. Second, examining the key determinants of flows with respect to the predictions of migration systems theory, our results highlight the importance of shared experiences of nation-state formation, geography, and accession status in the European Union. Our findings lend support to migration systems theory and demonstrate that knowledge of migration systems may improve the accuracy of migration forecasts toward managing the impacts of migration as a source of social change in Europe. PMID:22791267
The mean field theory in EM procedures for blind Markov random field image restoration.
Zhang, J
1993-01-01
A Markov random field (MRF) model-based EM (expectation-maximization) procedure for simultaneously estimating the degradation model and restoring the image is described. The MRF is a coupled one which provides continuity (inside regions of smooth gray tones) and discontinuity (at region boundaries) constraints for the restoration problem which is, in general, ill posed. The computational difficulty associated with the EM procedure for MRFs is resolved by using the mean field theory from statistical mechanics. An orthonormal blur decomposition is used to reduce the chances of undesirable locally optimal estimates. Experimental results on synthetic and real-world images show that this approach provides good blur estimates and restored images. The restored images are comparable to those obtained by a Wiener filter in mean-square error, but are most visually pleasing.
Estimates on Functional Integrals of Quantum Mechanics and Non-relativistic Quantum Field Theory
NASA Astrophysics Data System (ADS)
Bley, Gonzalo A.; Thomas, Lawrence E.
2017-01-01
We provide a unified method for obtaining upper bounds for certain functional integrals appearing in quantum mechanics and non-relativistic quantum field theory, functionals of the form E[exp(A_T)], the (effective) action A_T being a function of particle trajectories up to time T. The estimates in turn yield rigorous lower bounds for ground state energies, via the Feynman-Kac formula. The upper bounds are obtained by writing the action for these functional integrals in terms of stochastic integrals. The method is illustrated in familiar quantum mechanical settings: for the hydrogen atom, for a Schrödinger operator with a 1/|x|² potential with small coupling, and, with a modest adaptation of the method, for the harmonic oscillator. We then present our principal applications of the method, in the settings of non-relativistic quantum field theories for particles moving in a quantized Bose field, including the optical polaron and Nelson models.
Bult, Johannes H F; van Putten, Bram; Schifferstein, Hendrik N J; Roozen, Jacques P; Voragen, Alphons G J; Kroeze, Jan H A
2004-10-01
In continuous vigilance tasks, the number of coincident panel responses to stimuli provides an index of stimulus detectability. To determine whether this number is due to chance, panel noise levels have been approximated by the maximum coincidence level obtained in stimulus-free conditions. This study proposes an alternative method by which to assess noise levels, derived from queuing system theory (QST). Instead of critical coincidence levels, QST modeling estimates the duration of coinciding responses in the absence of stimuli. The proposed method has the advantage over previous approaches that it yields more reliable noise estimates and allows for statistical testing. The method was applied in an olfactory detection experiment using 16 panelists in stimulus-present and stimulus-free conditions. We propose that QST may be used as an alternative to signal detection theory for analyzing data from continuous vigilance tasks.
Application of the Bernoulli enthalpy concept to the study of vortex noise and jet impingement noise
NASA Technical Reports Server (NTRS)
Yates, J. E.
1978-01-01
A complete theory of aeroacoustics of homentropic fluid media is developed and compared with previous theories. The theory is applied to study the interaction of sound with vortex flows. For the DC-9 in a standard take-off configuration, the maximum engine-wake interference noise is estimated to be 3 or 4 dB in the ground plane. It is shown that the noise produced by a corotating vortex pair departs significantly from the compact M scaling law for eddy Mach numbers (M) greater than 0.1. An estimate of jet impingement noise is given that is in qualitative agreement with experimental results. The increased noise results primarily from the nonuniform acceleration of turbulent eddies through the stagnation point flow. It is shown that the corotating vortex pair can be excited or de-excited by an externally applied sound field. The model is used to qualitatively explain experimental results on excited jets.
Predicting Deformation Limits of Dual-Phase Steels Under Complex Loading Paths
Cheng, G.; Choi, K. S.; Hu, X.; ...
2017-04-05
Here in this study, the deformation limits of various DP980 steels are examined with the deformation instability theory. Under uniaxial tension, overall stress–strain curves of the material are estimated based on a simple rule of mixture (ROM) with both iso-strain and iso-stress assumptions. Under complex loading paths, an actual microstructure-based finite element (FE) method is used to resolve the deformation compatibilities explicitly between the soft ferrite and hard martensite phases. The results show that, for uniaxial tension, the deformation instability theory with iso-strain-based ROM can be used to provide the lower bound estimate of the uniform elongation (UE) for the various DP980 considered. Under complex loading paths, the deformation instability theory with microstructure-based FE method can be used in examining the effects of various microstructural features on the deformation limits of DP980 steels.
Predicting Deformation Limits of Dual-Phase Steels Under Complex Loading Paths
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheng, G.; Choi, K. S.; Hu, X.
The deformation limits of various DP980 steels are examined in this study with deformation instability theory. Under uniaxial tension, overall stress-strain curves of the material are estimated based on simple rule of mixture (ROM) with both iso-strain and iso-stress assumptions. Under complex loading paths, actual microstructure-based finite element (FE) method is used to explicitly resolve the deformation incompatibilities between the soft ferrite and hard martensite phases. The results show that, for uniaxial tension, the deformation instability theory with iso-strain-based ROM can be used to provide the lower bound estimate of the uniform elongation (UE) for the various DP980 considered. Under complex loading paths, the deformation instability theory with microstructure-based FE method can be used in examining the effects of various microstructural features on the deformation limits of DP980 steels.
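As a rough illustration of the iso-strain rule of mixture and the deformation instability (Considère) criterion described in the record above, the following Python sketch estimates a lower-bound uniform elongation from two assumed Hollomon hardening laws; the volume fraction and hardening parameters are invented for illustration and are not the DP980 constituent data used in the study.

    import numpy as np

    # Hollomon hardening sigma = K * eps^n for each phase (illustrative parameters,
    # not the measured DP980 constituent properties).
    def flow_stress(eps, K, n):
        return K * np.maximum(eps, 1e-9) ** n

    eps = np.linspace(1e-4, 0.5, 5000)                   # true strain grid
    f_m = 0.45                                           # assumed martensite volume fraction
    sig_ferrite = flow_stress(eps, K=1000.0, n=0.20)     # MPa
    sig_martensite = flow_stress(eps, K=2200.0, n=0.08)

    # Iso-strain rule of mixture: both phases carry the same strain.
    sig_mix = (1.0 - f_m) * sig_ferrite + f_m * sig_martensite

    # Considère criterion: diffuse necking (the deformation limit) sets in where
    # the hardening rate d(sigma)/d(eps) drops to the flow stress itself.
    hardening_rate = np.gradient(sig_mix, eps)
    ue_index = np.argmax(hardening_rate <= sig_mix)
    print(f"iso-strain ROM uniform elongation estimate: {eps[ue_index]:.3f} true strain")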
On convergence of the unscented Kalman-Bucy filter using contraction theory
NASA Astrophysics Data System (ADS)
Maree, J. P.; Imsland, L.; Jouffroy, J.
2016-06-01
Contraction theory entails a theoretical framework in which convergence of a nonlinear system can be analysed differentially in an appropriate contraction metric. This paper is concerned with utilising stochastic contraction theory to conclude on exponential convergence of the unscented Kalman-Bucy filter. The underlying process and measurement models of interest are Itô-type stochastic differential equations. In particular, statistical linearisation techniques are employed in a virtual-actual systems framework to establish deterministic contraction of the estimated expected mean of process values. Under mild conditions of bounded process noise, we extend the results on deterministic contraction to stochastic contraction of the estimated expected mean of the process state. It follows that for the regions of contraction, a result on convergence, and thereby incremental stability, is concluded for the unscented Kalman-Bucy filter. The theoretical concepts are illustrated in two case studies.
Refined Zigzag Theory for Laminated Composite and Sandwich Plates
NASA Technical Reports Server (NTRS)
Tessler, Alexander; DiSciuva, Marco; Gherlone, Marco
2009-01-01
A refined zigzag theory is presented for laminated-composite and sandwich plates that includes the kinematics of first-order shear deformation theory as its baseline. The theory is variationally consistent and is derived from the virtual work principle. Novel piecewise-linear zigzag functions that provide a more realistic representation of the deformation states of transverse-shear-flexible plates than other similar theories are used. The formulation does not enforce full continuity of the transverse shear stresses across the plate's thickness, yet is robust. Transverse-shear correction factors are not required to yield accurate results. The theory is devoid of the shortcomings inherent in the previous zigzag theories, including shear-force inconsistency and difficulties in simulating clamped boundary conditions, which have greatly limited the accuracy of these theories. This new theory requires only C0-continuous kinematic approximations and is perfectly suited for developing computationally efficient finite elements. The theory should be useful for obtaining relatively efficient, accurate estimates of structural response needed to design high-performance load-bearing aerospace structures.
A study of hypersonic small-disturbance theory
NASA Technical Reports Server (NTRS)
Van Dyke, Milton D
1954-01-01
A systematic study is made of the approximate inviscid theory of thin bodies moving at such high supersonic speeds that nonlinearity is an essential feature of the equations of flow. The first-order small-disturbance equations are derived for three-dimensional motions involving shock waves, and estimates are obtained for the order of error involved in the approximation. The hypersonic similarity rule of Tsien and Hayes, and Hayes' unsteady analogy appear in the course of the development. It is shown that the hypersonic theory can be interpreted so that it applies also in the range of linearized supersonic flow theory. Several examples are solved according to the small-disturbance theory, and compared with the full solutions when available.
Variance to mean ratio, R(t), for poisson processes on phylogenetic trees.
Goldman, N
1994-09-01
The ratio of expected variance to mean, R(t), of numbers of DNA base substitutions for contemporary sequences related by a "star" phylogeny is widely seen as a measure of the adherence of the sequences' evolution to a Poisson process with a molecular clock, as predicted by the "neutral theory" of molecular evolution under certain conditions. A number of estimators of R(t) have been proposed, all predicted to have mean 1 and distributions based on the chi-squared distribution. Various genes have previously been analyzed and found to have values of R(t) far in excess of 1, calling into question important aspects of the neutral theory. In this paper, I use Monte Carlo simulation to show that the previously suggested means and distributions of estimators of R(t) are highly inaccurate. The analysis is applied to star phylogenies and to general phylogenetic trees, and well-known gene sequences are reanalyzed. For star phylogenies the results show that Kimura's estimators ("The Neutral Theory of Molecular Evolution," Cambridge Univ. Press, Cambridge, 1983) are unsatisfactory for statistical testing of R(t), but confirm the accuracy of Bulmer's correction factor (Genetics 123: 615-619, 1989). For all three nonstar phylogenies studied, attained values of all three estimators of R(t), although larger than 1, are within their true confidence limits under simple Poisson process models. This shows that lineage effects can be responsible for high estimates of R(t), restoring some limited confidence in the molecular clock and showing that the distinction between lineage and molecular clock effects is vital.(ABSTRACT TRUNCATED AT 250 WORDS)
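A minimal Monte Carlo sketch in the spirit of the simulations described above, assuming a star phylogeny with a strict molecular clock so that per-lineage substitution counts are i.i.d. Poisson; the lineage number, expected substitution count and replicate count are arbitrary illustrative values.

    import numpy as np
    from scipy.stats import chi2

    rng = np.random.default_rng(0)
    n_lineages, expected_subs, n_replicates = 10, 20.0, 100_000

    # Under a strict clock on a star phylogeny the counts are i.i.d. Poisson,
    # so the true variance-to-mean ratio R(t) equals 1.
    counts = rng.poisson(expected_subs, size=(n_replicates, n_lineages))
    r_hat = counts.var(axis=1, ddof=1) / counts.mean(axis=1)

    # Compare the simulated null distribution of the estimator with the naive
    # scaled chi-squared approximation with (m - 1) degrees of freedom.
    print("mean of simulated R(t) estimates:", r_hat.mean())
    print("95th percentile (simulation):   ", np.quantile(r_hat, 0.95))
    print("95th percentile (chi2 approx):  ", chi2.ppf(0.95, n_lineages - 1) / (n_lineages - 1))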
ERIC Educational Resources Information Center
St-Onge, Christina; Valois, Pierre; Abdous, Belkacem; Germain, Stephane
2009-01-01
To date, there have been no studies comparing parametric and nonparametric Item Characteristic Curve (ICC) estimation methods on the effectiveness of Person-Fit Statistics (PFS). The primary aim of this study was to determine if the use of ICCs estimated by nonparametric methods would increase the accuracy of item response theory-based PFS for…
Quasi-Newton methods for parameter estimation in functional differential equations
NASA Technical Reports Server (NTRS)
Brewer, Dennis W.
1988-01-01
A state-space approach to parameter estimation in linear functional differential equations is developed using the theory of linear evolution equations. A locally convergent quasi-Newton type algorithm is applied to distributed systems with particular emphasis on parameters that induce unbounded perturbations of the state. The algorithm is computationally implemented on several functional differential equations, including coefficient and delay estimation in linear delay-differential equations.
ERIC Educational Resources Information Center
Lee, Yi-Hsuan; Zhang, Jinming
2008-01-01
The method of maximum-likelihood is typically applied to item response theory (IRT) models when the ability parameter is estimated while conditioning on the true item parameters. In practice, the item parameters are unknown and need to be estimated first from a calibration sample. Lewis (1985) and Zhang and Lu (2007) proposed the expected response…
NASA Astrophysics Data System (ADS)
Guo, Xinwei; Qu, Zexing; Gao, Jiali
2018-01-01
The multi-state density functional theory (MSDFT) provides a convenient way to estimate electronic coupling of charge transfer processes based on a diabatic representation. Its performance has been benchmarked against the HAB11 database with a mean unsigned error (MUE) of 17 meV between MSDFT and ab initio methods. The small difference may be attributed to different representations, diabatic from MSDFT and adiabatic from ab initio calculations. In this discussion, we conclude that MSDFT provides a general and efficient way to estimate the electronic coupling for charge-transfer rate calculations based on the Marcus-Hush model.
1983-01-01
The main feature of the method, which is based on small-disturbance theory, is the use of measured pressure along lines in the flow direction near the tunnel walls. AUTHOR(S)/AUTEUR(S): H. Sawada, visiting scientist, 2nd Aerodynamics Division, National Aerospace Laboratory, Japan. SERIES/SERIE: Aeronautical Note 6
Enhanced peculiar velocities in brane-induced gravity
NASA Astrophysics Data System (ADS)
Wyman, Mark; Khoury, Justin
2010-08-01
The mounting evidence for anomalously large peculiar velocities in our Universe presents a challenge for the ΛCDM paradigm. The recent estimates of the large-scale bulk flow by Watkins et al. are inconsistent at the nearly 3σ level with ΛCDM predictions. Meanwhile, Lee and Komatsu have recently estimated that the occurrence of high-velocity merging systems such as the bullet cluster (1E0657-57) is unlikely at a 6.5-5.8σ level, with an estimated probability between 3.3×10^-11 and 3.6×10^-9 in ΛCDM cosmology. We show that these anomalies are alleviated in a broad class of infrared-modified gravity theories, called brane-induced gravity, in which gravity becomes higher-dimensional at ultralarge distances. These theories include additional scalar forces that enhance gravitational attraction and therefore speed up structure formation at late times and on sufficiently large scales. The peculiar velocities are enhanced by 24-34% compared to standard gravity, with the maximal enhancement nearly consistent at the 2σ level with bulk flow observations. The occurrence of the bullet cluster in these theories is ≈10^4 times more probable than in ΛCDM cosmology.
Enhanced peculiar velocities in brane-induced gravity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wyman, Mark; Khoury, Justin
The mounting evidence for anomalously large peculiar velocities in our Universe presents a challenge for the ΛCDM paradigm. The recent estimates of the large-scale bulk flow by Watkins et al. are inconsistent at the nearly 3σ level with ΛCDM predictions. Meanwhile, Lee and Komatsu have recently estimated that the occurrence of high-velocity merging systems such as the bullet cluster (1E0657-57) is unlikely at a 6.5-5.8σ level, with an estimated probability between 3.3×10^-11 and 3.6×10^-9 in ΛCDM cosmology. We show that these anomalies are alleviated in a broad class of infrared-modified gravity theories, called brane-induced gravity, in which gravity becomes higher-dimensional at ultralarge distances. These theories include additional scalar forces that enhance gravitational attraction and therefore speed up structure formation at late times and on sufficiently large scales. The peculiar velocities are enhanced by 24-34% compared to standard gravity, with the maximal enhancement nearly consistent at the 2σ level with bulk flow observations. The occurrence of the bullet cluster in these theories is ≈10^4 times more probable than in ΛCDM cosmology.
Measuring the economic value of wildlife: a caution
T. H. Stevens
1992-01-01
Wildlife values appear to be very sensitive to whether species are evaluated separately or together, and value estimates often seem inconsistent with neoclassical economic theory. Wildlife value estimates must therefore be used with caution. Additional research about the nature of individual value structures for wildlife is needed.
Bayesian Methods for Effective Field Theories
NASA Astrophysics Data System (ADS)
Wesolowski, Sarah
Microscopic predictions of the properties of atomic nuclei have reached a high level of precision in the past decade. This progress mandates improved uncertainty quantification (UQ) for a robust comparison of experiment with theory. With the uncertainty from many-body methods under control, calculations are now sensitive to the input inter-nucleon interactions. These interactions include parameters that must be fit to experiment, inducing both uncertainty from the fit and from missing physics in the operator structure of the Hamiltonian. Furthermore, the implementation of the inter-nucleon interactions is not unique, which presents the additional problem of assessing results using different interactions. Effective field theories (EFTs) take advantage of a separation of high- and low-energy scales in the problem to form a power-counting scheme that allows the organization of terms in the Hamiltonian based on their expected contribution to observable predictions. This scheme gives a natural framework for quantification of uncertainty due to missing physics. The free parameters of the EFT, called the low-energy constants (LECs), must be fit to data, but in a properly constructed EFT these constants will be natural-sized, i.e., of order unity. The constraints provided by the EFT, namely the size of the systematic uncertainty from truncation of the theory and the natural size of the LECs, are assumed information even before a calculation is performed or a fit is done. Bayesian statistical methods provide a framework for treating uncertainties that naturally incorporates prior information as well as putting stochastic and systematic uncertainties on an equal footing. For EFT UQ Bayesian methods allow the relevant EFT properties to be incorporated quantitatively as prior probability distribution functions (pdfs). Following the logic of probability theory, observable quantities and underlying physical parameters such as the EFT breakdown scale may be expressed as pdfs that incorporate the prior pdfs. Problems of model selection, such as distinguishing between competing EFT implementations, are also natural in a Bayesian framework. In this thesis we focus on two complementary topics for EFT UQ using Bayesian methods--quantifying EFT truncation uncertainty and parameter estimation for LECs. Using the order-by-order calculations and underlying EFT constraints as prior information, we show how to estimate EFT truncation uncertainties. We then apply the result to calculating truncation uncertainties on predictions of nucleon-nucleon scattering in chiral effective field theory. We apply model-checking diagnostics to our calculations to ensure that the statistical model of truncation uncertainty produces consistent results. A framework for EFT parameter estimation based on EFT convergence properties and naturalness is developed which includes a series of diagnostics to ensure the extraction of the maximum amount of available information from data to estimate LECs with minimal bias. We develop this framework using model EFTs and apply it to the problem of extrapolating lattice quantum chromodynamics results for the nucleon mass. We then apply aspects of the parameter estimation framework to perform case studies in chiral EFT parameter estimation, investigating a possible operator redundancy at fourth order in the chiral expansion and the appropriate inclusion of truncation uncertainty in estimating LECs.
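The following Python sketch shows one simple version of the truncation-uncertainty idea described above: expansion coefficients are extracted from hypothetical order-by-order predictions and, assuming they are natural-sized, the first omitted term is used as an uncertainty estimate. The numbers, reference scale and expansion parameter are invented for illustration, and the full Bayesian prior/posterior machinery of the thesis is omitted.

    import numpy as np

    # Order-by-order predictions of some observable (illustrative numbers, not real
    # chiral-EFT output), reference scale, and assumed expansion parameter Q.
    predictions = np.array([10.0, 12.5, 12.1, 12.22])   # LO, NLO, N2LO, N3LO
    X_ref, Q = 10.0, 0.33

    # Extract dimensionless expansion coefficients c_n from successive differences,
    # X_k = X_ref * sum_n c_n Q^n, so c_n = (X_n - X_{n-1}) / (X_ref * Q^n).
    orders = np.arange(1, len(predictions))
    c = np.diff(predictions) / (X_ref * Q ** orders)

    # A simple "naturalness" estimate of the coefficient scale and of the truncation
    # uncertainty from the first omitted term, in the spirit of the Bayesian treatment.
    c_bar = np.sqrt(np.mean(c ** 2))
    k = len(predictions) - 1
    truncation_error = c_bar * X_ref * Q ** (k + 1)
    print(f"estimated coefficients: {np.round(c, 3)}")
    print(f"truncation uncertainty at highest order: +/- {truncation_error:.3f}")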
Robust estimation of microbial diversity in theory and in practice
Haegeman, Bart; Hamelin, Jérôme; Moriarty, John; Neal, Peter; Dushoff, Jonathan; Weitz, Joshua S
2013-01-01
Quantifying diversity is of central importance for the study of structure, function and evolution of microbial communities. The estimation of microbial diversity has received renewed attention with the advent of large-scale metagenomic studies. Here, we consider what the diversity observed in a sample tells us about the diversity of the community being sampled. First, we argue that one cannot reliably estimate the absolute and relative number of microbial species present in a community without making unsupported assumptions about species abundance distributions. The reason for this is that sample data do not contain information about the number of rare species in the tail of species abundance distributions. We illustrate the difficulty in comparing species richness estimates by applying Chao's estimator of species richness to a set of in silico communities: they are ranked incorrectly in the presence of large numbers of rare species. Next, we extend our analysis to a general family of diversity metrics (‘Hill diversities'), and construct lower and upper estimates of diversity values consistent with the sample data. The theory generalizes Chao's estimator, which we retrieve as the lower estimate of species richness. We show that Shannon and Simpson diversity can be robustly estimated for the in silico communities. We analyze nine metagenomic data sets from a wide range of environments, and show that our findings are relevant for empirically-sampled communities. Hence, we recommend the use of Shannon and Simpson diversity rather than species richness in efforts to quantify and compare microbial diversity. PMID:23407313
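A small Python sketch of the estimators discussed above: Chao's lower bound on species richness together with plug-in Shannon and Simpson diversity expressed as Hill numbers; the OTU count vector is hypothetical.

    import numpy as np

    def diversity_estimates(abundances):
        """Chao1 lower bound on richness plus plug-in Shannon and Simpson diversity
        (Hill numbers of order 1 and 2) from a vector of per-species counts."""
        x = np.asarray(abundances, dtype=float)
        x = x[x > 0]
        s_obs = x.size
        f1, f2 = np.sum(x == 1), np.sum(x == 2)
        chao1 = s_obs + f1 * (f1 - 1) / (2 * (f2 + 1))   # bias-corrected form
        p = x / x.sum()
        shannon_hill = np.exp(-np.sum(p * np.log(p)))    # Hill number, q = 1
        simpson_hill = 1.0 / np.sum(p ** 2)              # Hill number, q = 2
        return chao1, shannon_hill, simpson_hill

    # Hypothetical OTU count vector with a long tail of rare species.
    counts = [120, 80, 40, 20, 10, 5, 3, 2, 2] + [1] * 30
    print(diversity_estimates(counts))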
Extending Theory-Based Quantitative Predictions to New Health Behaviors.
Brick, Leslie Ann D; Velicer, Wayne F; Redding, Colleen A; Rossi, Joseph S; Prochaska, James O
2016-04-01
Traditional null hypothesis significance testing suffers many limitations and is poorly adapted to theory testing. A proposed alternative approach, called Testing Theory-based Quantitative Predictions, uses effect size estimates and confidence intervals to directly test predictions based on theory. This paper replicates findings from previous smoking studies and extends the approach to diet and sun protection behaviors using baseline data from a Transtheoretical Model behavioral intervention (N = 5407). Effect size predictions were developed using two methods: (1) applying refined effect size estimates from previous smoking research or (2) using predictions developed by an expert panel. Thirteen of 15 predictions were confirmed for smoking. For diet, 7 of 14 predictions were confirmed using smoking predictions and 6 of 16 using expert panel predictions. For sun protection, 3 of 11 predictions were confirmed using smoking predictions and 5 of 19 using expert panel predictions. Expert panel predictions and smoking-based predictions poorly predicted effect sizes for diet and sun protection constructs. Future studies should aim to use previous empirical data to generate predictions whenever possible. The best results occur when there have been several iterations of predictions for a behavior, such as with smoking, demonstrating that expected values begin to converge on the population effect size. Overall, the study supports necessity in strengthening and revising theory with empirical data.
Automatic Trading Agent. RMT Based Portfolio Theory and Portfolio Selection
NASA Astrophysics Data System (ADS)
Snarska, M.; Krzych, J.
2006-11-01
Portfolio theory is a very powerful tool in modern investment theory. It is helpful in estimating the risk of an investor's portfolio, arising from lack of information, uncertainty and incomplete knowledge of reality, which forbids a perfect prediction of future price changes. Despite its many advantages, this tool is not well known and not widely used among investors on the Warsaw Stock Exchange. The main reason for abandoning this method is its high level of complexity and immense calculations. The aim of this paper is to introduce an automatic decision-making system, which allows a single investor to use complex methods of Modern Portfolio Theory (MPT). The key tool in MPT is the analysis of an empirical covariance matrix. This matrix, obtained from historical data, is biased by such a high amount of statistical uncertainty that it can be seen as random. By bringing into practice the ideas of Random Matrix Theory (RMT), the noise is removed or significantly reduced, so the future risk and return are better estimated and controlled. These concepts are applied to the Warsaw Stock Exchange Simulator {http://gra.onet.pl}. The result of the simulation is an 18% gain, compared with the respective 10% loss of the Warsaw Stock Exchange main index WIG.
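A minimal sketch of the RMT filtering step described above, assuming i.i.d. simulated returns purely for illustration: eigenvalues of the empirical correlation matrix lying below the Marchenko-Pastur upper edge are treated as noise and flattened before the matrix is used for risk estimation.

    import numpy as np

    def clean_correlation(returns):
        """Replace eigenvalues inside the Marchenko-Pastur bulk of the empirical
        correlation matrix by their average, keeping the 'signal' eigenvalues."""
        T, N = returns.shape
        X = (returns - returns.mean(0)) / returns.std(0)
        C = X.T @ X / T
        lam, V = np.linalg.eigh(C)
        q = N / T
        lam_max = (1 + np.sqrt(q)) ** 2        # upper edge of the MP spectrum
        noise = lam < lam_max
        lam_clean = lam.copy()
        lam_clean[noise] = lam[noise].mean()   # flatten the noise band
        C_clean = (V * lam_clean) @ V.T
        np.fill_diagonal(C_clean, 1.0)
        return C_clean

    rng = np.random.default_rng(1)
    fake_returns = rng.standard_normal((500, 100))   # T observations x N assets
    C_clean = clean_correlation(fake_returns)
    print(C_clean.shape)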
Nonlinear system theory: another look at dependence.
Wu, Wei Biao
2005-10-04
Based on the nonlinear system theory, we introduce previously undescribed dependence measures for stationary causal processes. Our physical and predictive dependence measures quantify the degree of dependence of outputs on inputs in physical systems. The proposed dependence measures provide a natural framework for a limit theory for stationary processes. In particular, under conditions with quite simple forms, we present limit theorems for partial sums, empirical processes, and kernel density estimates. The conditions are mild and easily verifiable because they are directly related to the data-generating mechanisms.
Brassey, Charlotte A.; Margetts, Lee; Kitchener, Andrew C.; Withers, Philip J.; Manning, Phillip L.; Sellers, William I.
2013-01-01
Classic beam theory is frequently used in biomechanics to model the stress behaviour of vertebrate long bones, particularly when creating intraspecific scaling models. Although methodologically straightforward, classic beam theory requires complex irregular bones to be approximated as slender beams, and the errors associated with simplifying complex organic structures to such an extent are unknown. Alternative approaches, such as finite element analysis (FEA), while much more time-consuming to perform, require no such assumptions. This study compares the results obtained using classic beam theory with those from FEA to quantify the beam theory errors and to provide recommendations about when a full FEA is essential for reasonable biomechanical predictions. High-resolution computed tomographic scans of eight vertebrate long bones were used to calculate diaphyseal stress owing to various loading regimes. Under compression, FEA values of minimum principal stress (σmin) were on average 142 per cent (±28% s.e.) larger than those predicted by beam theory, with deviation between the two models correlated to shaft curvature (two-tailed p = 0.03, r2 = 0.56). Under bending, FEA values of maximum principal stress (σmax) and beam theory values differed on average by 12 per cent (±4% s.e.), with deviation between the models significantly correlated to cross-sectional asymmetry at midshaft (two-tailed p = 0.02, r2 = 0.62). In torsion, assuming maximum stress values occurred at the location of minimum cortical thickness brought beam theory and FEA values closest in line, and in this case FEA values of τtorsion were on average 14 per cent (±5% s.e.) higher than beam theory. Therefore, FEA is the preferred modelling solution when estimates of absolute diaphyseal stress are required, although values calculated by beam theory for bending may be acceptable in some situations. PMID:23173199
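For readers unfamiliar with the beam-theory side of the comparison above, the following Python sketch computes the classic midshaft bending stress sigma = M c / I for an idealized hollow elliptical cross section; all dimensions and the applied moment are made up for illustration and do not come from the scanned bones.

    import numpy as np

    # Classic beam theory estimate of midshaft bending stress for an idealized
    # hollow elliptical cross section (dimensions are hypothetical).
    a_out, b_out = 15e-3, 12e-3        # outer semi-axes (m)
    a_in, b_in = 10e-3, 8e-3           # inner (medullary) semi-axes (m)
    M = 50.0                           # applied bending moment (N m)

    # Second moment of area of a hollow ellipse about the bending axis.
    I = np.pi / 4 * (a_out * b_out**3 - a_in * b_in**3)
    sigma_max = M * b_out / I          # sigma = M c / I at the outer fibre
    print(f"beam-theory maximum bending stress: {sigma_max / 1e6:.1f} MPa")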
Approximation theory for LQG (Linear-Quadratic-Gaussian) optimal control of flexible structures
NASA Technical Reports Server (NTRS)
Gibson, J. S.; Adamian, A.
1988-01-01
An approximation theory is presented for the LQG (Linear-Quadratic-Gaussian) optimal control problem for flexible structures whose distributed models have bounded input and output operators. The main purpose of the theory is to guide the design of finite dimensional compensators that approximate closely the optimal compensator. The optimal LQG problem separates into an optimal linear-quadratic regulator problem and an optimal state estimation problem. The solution of the former problem lies in the solution to an infinite dimensional Riccati operator equation. The approximation scheme approximates the infinite dimensional LQG problem with a sequence of finite dimensional LQG problems defined for a sequence of finite dimensional, usually finite element or modal, approximations of the distributed model of the structure. Two Riccati matrix equations determine the solution to each approximating problem. The finite dimensional equations for numerical approximation are developed, including formulas for converting matrix control and estimator gains to their functional representation to allow comparison of gains based on different orders of approximation. Convergence of the approximating control and estimator gains and of the corresponding finite dimensional compensators is studied. Also, convergence and stability of the closed-loop systems produced with the finite dimensional compensators are discussed. The convergence theory is based on the convergence of the solutions of the finite dimensional Riccati equations to the solutions of the infinite dimensional Riccati equations. A numerical example with a flexible beam, a rotating rigid body, and a lumped mass is given.
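A small Python sketch of a finite dimensional approximating problem of the kind described above, assuming an illustrative two-mode modal model rather than the beam example of the report: the regulator and estimator gains follow from the two algebraic Riccati equations solved with SciPy.

    import numpy as np
    from scipy.linalg import solve_continuous_are

    # Two lightly damped modes standing in for a modal approximation of a
    # flexible structure (all matrices are illustrative).
    w1, w2, zeta = 1.0, 3.0, 0.01
    A = np.block([[np.zeros((2, 2)), np.eye(2)],
                  [-np.diag([w1**2, w2**2]), -2 * zeta * np.diag([w1, w2])]])
    B = np.array([[0.0], [0.0], [1.0], [0.5]])
    C = np.array([[1.0, 1.0, 0.0, 0.0]])
    Q, R = np.eye(4), np.array([[1.0]])            # regulator weights
    W, V = 0.01 * np.eye(4), np.array([[0.1]])     # process / measurement noise

    # The LQG problem separates: one Riccati equation gives the regulator gain,
    # the dual Riccati equation gives the (Kalman) estimator gain.
    P = solve_continuous_are(A, B, Q, R)
    K = np.linalg.solve(R, B.T @ P)                # control gain, u = -K x_hat
    S = solve_continuous_are(A.T, C.T, W, V)
    L = S @ C.T @ np.linalg.inv(V)                 # estimator gain
    print("control gain:", K)
    print("estimator gain:", L.ravel())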
Computational material design for Q&P steels with plastic instability theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheng, G.; Choi, K. S.; Hu, X. H.
In this paper, the deformation limits of Quenching and Partitioning (Q&P) steels are examined with the plastic instability theory. For this purpose, the constituent phase properties of various Q&P steels were first experimentally obtained, and used to estimate the overall tensile stress-strain curves based on the simple rule of mixture (ROM) with the iso-strain and iso-stress assumptions. Plastic instability theory was then applied to the obtained overall stress-strain curves in order to estimate the deformation limits of the Q&P steels. A parametric study was also performed to examine the effects of various material parameters on the deformation limits of Q&P steels. Computational material design was subsequently carried out based on the information obtained from the parametric study. The results show that the plastic instability theory with iso-stress-based stress-strain curve may be used to provide the lower bound estimate of the uniform elongation (UE) for the various Q&P steels considered. The results also indicate that higher austenite stability/volume fractions, less strength difference between the primary phases, and higher hardening exponents of the constituent phases are generally beneficial for the performance improvement of Q&P steels, and that various material parameters may be concurrently adjusted in a cohesive way in order to improve the performance of Q&P steel. The information from this study may be used to devise new heat treatment parameters and alloying elements to produce Q&P steels with improved performance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stegen, James C.; Hurlbert, Allen H.; Bond-Lamberty, Ben
The number of microbial operational taxonomic units (OTUs) within a community is akin to species richness within plant/animal (‘macrobial’) systems. A large literature documents OTU richness patterns, drawing comparisons to macrobial theory. There is, however, an unrecognized fundamental disconnect between OTU richness and macrobial theory: OTU richness is commonly estimated on a per-individual basis, while macrobial richness is estimated per-area. Furthermore, the range or extent of sampled environmental conditions can strongly influence a study’s outcomes and conclusions, but this is not commonly addressed when studying OTU richness. Here we (i) propose a new sampling approach that estimates OTU richness per-mass of soil, which results in strong support for species energy theory, (ii) use data reduction to show how support for niche conservatism emerges when sampling across a restricted range of environmental conditions, and (iii) show how additional insights into drivers of OTU richness can be generated by combining different sampling methods while simultaneously considering patterns that emerge by restricting the range of environmental conditions. We propose that a more rigorous connection between microbial ecology and macrobial theory can be facilitated by exploring how changes in OTU richness units and environmental extent influence outcomes of data analysis. While fundamental differences between microbial and macrobial systems persist (e.g., species concepts), we suggest that closer attention to units and scale provide tangible and immediate improvements to our understanding of the processes governing OTU richness and how those processes relate to drivers of macrobial species richness.
A maximum pseudo-profile likelihood estimator for the Cox model under length-biased sampling
Huang, Chiung-Yu; Qin, Jing; Follmann, Dean A.
2012-01-01
This paper considers semiparametric estimation of the Cox proportional hazards model for right-censored and length-biased data arising from prevalent sampling. To exploit the special structure of length-biased sampling, we propose a maximum pseudo-profile likelihood estimator, which can handle time-dependent covariates and is consistent under covariate-dependent censoring. Simulation studies show that the proposed estimator is more efficient than its competitors. A data analysis illustrates the methods and theory. PMID:23843659
PROTOCOL - A COMPUTERIZED SOLID WASTE QUANTITY AND COMPOSITION ESTIMATION SYSTEM: OPERATIONAL MANUAL
The assumptions of traditional sampling theory often do not fit the circumstances when estimating the quantity and composition of solid waste arriving at a given location, such as a landfill site, or at a specific point in an industrial or commercial process. The investigator oft...
IRT-Estimated Reliability for Tests Containing Mixed Item Formats
ERIC Educational Resources Information Center
Shu, Lianghua; Schwarz, Richard D.
2014-01-01
As a global measure of precision, item response theory (IRT) estimated reliability is derived for four coefficients (Cronbach's alpha, Feldt-Raju, stratified alpha, and marginal reliability). Models with different underlying assumptions concerning test-part similarity are discussed. A detailed computational example is presented for the targeted…
Monte Carlo Approach for Reliability Estimations in Generalizability Studies.
ERIC Educational Resources Information Center
Dimitrov, Dimiter M.
A Monte Carlo approach is proposed, using the Statistical Analysis System (SAS) programming language, for estimating reliability coefficients in generalizability theory studies. Test scores are generated by a probabilistic model that considers the probability for a person with a given ability score to answer an item with a given difficulty…
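A minimal Python analogue of the Monte Carlo approach described above, with NumPy standing in for the SAS implementation: persons-by-items scores are generated from assumed variance components, and a generalizability coefficient is estimated from the two-way ANOVA mean squares. Sample sizes and variance components are illustrative.

    import numpy as np

    rng = np.random.default_rng(42)
    n_p, n_i = 200, 20                       # persons, items (illustrative sizes)
    var_p, var_i, var_res = 1.0, 0.3, 1.5    # assumed variance components

    # Generate scores X_pi = person effect + item effect + residual.
    persons = rng.normal(0, np.sqrt(var_p), n_p)[:, None]
    items = rng.normal(0, np.sqrt(var_i), n_i)[None, :]
    scores = persons + items + rng.normal(0, np.sqrt(var_res), (n_p, n_i))

    # Estimate variance components from the two-way ANOVA mean squares (p x i design).
    ms_p = n_i * scores.mean(1).var(ddof=1)
    ms_i = n_p * scores.mean(0).var(ddof=1)
    ms_res = ((scores - scores.mean(1, keepdims=True)
               - scores.mean(0, keepdims=True) + scores.mean()) ** 2).sum() / ((n_p - 1) * (n_i - 1))
    sig2_res = ms_res
    sig2_p = (ms_p - ms_res) / n_i

    # Generalizability (relative) coefficient for a k-item average.
    k = n_i
    g_coef = sig2_p / (sig2_p + sig2_res / k)
    print(f"estimated G coefficient: {g_coef:.3f}")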
Estimation and Compression over Large Alphabets
ERIC Educational Resources Information Center
Acharya, Jayadev
2014-01-01
Compression, estimation, and prediction are basic problems in Information theory, statistics and machine learning. These problems have been extensively studied in all these fields, though the primary focus in a large portion of the work has been on understanding and solving the problems in the asymptotic regime, "i.e." the alphabet size…
Invariance Properties for General Diagnostic Classification Models
ERIC Educational Resources Information Center
Bradshaw, Laine P.; Madison, Matthew J.
2016-01-01
In item response theory (IRT), the invariance property states that item parameter estimates are independent of the examinee sample, and examinee ability estimates are independent of the test items. While this property has long been established and understood by the measurement community for IRT models, the same cannot be said for diagnostic…
2008-12-01
between our current project and the historical projects. Therefore to refine the historical volatility estimate of the previously completed software... historical volatility estimates obtained in the form of beliefs and plausibility based on subjective probabilities that take into consideration unique
Unidimensional Interpretations for Multidimensional Test Items
ERIC Educational Resources Information Center
Kahraman, Nilufer
2013-01-01
This article considers potential problems that can arise in estimating a unidimensional item response theory (IRT) model when some test items are multidimensional (i.e., show a complex factorial structure). More specifically, this study examines (1) the consequences of model misfit on IRT item parameter estimates due to unintended minor item-level…
Spectral Rate Theory for Two-State Kinetics
NASA Astrophysics Data System (ADS)
Prinz, Jan-Hendrik; Chodera, John D.; Noé, Frank
2014-02-01
Classical rate theories often fail in cases where the observable(s) or order parameter(s) used is a poor reaction coordinate or the observed signal is deteriorated by noise, such that no clear separation between reactants and products is possible. Here, we present a general spectral two-state rate theory for ergodic dynamical systems in thermal equilibrium that explicitly takes into account how the system is observed. The theory allows the systematic estimation errors made by standard rate theories to be understood and quantified. We also elucidate the connection of spectral rate theory with the popular Markov state modeling approach for molecular simulation studies. An optimal rate estimator is formulated that gives robust and unbiased results even for poor reaction coordinates and can be applied to both computer simulations and single-molecule experiments. No definition of a dividing surface is required. Another result of the theory is a model-free definition of the reaction coordinate quality. The reaction coordinate quality can be bounded from below by the directly computable observation quality, thus providing a measure allowing the reaction coordinate quality to be optimized by tuning the experimental setup. Additionally, the respective partial probability distributions can be obtained for the reactant and product states along the observed order parameter, even when these strongly overlap. The effects of both filtering (averaging) and uncorrelated noise are also examined. The approach is demonstrated on numerical examples and experimental single-molecule force-probe data of the p5ab RNA hairpin and the apo-myoglobin protein at low pH, focusing here on the case of two-state kinetics.
Optimal and Most Exact Confidence Intervals for Person Parameters in Item Response Theory Models
ERIC Educational Resources Information Center
Doebler, Anna; Doebler, Philipp; Holling, Heinz
2013-01-01
The common way to calculate confidence intervals for item response theory models is to assume that the standardized maximum likelihood estimator for the person parameter [theta] is normally distributed. However, this approximation is often inadequate for short and medium test lengths. As a result, the coverage probabilities fall below the given…
Estimation of Danger and Endorsement of Self-Protection Strategies: A Study of Locus of Control.
ERIC Educational Resources Information Center
Heath, Linda; And Others
Contradictory predictions concerning control over negative events arise between Walster's self-protective attribution theory, which maintains that onlookers in negative situations are apt to seek control by convincing themselves that such a situation could not happen to them, and Shaver's defensive attribution theory, which suggests that in a comparable…
ERIC Educational Resources Information Center
Kornilova, Tatiana V.; Kornilov, Sergey A.; Chumakova, Maria A.
2009-01-01
The study examined the relationship between implicit theories, goal orientations, subjective and test estimates of intelligence, academic self-concept, and achievement in a selective student population (N=300). There was no direct impact of implicit theories of intelligence and goal orientations on achievement. However, subjective evaluations of…
ERIC Educational Resources Information Center
Cahan, Sorel; Mor, Yaniv
2007-01-01
Narrow Window theory, suggested by Y. Kareev ten years ago, has so far focused on one central implication of the limited capacity of working memory on intuitive correlation estimation, namely, overestimation of the distal population correlation. This paper points to additional and perhaps more dramatic implications due to the large dispersion of…
Rasch Measurement and Item Banking: Theory and Practice.
ERIC Educational Resources Information Center
Nakamura, Yuji
The Rasch model is a one-parameter item response theory model which states that the probability of a correct response on a test item is a function of the difficulty of the item and the ability of the candidate. Item banking is useful for language testing. The Rasch model provides estimates of item difficulties that are meaningful,…
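A one-function Python sketch of the Rasch response probability referred to above; the ability and difficulty values are arbitrary logits chosen for illustration.

    import numpy as np

    def rasch_prob(theta, b):
        """Probability of a correct response under the one-parameter (Rasch) model:
        P(X = 1 | theta, b) = exp(theta - b) / (1 + exp(theta - b))."""
        return 1.0 / (1.0 + np.exp(-(theta - b)))

    # Illustrative ability and item difficulty values (logits).
    print(rasch_prob(theta=0.5, b=-0.3))   # an easier item, higher probability
    print(rasch_prob(theta=0.5, b=1.2))    # a harder item, lower probability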
ERIC Educational Resources Information Center
Meyers, Jason L.; Murphy, Stephen; Goodman, Joshua; Turhan, Ahmet
2012-01-01
Operational testing programs employing item response theory (IRT) applications benefit from the property of item parameter invariance, whereby item parameter estimates obtained from one sample can be applied to other samples (when the underlying assumptions are satisfied). In theory, this feature allows for applications such as computer-adaptive…
NASA Technical Reports Server (NTRS)
Geering, H. P.; Athans, M.
1973-01-01
A complete theory of necessary and sufficient conditions is discussed for a control to be superior with respect to a nonscalar-valued performance criterion. The latter maps into a finite dimensional, integrally closed directed, partially ordered linear space. The applicability of the theory to the analysis of dynamic vector estimation problems and to a class of uncertain optimal control problems is demonstrated.
ERIC Educational Resources Information Center
Erwin, T. Dary
Rating scales are a typical method for evaluating a student's performance in outcomes assessment. The analysis of the quality of information from rating scales poses special measurement problems when researchers work with faculty in their development. Generalizability measurement theory offers a set of techniques for estimating errors or…
Model Choice and Sample Size in Item Response Theory Analysis of Aphasia Tests
ERIC Educational Resources Information Center
Hula, William D.; Fergadiotis, Gerasimos; Martin, Nadine
2012-01-01
Purpose: The purpose of this study was to identify the most appropriate item response theory (IRT) measurement model for aphasia tests requiring 2-choice responses and to determine whether small samples are adequate for estimating such models. Method: Pyramids and Palm Trees (Howard & Patterson, 1992) test data that had been collected from…
Charles H. Luce; Daniele Tonina; Frank Gariglio; Ralph Applebee
2013-01-01
Work over the last decade has documented methods for estimating fluxes between streams and streambeds from time series of temperature at two depths in the streambed. We present substantial extension to the existing theory and practice of using temperature time series to estimate streambed water fluxes and thermal properties, including (1) a new explicit analytical...
An alternative to FASTSIM for tangential solution of the wheel-rail contact
NASA Astrophysics Data System (ADS)
Sichani, Matin Sh.; Enblom, Roger; Berg, Mats
2016-06-01
In most rail vehicle dynamics simulation packages, tangential solution of the wheel-rail contact is gained by means of Kalker's FASTSIM algorithm. While 5-25% error is expected for creep force estimation, the errors of shear stress distribution, needed for wheel-rail damage analysis, may rise above 30% due to the parabolic traction bound. Therefore, a novel algorithm named FaStrip is proposed as an alternative to FASTSIM. It is based on the strip theory which extends the two-dimensional rolling contact solution to three-dimensional contacts. To form FaStrip, the original strip theory is amended to obtain accurate estimations for any contact ellipse size and it is combined by a numerical algorithm to handle spin. The comparison between the two algorithms shows that using FaStrip improves the accuracy of the estimated shear stress distribution and the creep force estimation in all studied cases. In combined lateral creepage and spin cases, for instance, the error in force estimation reduces from 18% to less than 2%. The estimation of the slip velocities in the slip zone, needed for wear analysis, is also studied. Since FaStrip is as fast as FASTSIM, it can be an alternative for tangential solution of the wheel-rail contact in simulation packages.
Agrillo, Christian; Piffer, Laura; Adriano, Andrea
2013-07-01
A significant debate surrounds the nature of the cognitive mechanisms involved in non-symbolic number estimation. Several studies have suggested the existence of the same cognitive system for estimation of time, space, and number, called "a theory of magnitude" (ATOM). In addition, researchers have proposed the theory that non-symbolic number abilities might support our mathematical skills. Despite the large number of studies carried out, no firm conclusions can be drawn on either topic. In the present study, we correlated the performance of adults on non-symbolic magnitude estimations and symbolic numerical tasks. Non-symbolic magnitude abilities were assessed by asking participants to estimate which auditory tone lasted longer (time), which line was longer (space), and which group of dots was more numerous (number). To assess symbolic numerical abilities, participants were required to perform mental calculations and mathematical reasoning. We found a positive correlation between non-symbolic and symbolic numerical abilities. On the other hand, no correlation was found among non-symbolic estimations of time, space, and number. Our study supports the idea that mathematical abilities rely on rudimentary numerical skills that predate verbal language. By contrast, the lack of correlation among non-symbolic estimations of time, space, and number is incompatible with the idea that these magnitudes are entirely processed by the same cognitive system.
Bayesian parameter estimation for chiral effective field theory
NASA Astrophysics Data System (ADS)
Wesolowski, Sarah; Furnstahl, Richard; Phillips, Daniel; Klco, Natalie
2016-09-01
The low-energy constants (LECs) of a chiral effective field theory (EFT) interaction in the two-body sector are fit to observable data using a Bayesian parameter estimation framework. By using Bayesian prior probability distributions (pdfs), we quantify relevant physical expectations such as LEC naturalness and include them in the parameter estimation procedure. The final result is a posterior pdf for the LECs, which can be used to propagate uncertainty resulting from the fit to data to the final observable predictions. The posterior pdf also allows an empirical test of operator redundancy and other features of the potential. We compare results of our framework with other fitting procedures, interpreting the underlying assumptions in Bayesian probabilistic language. We also compare results from fitting all partial waves of the interaction simultaneously to cross section data compared to fitting to extracted phase shifts, appropriately accounting for correlations in the data. Supported in part by the NSF and DOE.
NASA Technical Reports Server (NTRS)
Jones, D. W.
1971-01-01
The navigation and guidance process for the Jupiter, Saturn and Uranus planetary encounter phases of the 1977 Grand Tour interior mission was simulated. Reference approach navigation accuracies were defined and the relative information content of the various observation types were evaluated. Reference encounter guidance requirements were defined, sensitivities to assumed simulation model parameters were determined and the adequacy of the linear estimation theory was assessed. A linear sequential estimator was used to provide an estimate of the augmented state vector, consisting of the six state variables of position and velocity plus the three components of a planet position bias. The guidance process was simulated using a nonspherical model of the execution errors. Computation algorithms which simulate the navigation and guidance process were derived from theory and implemented into two research-oriented computer programs, written in FORTRAN.
Overarching framework for data-based modelling
NASA Astrophysics Data System (ADS)
Schelter, Björn; Mader, Malenka; Mader, Wolfgang; Sommerlade, Linda; Platt, Bettina; Lai, Ying-Cheng; Grebogi, Celso; Thiel, Marco
2014-02-01
One of the main modelling paradigms for complex physical systems is networks. When estimating the network structure from measured signals, typically several assumptions such as stationarity are made in the estimation process. Violating these assumptions renders standard analysis techniques fruitless. We here propose a framework to estimate the network structure from measurements of arbitrary non-linear, non-stationary, stochastic processes. To this end, we propose a rigorous mathematical theory that underlies this framework. Based on this theory, we present a highly efficient algorithm and the corresponding statistics that are immediately sensibly applicable to measured signals. We demonstrate its performance in a simulation study. In experiments of transitions between vigilance stages in rodents, we infer small network structures with complex, time-dependent interactions; this suggests biomarkers for such transitions, the key to understanding and diagnosing numerous diseases such as dementia. We argue that the suggested framework combines features that other approaches followed so far lack.
Correlation dimension and phase space contraction via extreme value theory
NASA Astrophysics Data System (ADS)
Faranda, Davide; Vaienti, Sandro
2018-04-01
We show how to obtain theoretical and numerical estimates of correlation dimension and phase space contraction by using the extreme value theory. The maxima of suitable observables sampled along the trajectory of a chaotic dynamical system converge asymptotically to classical extreme value laws where: (i) the inverse of the scale parameter gives the correlation dimension and (ii) the extremal index is associated with the rate of phase space contraction for backward iteration, which in dimension 1 and 2, is closely related to the positive Lyapunov exponent and in higher dimensions is related to the metric entropy. We call it the Dynamical Extremal Index. Numerical estimates are straightforward to obtain as they imply just a simple fit to a univariate distribution. Numerical tests range from low dimensional maps, to generalized Henon maps and climate data. The estimates of the indicators are particularly robust even with relatively short time series.
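The following Python sketch implements the univariate fit mentioned above in its simplest form: for a point on a Hénon-map trajectory, exceedances of the observable -log(distance) over a high threshold are fitted with an exponential (the shape-zero limit of the generalized Pareto law), and the inverse scale estimates the local dimension. The threshold quantile, trajectory length and reference points are arbitrary choices.

    import numpy as np

    def local_dimension(traj, ref_index, quantile=0.98):
        """Extreme-value estimate of the local (correlation) dimension at one point
        of a trajectory: fit exceedances of g = -log(distance) over a high threshold;
        the inverse of the exponential scale parameter estimates the dimension."""
        ref = traj[ref_index]
        dist = np.linalg.norm(traj - ref, axis=1)
        dist[ref_index] = np.inf                  # exclude the point itself
        g = -np.log(dist)
        u = np.quantile(g[np.isfinite(g)], quantile)
        excess = g[g > u] - u
        scale = excess.mean()                     # MLE of the exponential scale
        return 1.0 / scale

    # Hénon map trajectory as a test case (standard parameters a=1.4, b=0.3).
    n = 50_000
    x = np.zeros((n, 2))
    for t in range(n - 1):
        x[t + 1] = [1 - 1.4 * x[t, 0] ** 2 + x[t, 1], 0.3 * x[t, 0]]

    points = range(1000, 5000, 500)
    dims = [local_dimension(x, i) for i in points]
    print("local dimension estimates:", np.round(dims, 2))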
Ridge Regression Signal Processing
NASA Technical Reports Server (NTRS)
Kuhl, Mark R.
1990-01-01
The introduction of the Global Positioning System (GPS) into the National Airspace System (NAS) necessitates the development of Receiver Autonomous Integrity Monitoring (RAIM) techniques. In order to guarantee a certain level of integrity, a thorough understanding of modern estimation techniques applied to navigational problems is required. The extended Kalman filter (EKF) is derived and analyzed under poor geometry conditions. It was found that the performance of the EKF is difficult to predict, since the EKF is designed for a Gaussian environment. A novel approach is implemented which incorporates ridge regression to explain the behavior of an EKF in the presence of dynamics under poor geometry conditions. The basic principles of ridge regression theory are presented, followed by the derivation of a linearized recursive ridge estimator. Computer simulations are performed to confirm the underlying theory and to provide a comparative analysis of the EKF and the recursive ridge estimator.
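A brief Python sketch of the ridge estimator underlying the recursive scheme described above, applied to a nearly collinear design standing in for a poor-geometry condition; the data and regularization parameter are illustrative, and the recursive/EKF machinery is not reproduced here.

    import numpy as np

    def ridge_estimate(X, y, lam):
        """Ridge (regularized least-squares) estimate: beta = (X'X + lam*I)^-1 X'y.
        The regularization keeps the normal equations well conditioned when the
        geometry (design matrix) is poor."""
        n_params = X.shape[1]
        return np.linalg.solve(X.T @ X + lam * np.eye(n_params), X.T @ y)

    # Nearly collinear design, mimicking a poor-geometry positioning problem.
    rng = np.random.default_rng(3)
    X = rng.standard_normal((100, 2))
    X[:, 1] = X[:, 0] + 1e-3 * rng.standard_normal(100)
    y = X @ np.array([1.0, 2.0]) + 0.1 * rng.standard_normal(100)
    print("ordinary LS:   ", np.linalg.lstsq(X, y, rcond=None)[0])
    print("ridge (lam=0.1):", ridge_estimate(X, y, 0.1))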
Neural field theory of perceptual echo and implications for estimating brain connectivity
NASA Astrophysics Data System (ADS)
Robinson, P. A.; Pagès, J. C.; Gabay, N. C.; Babaie, T.; Mukta, K. N.
2018-04-01
Neural field theory is used to predict and analyze the phenomenon of perceptual echo in which random input stimuli at one location are correlated with electroencephalographic responses at other locations. It is shown that this echo correlation (EC) yields an estimate of the transfer function from the stimulated point to other locations. Modal analysis then explains the observed spatiotemporal structure of visually driven EC and the dominance of the alpha frequency; two eigenmodes of similar amplitude dominate the response, leading to temporal beating and a line of low correlation that runs from the crown of the head toward the ears. These effects result from mode splitting and symmetry breaking caused by interhemispheric coupling and cortical folding. It is shown how eigenmodes obtained from functional magnetic resonance imaging experiments can be combined with temporal dynamics from EC or other evoked responses to estimate the spatiotemporal transfer function between any two points and hence their effective connectivity.
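A toy Python sketch of the echo-correlation idea described above, with a damped 10 Hz resonance standing in for alpha-band cortical dynamics: for a broadband random stimulus, the cross-spectrum divided by the input spectrum estimates the transfer function from the stimulated point to the recording site. The sampling rate, filter parameters and noise level are invented.

    import numpy as np
    from scipy import signal

    rng = np.random.default_rng(7)
    fs, n = 250.0, 60_000                      # sampling rate (Hz) and samples
    stimulus = rng.standard_normal(n)          # broadband random input

    # Toy "cortical" response: a damped 10 Hz resonance driven by the stimulus.
    b, a = signal.iirpeak(10.0, Q=5.0, fs=fs)
    response = signal.lfilter(b, a, stimulus) + 0.5 * rng.standard_normal(n)

    # Transfer-function estimate H(f) = S_xy(f) / S_xx(f); for white input this is
    # proportional to the stimulus-response cross-correlation (the "echo").
    f, s_xy = signal.csd(stimulus, response, fs=fs, nperseg=1024)
    _, s_xx = signal.welch(stimulus, fs=fs, nperseg=1024)
    H = s_xy / s_xx
    peak = f[np.argmax(np.abs(H))]
    print(f"estimated resonance near {peak:.1f} Hz")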
Connectivity modeling and graph theory analysis predict recolonization in transient populations
NASA Astrophysics Data System (ADS)
Rognstad, Rhiannon L.; Wethey, David S.; Oliver, Hilde; Hilbish, Thomas J.
2018-07-01
Population connectivity plays a major role in the ecology and evolution of marine organisms. In these systems, connectivity of many species occurs primarily during a larval stage, when larvae are frequently too small and numerous to track directly. To indirectly estimate larval dispersal, ocean circulation models have emerged as a popular technique. Here we use regional ocean circulation models to estimate dispersal of the intertidal barnacle Semibalanus balanoides at its local distribution limit in Southwest England. We incorporate historical and recent repatriation events to provide support for our modeled dispersal estimates, which predict a recolonization rate similar to that observed in two recolonization events. Using graph theory techniques to describe the dispersal landscape, we identify likely physical barriers to dispersal in the region. Our results demonstrate the use of recolonization data to support dispersal models and how these models can be used to describe population connectivity.
Higgsing the stringy higher spin symmetry
Gaberdiel, Matthias R.; Peng, Cheng; Zadeh, Ida G.
2015-10-01
It has recently been argued that the symmetric orbifold theory of T4 is dual to string theory on AdS3 × S3 × T4 at the tensionless point. At this point in moduli space, the theory possesses a very large symmetry algebra that includes, in particular, a W∞ algebra capturing the gauge fields of a dual higher spin theory. Using conformal perturbation theory, we study the behaviour of the symmetry generators of the symmetric orbifold theory under the deformation that corresponds to switching on the string tension. We show that the generators fall nicely into Regge trajectories, with the higher spin fields corresponding to the leading Regge trajectory. We also estimate the form of the Regge trajectories for large spin, and find evidence for the familiar logarithmic behaviour, thereby suggesting that the symmetric orbifold theory is dual to an AdS background with pure RR flux.
Deterministic theory of Monte Carlo variance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ueki, T.; Larsen, E.W.
1996-12-31
The theoretical estimation of variance in Monte Carlo transport simulations, particularly those using variance reduction techniques, is a substantially unsolved problem. In this paper, the authors describe a theory that predicts the variance in a variance reduction method proposed by Dwivedi. Dwivedi's method combines the exponential transform with angular biasing. The key element of this theory is a new modified transport problem, containing the Monte Carlo weight w as an extra independent variable, which simulates Dwivedi's Monte Carlo scheme. The (deterministic) solution of this modified transport problem yields an expression for the variance. The authors give computational results that validate this theory.
Acoustic waveform logging--Advances in theory and application
Paillet, F.L.; Cheng, C.H.; Pennington , W.D.
1992-01-01
Full-waveform acoustic logging has made significant advances in both theory and application in recent years, and these advances have greatly increased the capability of log analysts to measure the physical properties of formations. Advances in theory provide the analytical tools required to understand the properties of measured seismic waves, and to relate those properties to such quantities as shear and compressional velocity and attenuation, and primary and fracture porosity and permeability of potential reservoir rocks. The theory demonstrates that all parts of recorded waveforms are related to various modes of propagation, even in the case of dipole and quadrupole source logging. However, the theory also indicates that these mode properties can be used to design velocity and attenuation picking schemes, and shows how source frequency spectra can be selected to optimize results in specific applications. Synthetic microseismogram computations are an effective tool in waveform interpretation theory; they demonstrate how shear arrival picks and mode attenuation can be used to compute shear velocity and intrinsic attenuation, and formation permeability for monopole, dipole and quadrupole sources. Array processing of multi-receiver data offers the opportunity to apply even more sophisticated analysis techniques. Synthetic microseismogram data is used to illustrate the application of the maximum-likelihood method, semblance cross-correlation, and Prony's method analysis techniques to determine seismic velocities and attenuations. The interpretation of acoustic waveform logs is illustrated by reviews of various practical applications, including synthetic seismogram generation, lithology determination, estimation of geomechanical properties in situ, permeability estimation, and design of hydraulic fracture operations.
Mökkönen, Harri; Ala-Nissila, Tapio; Jónsson, Hannes
2016-09-07
The recrossing correction to the transition state theory estimate of a thermal rate can be difficult to calculate when the energy barrier is flat. This problem arises, for example, in polymer escape if the polymer is long enough to stretch between the initial and final state energy wells while the polymer beads undergo diffusive motion back and forth over the barrier. We present an efficient method for evaluating the correction factor by constructing a sequence of hyperplanes starting at the transition state and calculating the probability that the system advances from one hyperplane to another towards the product. This is analogous to what is done in forward flux sampling except that there the hyperplane sequence starts at the initial state. The method is applied to the escape of polymers with up to 64 beads from a potential well. For high temperature, the results are compared with direct Langevin dynamics simulations as well as forward flux sampling and excellent agreement between the three rate estimates is found. The use of a sequence of hyperplanes in the evaluation of the recrossing correction speeds up the calculation by an order of magnitude as compared with the traditional approach. As the temperature is lowered, the direct Langevin dynamics simulations as well as the forward flux simulations become computationally too demanding, while the harmonic transition state theory estimate corrected for recrossings can be calculated without significant increase in the computational effort.
Error estimates for (semi-)empirical dispersion terms and large biomacromolecules.
Korth, Martin
2013-10-14
The first-principles modeling of biomaterials has made tremendous advances over the last few years with the ongoing growth of computing power and impressive developments in the application of density functional theory (DFT) codes to large systems. One important step forward was the development of dispersion corrections for DFT methods, which account for the otherwise neglected dispersive van der Waals (vdW) interactions. Approaches at different levels of theory exist, with the most often used (semi-)empirical ones based on pair-wise interatomic C6·R^(-6) terms. Similar terms are now also used in connection with semiempirical QM (SQM) methods and density functional tight binding methods (SCC-DFTB). Their basic structure equals the attractive term in Lennard-Jones potentials, common to most force field approaches, but they usually use some type of cutoff function to make the mixing of the (long-range) dispersion term with the already existing (short-range) dispersion and exchange-repulsion effects from the electronic structure theory methods possible. All these dispersion approximations were found to perform accurately for smaller systems, but error estimates for larger systems are very rare and completely missing for really large biomolecules. We derive such estimates for the dispersion terms of DFT, SQM and MM methods using error statistics for smaller systems and dispersion contribution estimates for the PDBbind database of protein-ligand interactions. We find that dispersion terms will usually not be a limiting factor for reaching chemical accuracy, though some force fields and large ligand sizes are problematic.
Model error estimation for distributed systems described by elliptic equations
NASA Technical Reports Server (NTRS)
Rodriguez, G.
1983-01-01
A function space approach is used to develop a theory for estimation of the errors inherent in an elliptic partial differential equation model for a distributed parameter system. By establishing knowledge of the inevitable deficiencies in the model, the error estimates provide a foundation for updating the model. The function space solution leads to a specification of a method for computation of the model error estimates and development of model error analysis techniques for comparison between actual and estimated errors. The paper summarizes the model error estimation approach as well as an application arising in the area of modeling for static shape determination of large flexible systems.
Optimal estimation of the optomechanical coupling strength
NASA Astrophysics Data System (ADS)
Bernád, József Zsolt; Sanavio, Claudio; Xuereb, André
2018-06-01
We apply the formalism of quantum estimation theory to obtain information about the value of the nonlinear optomechanical coupling strength. In particular, we discuss the minimum mean-square error estimator and a quantum Cramér-Rao-type inequality for the estimation of the coupling strength. Our estimation strategy reveals some cases where quantum statistical inference is inconclusive and merely results in the reinforcement of prior expectations. We show that these situations also involve the highest expected information losses. We demonstrate that interaction times on the order of one time period of mechanical oscillations are the most suitable for our estimation scenario, and compare situations involving different photon and phonon excitations.
Thermal radiative properties: Nonmetallic solids.
NASA Technical Reports Server (NTRS)
Touloukian, Y. S.; Dewitt, D. P.
1972-01-01
The volume consists of a text on theory, estimation, and measurement, together with its bibliography, the main body of numerical data and its references, and the material index. The text material assumes a role complementary to the main body of numerical data. The physics and basic concepts of thermal radiation are discussed in detail, focusing attention on treatment of nonmetallic materials: theory, estimation, and methods of measurement. Numerical data is presented in a comprehensive manner. The scope of coverage includes the nonmetallic elements and their compounds, intermetallics, polymers, glasses, and minerals. Analyzed data graphs provide an evaluative review of the data. All data have been obtained from their original sources, and each data set is so referenced.
Commentary on Holmes et al. (2007): resolving the debate on when extinction risk is predictable.
Ellner, Stephen P; Holmes, Elizabeth E
2008-08-01
We reconcile the findings of Holmes et al. (Ecology Letters, 10, 2007, 1182) that 95% confidence intervals for quasi-extinction risk were narrow for many vertebrates of conservation concern, with previous theory predicting wide confidence intervals. We extend previous theory, concerning the precision of quasi-extinction estimates as a function of population dynamic parameters, prediction intervals and quasi-extinction thresholds, and provide an approximation that specifies the prediction interval and threshold combinations where quasi-extinction estimates are precise (vs. imprecise). This allows PVA practitioners to define the prediction interval and threshold regions of safety (low risk with high confidence), danger (high risk with high confidence), and uncertainty.
Construction Theory and Noise Analysis Method of Global CGCS2000 Coordinate Frame
NASA Astrophysics Data System (ADS)
Jiang, Z.; Wang, F.; Bai, J.; Li, Z.
2018-04-01
The definition, renewal, and maintenance of geodetic datums have long been issues of international concern. In recent years, many countries have been studying and implementing the modernization and renewal of their local geodetic reference coordinate frames. Based on precise results from the last 15 years of continuous observation by the state CORS (continuously operating reference system) network and from the mainland GNSS (Global Navigation Satellite System) network between 1999 and 2007, this paper studies the construction of a mathematical model of the Global CGCS2000 frame and analyzes the theory and algorithm of the two-step method for Global CGCS2000 Coordinate Frame formulation. Finally, the noise characteristics of the coordinate time series are estimated quantitatively using the criterion of maximum likelihood estimation.
Attitude determination and parameter estimation using vector observations - Theory
NASA Technical Reports Server (NTRS)
Markley, F. Landis
1989-01-01
Procedures for attitude determination based on Wahba's loss function are generalized to include the estimation of parameters other than the attitude, such as sensor biases. Optimization with respect to the attitude is carried out using the q-method, which does not require an a priori estimate of the attitude. Optimization with respect to the other parameters employs an iterative approach, which does require an a priori estimate of these parameters. Conventional state estimation methods require a priori estimates of both the parameters and the attitude, while the algorithm presented in this paper always computes the exact optimal attitude for given values of the parameters. Expressions for the covariance of the attitude and parameter estimates are derived.
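For readers unfamiliar with the q-method step mentioned above, the following is a minimal numerical sketch of Davenport's q-method for Wahba's problem, i.e. the attitude-only part of the algorithm; the iterative parameter estimation and covariance expressions of the paper are not shown, and the function name and inputs are our own illustration.

```python
import numpy as np

def q_method(b_vecs, r_vecs, weights):
    """Davenport q-method: quaternion minimizing Wahba's weighted loss.

    b_vecs, r_vecs: (N, 3) unit observation vectors in body and reference frames.
    weights: N non-negative weights. Returns a scalar-last quaternion given by
    the eigenvector of Davenport's K matrix with the largest eigenvalue.
    """
    B = sum(a * np.outer(b, r) for a, b, r in zip(weights, b_vecs, r_vecs))
    S = B + B.T
    sigma = np.trace(B)
    z = sum(a * np.cross(b, r) for a, b, r in zip(weights, b_vecs, r_vecs))
    K = np.zeros((4, 4))
    K[:3, :3] = S - sigma * np.eye(3)
    K[:3, 3] = z
    K[3, :3] = z
    K[3, 3] = sigma
    eigvals, eigvecs = np.linalg.eigh(K)      # symmetric eigenproblem
    return eigvecs[:, np.argmax(eigvals)]     # optimal quaternion (up to sign)

# Illustrative use: two slightly noisy observations of known reference directions.
rng = np.random.default_rng(0)
r = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
b = r + 0.01 * rng.normal(size=r.shape)
b /= np.linalg.norm(b, axis=1, keepdims=True)
print(q_method(b, r, [1.0, 1.0]))   # near-identity attitude: quaternion close to [0, 0, 0, 1]
```

Note that no a priori attitude estimate is needed, which is the property emphasized in the abstract.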
Practical Issues in Estimating Classification Accuracy and Consistency with R Package cacIRT
ERIC Educational Resources Information Center
Lathrop, Quinn N.
2015-01-01
There are two main lines of research in estimating classification accuracy (CA) and classification consistency (CC) under Item Response Theory (IRT). The R package cacIRT provides computer implementations of both approaches in an accessible and unified framework. Even with available implementations, there remain decisions a researcher faces when…
Influences on and Limitations of Classical Test Theory Reliability Estimates.
ERIC Educational Resources Information Center
Arnold, Margery E.
It is incorrect to say "the test is reliable" because reliability is a function not only of the test itself, but of many factors. The present paper explains how different factors affect classical reliability estimates such as test-retest, interrater, internal consistency, and equivalent forms coefficients. Furthermore, the limits of classical test…
Sixth Annual Flight Mechanics/Estimation Theory Symposium
NASA Technical Reports Server (NTRS)
Lefferts, E. (Editor)
1981-01-01
Methods of orbital position estimation were reviewed. The problem of accuracy in orbital mechanics is discussed and various techniques in current use are presented along with suggested improvements. Of special interest is the compensation for bias in satelliteborne instruments due to attitude instabilities. Image processing and correctional techniques are reported for geodetic measurements and mapping.
ERIC Educational Resources Information Center
Sinharay, Sandip
2015-01-01
The maximum likelihood estimate (MLE) of the ability parameter of an item response theory model with known item parameters was proved to be asymptotically normally distributed under a set of regularity conditions for tests involving dichotomous items and a unidimensional ability parameter (Klauer, 1990; Lord, 1983). This article first considers…
ASCAL: A Microcomputer Program for Estimating Logistic IRT Item Parameters.
ERIC Educational Resources Information Center
Vale, C. David; Gialluca, Kathleen A.
ASCAL is a microcomputer-based program for calibrating items according to the three-parameter logistic model of item response theory. It uses a modified multivariate Newton-Raphson procedure for estimating item parameters. This study evaluated this procedure using Monte Carlo Simulation Techniques. The current version of ASCAL was then compared to…
The assumptions of traditional sampling theory often do not fit the circumstances when estimating the quantity and composition of solid waste arriving at a given location, such as a landfill site, or at a specific point in an industrial or commercial process. The investigator oft...
Why Women Earn Less: The Theory and Estimation of Differential Overqualification
ERIC Educational Resources Information Center
Frank, Robert H.
1978-01-01
A supply mechanism is described whereby nondiscriminating employers are expected to pay lower wages to females than to equally qualified males. Procedures are proposed to estimate the portion of the unexplained male-female wage differential that arises because of family locational considerations. Single copies available from the Secretary, C.…
Application of Consider Covariance to the Extended Kalman Filter
NASA Technical Reports Server (NTRS)
Lundberg, John B.
1996-01-01
The extended Kalman filter (EKF) is the basis for many applications of filtering theory to real-time problems where estimates of the state of a dynamical system are to be computed based upon some set of observations. The form of the EKF may vary somewhat from one application to another, but the fundamental principles are typically unchanged among these various applications. As is the case in many filtering applications, models of the dynamical system (differential equations describing the state variables) and models of the relationship between the observations and the state variables are created. These models typically employ a set of constants whose values are established by means of theory or experimental procedure. Since the estimates of the state are formed assuming that the models are perfect, any modeling errors will affect the accuracy of the computed estimates. Note that the modeling errors may be errors of commission (errors in terms included in the model) or omission (errors in terms excluded from the model). Consequently, it becomes imperative when evaluating the performance of real-time filters to evaluate the effect of modeling errors on the estimates of the state.
The role of experience in location estimation: Target distributions shift location memory biases.
Lipinski, John; Simmering, Vanessa R; Johnson, Jeffrey S; Spencer, John P
2010-04-01
Research based on the Category Adjustment model concluded that the spatial distribution of target locations does not influence location estimation responses [Huttenlocher, J., Hedges, L., Corrigan, B., & Crawford, L. E. (2004). Spatial categories and the estimation of location. Cognition, 93, 75-97]. This conflicts with earlier results showing that location estimation is biased relative to the spatial distribution of targets [Spencer, J. P., & Hund, A. M. (2002). Prototypes and particulars: Geometric and experience-dependent spatial categories. Journal of Experimental Psychology: General, 131, 16-37]. Here, we resolve this controversy by using a task based on Huttenlocher et al. (Experiment 4) with minor modifications to enhance our ability to detect experience-dependent effects. Results after the first block of trials replicate the pattern reported in Huttenlocher et al. After additional experience, however, participants showed biases that significantly shifted according to the target distributions. These results are consistent with the Dynamic Field Theory, an alternative theory of spatial cognition that integrates long-term memory traces across trials relative to the perceived structure of the task space. Copyright 2009 Elsevier B.V. All rights reserved.
Using optimal transport theory to estimate transition probabilities in metapopulation dynamics
Nichols, Jonathan M.; Spendelow, Jeffrey A.; Nichols, James D.
2017-01-01
This work considers the estimation of transition probabilities associated with populations moving among multiple spatial locations based on numbers of individuals at each location at two points in time. The problem is generally underdetermined as there exists an extremely large number of ways in which individuals can move from one set of locations to another. A unique solution therefore requires a constraint. The theory of optimal transport provides such a constraint in the form of a cost function, to be minimized in expectation over the space of possible transition matrices. We demonstrate the optimal transport approach on marked bird data and compare to the probabilities obtained via maximum likelihood estimation based on marked individuals. It is shown that by choosing the squared Euclidean distance as the cost, the estimated transition probabilities compare favorably to those obtained via maximum likelihood with marked individuals. Other implications of this cost are discussed, including the ability to accurately interpolate the population's spatial distribution at unobserved points in time and the more general relationship between the cost and minimum transport energy.
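As an illustration of the constrained-flow idea described above, here is a small Python sketch (our own, not the authors' code) that recovers a transition matrix by linear programming with a squared-Euclidean cost; the site coordinates, counts, and the assumption of a closed population with equal totals at both times are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

def ot_transition_probs(counts_t1, counts_t2, coords):
    """Estimate a row-stochastic transition matrix between K sites by minimizing
    the total squared-distance transport cost subject to the observed counts."""
    counts_t1 = np.asarray(counts_t1, dtype=float)
    counts_t2 = np.asarray(counts_t2, dtype=float)
    K = len(counts_t1)
    diff = coords[:, None, :] - coords[None, :, :]
    cost = (diff ** 2).sum(axis=-1).ravel()          # c[i*K + j] = |x_i - x_j|^2
    A_eq, b_eq = [], []
    for i in range(K):                               # flow out of site i
        row = np.zeros(K * K); row[i * K:(i + 1) * K] = 1.0
        A_eq.append(row); b_eq.append(counts_t1[i])
    for j in range(K):                               # flow into site j
        col = np.zeros(K * K); col[j::K] = 1.0
        A_eq.append(col); b_eq.append(counts_t2[j])
    res = linprog(cost, A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=(0, None), method="highs")
    flow = res.x.reshape(K, K)
    return flow / counts_t1[:, None]                 # divide each row by its count

# Hypothetical example: 3 sites, 300 marked-down individuals at both times.
coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
print(ot_transition_probs([100, 150, 50], [120, 100, 80], coords))
```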
Nonlinear estimation theory applied to orbit determination
NASA Technical Reports Server (NTRS)
Choe, C. Y.
1972-01-01
The development of an approximate nonlinear filter using the Martingale theory and appropriate smoothing properties is considered. Both the first order and the second order moments were estimated. The filter developed can be classified as a modified Gaussian second order filter. Its performance was evaluated in a simulated study of the problem of estimating the state of an interplanetary space vehicle during both a simulated Jupiter flyby and a simulated Jupiter orbiter mission. In addition to the modified Gaussian second order filter, the modified truncated second order filter was also evaluated in the simulated study. Results obtained with each of these filters were compared with numerical results obtained with the extended Kalman filter and the performance of each filter is determined by comparison with the actual estimation errors. The simulations were designed to determine the effects of the second order terms in the dynamic state relations, the observation state relations, and the Kalman gain compensation term. It is shown that the Kalman gain-compensated filter which includes only the Kalman gain compensation term is superior to all of the other filters.
The Effects of Test Length and Sample Size on Item Parameters in Item Response Theory
ERIC Educational Resources Information Center
Sahin, Alper; Anil, Duygu
2017-01-01
This study investigates the effects of sample size and test length on item-parameter estimation in test development utilizing three unidimensional dichotomous models of item response theory (IRT). For this purpose, a real language test comprised of 50 items was administered to 6,288 students. Data from this test was used to obtain data sets of…
ERIC Educational Resources Information Center
Arce-Ferrer, Alvaro J.; Bulut, Okan
2017-01-01
This study examines separate and concurrent approaches to combine the detection of item parameter drift (IPD) and the estimation of scale transformation coefficients in the context of the common item nonequivalent groups design with the three-parameter item response theory equating. The study uses real and synthetic data sets to compare the two…
Introduction to Fuzzy Set Theory
NASA Technical Reports Server (NTRS)
Kosko, Bart
1990-01-01
An introduction to fuzzy set theory is described. Topics covered include: neural networks and fuzzy systems; the dynamical systems approach to machine intelligence; intelligent behavior as adaptive model-free estimation; fuzziness versus probability; fuzzy sets; the entropy-subsethood theorem; adaptive fuzzy systems for backing up a truck-and-trailer; product-space clustering with differential competitive learning; and adaptive fuzzy system for target tracking.
ERIC Educational Resources Information Center
Blodgett, Cynthia S.
2008-01-01
The purpose of this grounded theory study was to examine the process by which people with Mild Traumatic Brain Injury (MTBI) access information on the web. Recent estimates include amateur sports and recreation injuries, non-hospital clinics and treatment facilities, private and public emergency department visits and admissions, providing…
An analysis of the radiation field beneath a bank of tubular quartz lamps
NASA Technical Reports Server (NTRS)
Ash, Robert L.
1972-01-01
Equations governing the incident heat flux distribution beneath a lamp-reflector system were developed. Analysis of a particular radiant heating facility showed good agreement between theory and experiment when a lamp power loss correction was used. In addition, the theory was employed to estimate thermal disruption in the radiation field caused by a protruding probe.
Using Combinatorica/Mathematica for Student Projects in Random Graph Theory
ERIC Educational Resources Information Center
Pfaff, Thomas J.; Zaret, Michele
2006-01-01
We give an example of a student project that experimentally explores a topic in random graph theory. We use the "Combinatorica" package in "Mathematica" to estimate the minimum number of edges needed in a random graph to have a 50 percent chance that the graph is connected. We provide the "Mathematica" code and compare it to the known theoretical…
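The same classroom experiment can also be sketched outside Mathematica; the following Python/networkx version (ours, not the authors' Combinatorica code) estimates the smallest number of edges giving a 50 percent chance of connectivity and compares it with the familiar (n/2)·ln(n) asymptotic threshold.

```python
import math
import networkx as nx

def connect_probability(n, m, trials=500):
    """Monte Carlo estimate of the probability that G(n, m) is connected."""
    hits = sum(nx.is_connected(nx.gnm_random_graph(n, m)) for _ in range(trials))
    return hits / trials

def min_edges_for_half(n, trials=500):
    """Smallest edge count m whose estimated connection probability reaches 0.5."""
    m = n - 1                       # with fewer than n-1 edges the graph cannot be connected
    while connect_probability(n, m, trials) < 0.5:
        m += 1
    return m

n = 30
print(min_edges_for_half(n), 0.5 * n * math.log(n))   # estimate vs. asymptotic value
```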
Tyson L. Swetnam; Christopher D. O' Connor; Ann M. Lynch
2016-01-01
A significant concern about Metabolic Scaling Theory (MST) in real forests relates to consistent differences between the values of power law scaling exponents of tree primary size measures used to estimate mass and those predicted by MST. Here we consider why observed scaling exponents for diameter and height relationships deviate from MST predictions across...
ERIC Educational Resources Information Center
Gu, Fei; Skorupski, William P.; Hoyle, Larry; Kingston, Neal M.
2011-01-01
Ramsay-curve item response theory (RC-IRT) is a nonparametric procedure that estimates the latent trait using splines, and no distributional assumption about the latent trait is required. For item parameters of the two-parameter logistic (2-PL), three-parameter logistic (3-PL), and polytomous IRT models, RC-IRT can provide more accurate estimates…
ERIC Educational Resources Information Center
Abry, Tashia; Cash, Anne H.; Bradshaw, Catherine P.
2014-01-01
Generalizability theory (GT) offers a useful framework for estimating the reliability of a measure while accounting for multiple sources of error variance. The purpose of this study was to use GT to examine multiple sources of variance in and the reliability of school-level teacher and high school student behaviors as observed using the tool,…
ERIC Educational Resources Information Center
Tian, Wei; Cai, Li; Thissen, David; Xin, Tao
2013-01-01
In item response theory (IRT) modeling, the item parameter error covariance matrix plays a critical role in statistical inference procedures. When item parameters are estimated using the EM algorithm, the parameter error covariance matrix is not an automatic by-product of item calibration. Cai proposed the use of Supplemented EM algorithm for…
REVIEWS OF TOPICAL PROBLEMS: Elementary particles and cosmology (Metagalaxy and Universe)
NASA Astrophysics Data System (ADS)
Rozental', I. L.
1997-08-01
The close relation between cosmology and the theory of elementary particles is analyzed in the light of prospects of a unified field theory. The unity of their respective problems and solution methodologies is indicated. The difference between the concepts of 'Metagalaxy' and 'Universe' is emphasized and some possible schemes for estimating the size of the Universe are pointed out.
Estimating the Entropy of Binary Time Series: Methodology, Some Theory and a Simulation Study
NASA Astrophysics Data System (ADS)
Gao, Yun; Kontoyiannis, Ioannis; Bienenstock, Elie
2008-06-01
Partly motivated by entropy-estimation problems in neuroscience, we present a detailed and extensive comparison between some of the most popular and effective entropy estimation methods used in practice: The plug-in method, four different estimators based on the Lempel-Ziv (LZ) family of data compression algorithms, an estimator based on the Context-Tree Weighting (CTW) method, and the renewal entropy estimator. METHODOLOGY: Three new entropy estimators are introduced; two new LZ-based estimators, and the “renewal entropy estimator,” which is tailored to data generated by a binary renewal process. For two of the four LZ-based estimators, a bootstrap procedure is described for evaluating their standard error, and a practical rule of thumb is heuristically derived for selecting the values of their parameters in practice. THEORY: We prove that, unlike their earlier versions, the two new LZ-based estimators are universally consistent, that is, they converge to the entropy rate for every finite-valued, stationary and ergodic process. An effective method is derived for the accurate approximation of the entropy rate of a finite-state hidden Markov model (HMM) with known distribution. Heuristic calculations are presented and approximate formulas are derived for evaluating the bias and the standard error of each estimator. SIMULATION: All estimators are applied to a wide range of data generated by numerous different processes with varying degrees of dependence and memory. The main conclusions drawn from these experiments include: (i) For all estimators considered, the main source of error is the bias. (ii) The CTW method is repeatedly and consistently seen to provide the most accurate results. (iii) The performance of the LZ-based estimators is often comparable to that of the plug-in method. (iv) The main drawback of the plug-in method is its computational inefficiency; with small word-lengths it fails to detect longer-range structure in the data, and with longer word-lengths the empirical distribution is severely undersampled, leading to large biases.
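To make the plug-in method concrete, here is a short Python sketch (our own, with an arbitrary word length) of the empirical block-entropy estimate for a binary sequence; the undersampling behavior it exhibits for long words is exactly the drawback noted in the abstract.

```python
import numpy as np
from collections import Counter

def plugin_entropy_rate(bits, word_len):
    """Plug-in estimate of the entropy rate (bits/symbol): empirical entropy of
    overlapping words of length word_len, divided by word_len."""
    words = [tuple(bits[i:i + word_len]) for i in range(len(bits) - word_len + 1)]
    counts = np.array(list(Counter(words).values()), dtype=float)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum() / word_len

# i.i.d. Bernoulli(0.3) bits; the true entropy rate is about 0.881 bits per symbol.
rng = np.random.default_rng(0)
bits = (rng.random(100_000) < 0.3).astype(int)
print(plugin_entropy_rate(bits, word_len=10))
```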
Le Bihan, Nicolas; Margerin, Ludovic
2009-07-01
In this paper, we present a nonparametric method to estimate the heterogeneity of a random medium from the angular distribution of intensity of waves transmitted through a slab of random material. Our approach is based on the modeling of forward multiple scattering using compound Poisson processes on compact Lie groups. The estimation technique is validated through numerical simulations based on radiative transfer theory.
Estimation of wing nonlinear aerodynamic characteristics at supersonic speeds
NASA Technical Reports Server (NTRS)
Carlson, H. W.; Mack, R. J.
1980-01-01
A computational system for estimation of nonlinear aerodynamic characteristics of wings at supersonic speeds was developed and was incorporated in a computer program. This corrected linearized theory method accounts for nonlinearities in the variation of basic pressure loadings with local surface slopes, predicts the degree of attainment of theoretical leading edge thrust, and provides an estimate of detached leading edge vortex loadings that result when the theoretical thrust forces are not fully realized.
Urban air quality estimation study, phase 1
NASA Technical Reports Server (NTRS)
Diamante, J. M.; Englar, T. S., Jr.; Jazwinski, A. H.
1976-01-01
Possibilities are explored for applying estimation theory to the analysis, interpretation, and use of air quality measurements in conjunction with simulation models to provide a cost effective method of obtaining reliable air quality estimates for wide urban areas. The physical phenomenology of real atmospheric plumes from elevated localized sources is discussed. A fluctuating plume dispersion model is derived. Individual plume parameter formulations are developed along with associated a priori information. Individual measurement models are developed.
[Research & development on computer expert system for forensic bones estimation].
Zhao, Jun-ji; Zhang, Jan-zheng; Liu, Nin-guo
2005-08-01
The objective was to build an expert system for forensic bone estimation. Using an object-oriented method, statistical data from forensic anthropology, a frame-based knowledge representation combined with production rules, fuzzy matching, and the Dempster-Shafer (DS) evidence theory method, software for forensic estimation of sex, age and height with an open knowledge base was designed. The system is reliable and effective, and it would be a good assistant for the forensic technician.
New Theory for Tsunami Propagation and Estimation of Tsunami Source Parameters
NASA Astrophysics Data System (ADS)
Mindlin, I. M.
2007-12-01
In numerical studies based on the shallow water equations for tsunami propagation, vertical accelerations and velocities within the sea water are neglected, so a tsunami is usually supposed to be produced by an initial free surface displacement in the initially still sea. In the present work, a new theory for tsunami propagation across the deep sea is discussed that accounts for the vertical accelerations and velocities. The theory is based on the solutions for the water surface displacement obtained in [Mindlin I.M. Integrodifferential equations in dynamics of a heavy layered liquid. Moscow: Nauka*Fizmatlit, 1996 (Russian)]. The solutions are valid when the horizontal dimensions of the initially disturbed area in the sea surface are much larger than the vertical displacement of the surface, which applies to earthquake tsunamis. It is shown that any tsunami is a combination of specific basic waves found analytically (not a superposition: the waves are nonlinear), and consequently, the tsunami source (i.e., the initially disturbed body of water) can be described by the numerable set of parameters involved in the combination. Thus the problem of theoretical reconstruction of a tsunami source is reduced to the problem of estimation of the parameters. The tsunami source can be modelled approximately with the use of a finite number of the parameters. A two-parameter model is discussed thoroughly. A method is developed for estimation of the model's parameters using the arrival times of the tsunami at certain locations, the maximum wave-heights obtained from tide gauge records at the locations, and the distances between the earthquake's epicentre and each of the locations. In order to evaluate the practical use of the theory, four tsunamis of different magnitude that occurred in Japan are considered. For each of the tsunamis, the tsunami energy (E below), the duration of the tsunami source formation T, the maximum water elevation in the wave originating area H, the mean radius of the area R, and the average magnitude of the sea surface displacement at the margin of the wave originating area h are estimated using tide gauge records. The results are compared (and, in the author's opinion, are in line) with the estimates known in the literature. Compared to the methods employed in the literature, there is no need to use bathymetry (and, consequently, refraction diagrams) for the estimations. The present paper follows closely earlier works [Mindlin I.M., 1996; Mindlin I.M. J. Appl. Math. Phys. (ZAMP), 2004, vol. 55, pp. 781-799] and adds to their theoretical results. Example. The Hiuganada earthquake of 1968, April 1, 9h 42m JST. A tsunami of moderate size arrived at the coast of the south-western part of Shikoku and the eastern part of Kyushu, Japan. The tsunami parameters listed above are estimated with the theory being discussed for two models of tsunami generation: (a) by an initial free surface displacement (the case for numerical studies): E=1.91·10^12 J, R=22 km, h=17.2 cm; and (b) by a sudden change in the velocity field of initially still water: E=8.78·10^12 J, R=20.4 km, h=9.2 cm. These values are in line with known estimates [Soloviev S.L., Go Ch.N. Catalogue of tsunami in the West of Pacific Ocean. Moscow, 1974]: E=1.3·10^13 J (attributed to Hatori), E=(1.4 - 2.2)·10^12 J (attributed to Aida), R=21.2 km, h=20 cm [Hatori T., Bull. Earthq. Res. Inst., Tokyo Univ., 1969, vol. 47, pp. 55-63].
Also, estimates are obtained for the values that could not be found based on shallow water wave theory: (a) H=3.43m and (b) H=1.38m, T=16.4sec.
Limitations of Lifting-Line Theory for Estimation of Aileron Hinge-Moment Characteristics
NASA Technical Reports Server (NTRS)
Swanson, Robert S.; Gillis, Clarence L.
1943-01-01
Hinge-moment parameters for several typical ailerons were calculated from section data with the aspect-ratio correction as usually determined from lifting-line theory. The calculations showed that the agreement between experimental and calculated results was unsatisfactory. An additional aspect-ratio correction, calculated by the method of lifting-surface theory, was applied to the slope of the curve of hinge-moment coefficient against angle of attack at small angles of attack. This so-called streamline-curvature correction brought the calculated and experimental results into satisfactory agreement.
PROC IRT: A SAS Procedure for Item Response Theory
Matlock Cole, Ki; Paek, Insu
2017-01-01
This article reviews the item response theory procedure (PROC IRT) in SAS/STAT 14.1 for conducting item response theory (IRT) analyses of dichotomous and polytomous datasets that are unidimensional or multidimensional. The review provides an overview of available features, including models, estimation procedures, interfacing, input, and output files. A small-scale simulation study evaluates the IRT model parameter recovery of the PROC IRT procedure. The use of the IRT procedure in Statistical Analysis Software (SAS) may be useful for researchers who frequently utilize SAS for analyses, research, and teaching.
Nonlinear system theory: Another look at dependence
Wu, Wei Biao
2005-01-01
Based on the nonlinear system theory, we introduce previously undescribed dependence measures for stationary causal processes. Our physical and predictive dependence measures quantify the degree of dependence of outputs on inputs in physical systems. The proposed dependence measures provide a natural framework for a limit theory for stationary processes. In particular, under conditions with quite simple forms, we present limit theorems for partial sums, empirical processes, and kernel density estimates. The conditions are mild and easily verifiable because they are directly related to the data-generating mechanisms. PMID:16179388
Hamoud Al-Tamimi, Mohammed Sabbih; Sulong, Ghazali; Shuaib, Ibrahim Lutfi
2015-07-01
Resection of brain tumors is a tricky task in surgery due to its direct influence on the patients' survival rate. Determining the tumor resection extent for its complete information vis-à-vis volume and dimensions in pre- and post-operative Magnetic Resonance Images (MRI) requires accurate estimation and comparison. The active contour segmentation technique is used to segment brain tumors on pre-operative MR images using self-developed software. Tumor volume is acquired from its contours via alpha shape theory. A graphical user interface is developed for rendering, visualizing and estimating the volume of a brain tumor. The Internet Brain Segmentation Repository (IBSR) dataset is employed to analyze and determine the repeatability and reproducibility of tumor volume. Accuracy of the method is validated by comparing the estimated volume using the proposed method with that of the gold standard. Segmentation by the active contour technique is found to be capable of detecting the brain tumor boundaries. Furthermore, the volume description and visualization enable an interactive examination of tumor tissue and its surroundings. Admirable features of our results demonstrate that alpha shape theory, in comparison to other existing standard methods, is superior for precise volumetric measurement of tumors. Copyright © 2015 Elsevier Inc. All rights reserved.
Computation of acoustic pressure fields produced in feline brain by high-intensity focused ultrasound
NASA Astrophysics Data System (ADS)
Omidi, Nazanin
In 1975, Dunn et al. (JASA 58:512-514) showed that a simple relation describes the ultrasonic threshold for cavitation-induced changes in the mammalian brain. The thresholds for tissue damage were estimated for a variety of acoustic parameters in exposed feline brain. The goal of this study was to improve the estimates for acoustic pressures and intensities present in vivo during those experimental exposures by estimating them using nonlinear rather than linear theory. In our current project, the acoustic pressure waveforms produced in the brains of anesthetized felines were numerically simulated for a spherically focused, nominally f1-transducer (focal length = 13 cm) at increasing values of the source pressure at frequencies of 1, 3, and 9 MHz. The corresponding focal intensities were correlated with the experimental data of Dunn et al. The focal pressure waveforms were also computed at the location of the true maximum. For low source pressures, the computed waveforms were the same as those determined using linear theory, and the focal intensities matched experimentally determined values. For higher source pressures, the focal pressure waveforms became increasingly distorted, with the compressional amplitude of the wave becoming greater, and the rarefactional amplitude becoming lower than the values calculated using linear theory. The implications of these results for clinical exposures are discussed.
NASA Astrophysics Data System (ADS)
Wörner, M.; Cai, X.; Alla, H.; Yue, P.
2018-03-01
The Cox–Voinov law on dynamic spreading relates the difference between the cubic values of the apparent contact angle (θ) and the equilibrium contact angle to the instantaneous contact line speed (U). Comparing spreading results with this hydrodynamic wetting theory requires accurate data of θ and U during the entire process. We consider the case when gravitational forces are negligible, so that the shape of the spreading drop can be closely approximated by a spherical cap. Using geometrical dependencies, we transform the general Cox law into a semi-analytical relation for the temporal evolution of the spreading radius. Evaluating this relation numerically shows that the spreading curve becomes independent of the gas viscosity when the latter is less than about 1% of the drop viscosity. Since inertia may invalidate the assumptions made in the initial stage of spreading, a quantitative criterion for the time when the spherical-cap assumption is reasonable is derived utilizing phase-field simulations on the spreading of partially wetting droplets. The developed theory allows us to compare experimental/computational spreading curves for spherical-cap shaped droplets with Cox theory without the need for instantaneous data of θ and U. Furthermore, the fitting of Cox theory enables us to estimate the effective slip length. This is potentially useful for establishing relationships between slip length and parameters in numerical methods for moving contact lines.
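A schematic numerical sketch of this kind of spreading calculation is given below (our own illustration, not the paper's semi-analytical relation): it integrates the spreading radius assuming the simplest Cox-Voinov form with negligible gas viscosity, the small-angle spherical-cap relation θ ≈ 4V/(πa³), and entirely hypothetical material parameters and logarithmic factor.

```python
import numpy as np
from scipy.integrate import solve_ivp

V, mu, sigma = 1.0e-9, 1.0e-3, 0.07               # m^3, Pa*s, N/m (hypothetical, water-like)
theta_eq, log_factor = np.deg2rad(30.0), 10.0     # equilibrium angle and ln(L/lambda), assumed

def dadt(t, a):
    """Contact-line speed from Cox-Voinov with theta taken from the
    small-angle spherical-cap relation theta = 4V / (pi a^3)."""
    theta = 4.0 * V / (np.pi * a[0] ** 3)
    return [sigma * (theta ** 3 - theta_eq ** 3) / (9.0 * mu * log_factor)]

a0 = (4.0 * V / (np.pi * np.deg2rad(120.0))) ** (1.0 / 3.0)   # radius at an initial 120 deg angle
sol = solve_ivp(dadt, (0.0, 1.0), [a0])
print(sol.y[0, -1])    # spreading radius relaxing toward its equilibrium value
```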
The relationship between offspring size and fitness: integrating theory and empiricism.
Rollinson, Njal; Hutchings, Jeffrey A
2013-02-01
How parents divide the energy available for reproduction between size and number of offspring has a profound effect on parental reproductive success. Theory indicates that the relationship between offspring size and offspring fitness is of fundamental importance to the evolution of parental reproductive strategies: this relationship predicts the optimal division of resources between size and number of offspring, it describes the fitness consequences for parents that deviate from optimality, and its shape can predict the most viable type of investment strategy in a given environment (e.g., conservative vs. diversified bet-hedging). Many previous attempts to estimate this relationship and the corresponding value of optimal offspring size have been frustrated by a lack of integration between theory and empiricism. In the present study, we draw from C. Smith and S. Fretwell's classic model to explain how a sound estimate of the offspring size--fitness relationship can be derived with empirical data. We evaluate what measures of fitness can be used to model the offspring size--fitness curve and optimal size, as well as which statistical models should and should not be used to estimate offspring size--fitness relationships. To construct the fitness curve, we recommend that offspring fitness be measured as survival up to the age at which the instantaneous rate of offspring mortality becomes random with respect to initial investment. Parental fitness is then expressed in ecologically meaningful, theoretically defensible, and broadly comparable units: the number of offspring surviving to independence. Although logistic and asymptotic regression have been widely used to estimate offspring size-fitness relationships, the former provides relatively unreliable estimates of optimal size when offspring survival and sample sizes are low, and the latter is unreliable under all conditions. We recommend that the Weibull-1 model be used to estimate this curve because it provides modest improvements in prediction accuracy under experimentally relevant conditions.
Structural estimation of a principal-agent model: moral hazard in medical insurance.
Vera-Hernández, Marcos
2003-01-01
Despite the importance of principal-agent models in the development of modern economic theory, there are few estimations of these models. I recover the estimates of a principal-agent model and obtain an approximation to the optimal contract. The results show that out-of-pocket payments follow a concave profile with respect to costs of treatment. I estimate the welfare loss due to moral hazard, taking into account income effects. I also propose a new measure of moral hazard based on the conditional correlation between contractible and noncontractible variables.
Bootstrapping the (A1, A2) Argyres-Douglas theory
NASA Astrophysics Data System (ADS)
Cornagliotto, Martina; Lemos, Madalena; Liendo, Pedro
2018-03-01
We apply bootstrap techniques in order to constrain the CFT data of the (A1, A2) Argyres-Douglas theory, which is arguably the simplest of the Argyres-Douglas models. We study the four-point function of its single Coulomb branch chiral ring generator and put numerical bounds on the low-lying spectrum of the theory. Of particular interest is an infinite family of semi-short multiplets labeled by the spin ℓ. Although the conformal dimensions of these multiplets are protected, their three-point functions are not. Using the numerical bootstrap we impose rigorous upper and lower bounds on their values for spins up to ℓ = 20. Through a recently obtained inversion formula, we also estimate them for sufficiently large ℓ, and the comparison of both approaches shows consistent results. We also give a rigorous numerical range for the OPE coefficient of the next operator in the chiral ring, and estimates for the dimension of the first R-symmetry neutral non-protected multiplet for small spin.
Chiral perturbation theory and nucleon-pion-state contaminations in lattice QCD
NASA Astrophysics Data System (ADS)
Bär, Oliver
2017-05-01
Multiparticle states with additional pions are expected to be a non-negligible source of excited-state contamination in lattice simulations at the physical point. It is shown that baryon chiral perturbation theory can be employed to calculate the contamination due to two-particle nucleon-pion-states in various nucleon observables. Leading order results are presented for the nucleon axial, tensor and scalar charge and three Mellin moments of parton distribution functions (quark momentum fraction, helicity and transversity moment). Taking into account phenomenological results for the charges and moments the impact of the nucleon-pion-states on lattice estimates for these observables can be estimated. The nucleon-pion-state contribution results in an overestimation of all charges and moments obtained with the plateau method. The overestimation is at the 5-10% level for source-sink separations of about 2 fm. The source-sink separations accessible in contemporary lattice simulations are found to be too small for chiral perturbation theory to be directly applicable.
NASA Astrophysics Data System (ADS)
Anderson, Cynthia Regas
The dissertation considers two different theories of measurement in Kant's Critical philosophy. The first is found in the Critique of Pure Reason. The second is found in the Critique of Judgment. In the former, Kant shows how the size of an object is structured by the necessary rules of the understanding and imagination in terms of its spatial dimensions. In the latter, Kant shows how the actual measurement of this spatial object is estimated. Through a detailed inquiry we argue that the aesthetic estimation of measurement serves as a precondition for the possibility of spatializing an object. It is only by viewing both components as functioning together that Kant's account is complete. The first chapter takes a historical approach to this issue; Kant's Precritical work is considered. The second chapter examines Kant's theory specifically as found in the Analytic of the First Critique. Finally, the third chapter examines Kant's views on magnitude and measurement in depth in the third Critique. Here we see why this account is needed to condition his prior views.
Totton, Tim S; Misquitta, Alston J; Kraft, Markus
2011-11-24
In this work we assess a recently published anisotropic potential for polycyclic aromatic hydrocarbon (PAH) molecules (J. Chem. Theory Comput. 2010, 6, 683-695). Comparison to recent high-level symmetry-adapted perturbation theory based on density functional theory (SAPT(DFT)) results for coronene (C24H12) demonstrates the transferability of the potential while highlighting some limitations with simple point charge descriptions of the electrostatic interaction. The potential is also shown to reproduce second virial coefficients of benzene (C6H6) with high accuracy, and this is enhanced by using a distributed multipole model for the electrostatic interaction. The graphene dimer interaction energy and the exfoliation energy of graphite have been estimated by extrapolation of PAH interaction energies. The contribution of nonlocal fluctuations in the π electron density in graphite has also been estimated, which increases the exfoliation energy by 3.0 meV/atom to 47.6 meV/atom, which compares well to recent theoretical and experimental results.
Effective Medium Theories for Multicomponent Poroelastic Composites
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berryman, J G
2005-02-08
In Biot's theory of poroelasticity, elastic materials contain connected voids or pores and these pores may be filled with fluids under pressure. The fluid pressure then couples to the mechanical effects of stress or strain applied externally to the solid matrix. Eshelby's formula for the response of a single ellipsoidal elastic inclusion in an elastic whole space to a strain imposed at a distant boundary is a very well-known and important result in elasticity. Having a rigorous generalization of Eshelby's results valid for poroelasticity means that the hard part of Eshelby's work (in computing the elliptic integrals needed to evaluate the fourth-rank tensors for inclusions shaped like spheres, oblate and prolate spheroids, needles and disks) can be carried over from elasticity to poroelasticity--and also thermoelasticity--with only relatively minor modifications. Effective medium theories for poroelastic composites such as rocks can then be formulated easily by analogy to well-established methods used for elastic composites. An identity analogous to Eshelby's classic result has been derived [Physical Review Letters 79:1142-1145 (1997)] for use in these more complex and more realistic problems in rock mechanics analysis. Descriptions of the application of this result as the starting point for new methods of estimation are presented, including generalizations of the coherent potential approximation (CPA), differential effective medium (DEM) theory, and two explicit schemes. Results are presented for estimating drained shear and bulk modulus, the Biot-Willis parameter, and Skempton's coefficient. Three of the methods considered appear to be quite reliable estimators, while one of the explicit schemes is found to have some undesirable characteristics.
NASA Astrophysics Data System (ADS)
Lu, Yinghui; Clothiaux, Eugene E.; Aydin, Kültegin; Botta, Giovanni; Verlinde, Johannes
2013-12-01
Using the Generalized Multi-particle Mie-method (GMM), Botta et al. (in this issue) [7] created a database of backscattering cross sections for 412 different ice crystal dendrites at X-, Ka- and W-band wavelengths for different incident angles. The Rayleigh-Gans theory, which accounts for interference effects but ignores interactions between different parts of an ice crystal, explains much, but not all, of the variability in the database of backscattering cross sections. Differences between it and the GMM range from -3.5 dB to +2.5 dB and are highly dependent on the incident angle. To explain the residual variability a physically intuitive iterative method was developed to estimate the internal electric field within an ice crystal that accounts for interactions between the neighboring regions within it. After modifying the Rayleigh-Gans theory using this estimated internal electric field, the difference between the estimated backscattering cross sections and those from the GMM method decreased to within 0.5 dB for most of the ice crystals. The largest percentage differences occur when the form factor from the Rayleigh-Gans theory is close to zero. Both interference effects and neighbor interactions are sensitive to the morphology of ice crystals. Improvements in ice-microphysical models are necessary to predict or diagnose internal structures within ice crystals to aid in more accurate interpretation of radar returns. Observations of the morphology of ice crystals are, in turn, necessary to guide the development of such ice-microphysical models and to better understand the statistical properties of ice crystal morphologies in different environmental conditions.
Altamimi, Mohammad A; Neau, Steven H
2016-01-01
Drug dispersed in a polymer can improve bioavailability; dispersed amorphous drug undergoes recrystallization. Solid solutions eliminate amorphous regions, but require a measure of the solubility. The aim was to use the Flory-Huggins Theory to predict crystalline drug solubility in the triblock graft copolymer Soluplus® to provide a solid solution. Physical mixtures of the two drugs with similar melting points but different glass forming ability, sulfamethoxazole and nifedipine, were prepared with Soluplus® using a quick technique. Drug melting point depression (MPD) was measured using differential scanning calorimetry. The Flory-Huggins Theory allowed (1) calculation of the interaction parameter, χ, using MPD data to provide a measure of drug-polymer interaction strength, and (2) estimation of the free energy of mixing. A phase diagram was constructed with the MPD data and glass transition temperature (Tg) curves. The interaction parameters with Soluplus® and the free energy of mixing were estimated. Drug solubility was calculated by the intersection of solubility equations and that of MPD and Tg curves in the phase diagram. Negative interaction parameters indicated strong drug-polymer interactions. The phase diagram and solubility equations provided comparable solubility estimates for each drug in Soluplus®. Results using the onset of melting rather than the end of melting support the use of the onset of melting. The Flory-Huggins Theory indicates that Soluplus® interacts effectively with each drug, making solid solution formation feasible. The predicted solubility of the drugs in Soluplus® compared favorably across the methods and supports the use of the onset of melting.
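For orientation, a rough Python sketch of the χ estimation step is shown below, using the melting point depression relation commonly written for a small-molecule drug in a polymer, (ΔHfus/R)(1/Tm − 1/Tm0) = −[ln φd + (1 − 1/m)φp + χ φp²]; the heat of fusion, molar-volume ratio, compositions, and onset melting temperatures are hypothetical stand-ins, not the paper's data.

```python
import numpy as np

R_gas = 8.314          # J/(mol K)
dH_fus = 30_000.0      # J/mol, assumed drug heat of fusion
Tm0 = 445.0            # K, assumed pure-drug onset melting point
m = 100.0              # assumed polymer/drug molar-volume ratio

phi_d = np.array([0.95, 0.90, 0.85, 0.80])    # drug volume fractions (hypothetical)
Tm = np.array([443.1, 441.0, 438.6, 435.9])   # onset melting points, K (hypothetical)
phi_p = 1.0 - phi_d

# Rearranged melting point depression relation: chi is the fitted slope of y vs phi_p^2.
y = -(dH_fus / R_gas) * (1.0 / Tm - 1.0 / Tm0) - np.log(phi_d) - (1.0 - 1.0 / m) * phi_p
chi = np.polyfit(phi_p ** 2, y, 1)[0]
print(chi)             # a negative chi would indicate favorable drug-polymer interaction
```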
A New Understanding for the Rain Rate retrieval of Attenuating Radars Measurement
NASA Astrophysics Data System (ADS)
Koner, P.; Battaglia, A.; Simmer, C.
2009-04-01
The retrieval of rain rate from attenuating radar (e.g., the Cloud Profiling Radar on board CloudSAT, in orbit since June 2006) is a challenging problem. L'Ecuyer and Stephens [1] underlined this difficulty (for rain rates larger than 1.5 mm/h) and suggested the need for additional information (such as path-integrated attenuations (PIA) derived from surface reference techniques, or precipitation water path estimated from a co-located passive microwave radiometer) to constrain the retrieval. It is generally argued, on the basis of optimal estimation theory, that there is no solution without constraining the problem in a case of visible attenuation because there is not enough information content to solve the problem. However, when the problem is constrained by the additional measurement of PIA, there is a reasonable solution. This raises the spontaneous question: is all the information enclosed in this additional measurement? This also contradicts information theory, because one measurement can introduce only one degree of freedom in the retrieval. Why is one degree of freedom so important in the above problem? This question cannot be explained using the estimation and information theories of OEM. On the other hand, Koner and Drummond [2] argued that the OEM is basically a regularization method, where the a-priori covariance is used as a stabilizer and the regularization strength is determined by the choices of the a-priori and error covariance matrices. The regularization is required to reduce the condition number of the Jacobian, which drives the noise injection from the measurement and inversion spaces to the state space in an ill-posed inversion. In this work, the above-mentioned question is discussed based on regularization theory, error mitigation and eigenvalue mathematics. References: 1. L'Ecuyer TS and Stephens G. An estimation based precipitation retrieval algorithm for attenuating radar. J. Appl. Met., 2002, 41, 272-85. 2. Koner PK, Drummond JR. A comparison of regularization techniques for atmospheric trace gases retrievals. JQSRT 2008; 109:514-26.
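The regularization argument can be illustrated with a toy Python sketch (invented Jacobian and covariances, not the retrieval in question): adding the inverse a-priori covariance to the normal-equation matrix J^T Se^-1 J, as OEM does, sharply lowers the condition number that drives noise injection.

```python
import numpy as np

rng = np.random.default_rng(1)
J = np.column_stack([rng.normal(size=50), rng.normal(size=50)])
J[:, 1] = J[:, 0] + 1e-4 * rng.normal(size=50)   # nearly collinear columns: ill-posed

Se_inv = np.eye(50)             # inverse measurement-error covariance (assumed identity)
Sa_inv = np.eye(2) / 10.0**2    # inverse a-priori covariance (std of 10 per state element)

unregularized = J.T @ Se_inv @ J
regularized = unregularized + Sa_inv             # OEM / Tikhonov-type normal equations
print(np.linalg.cond(unregularized), np.linalg.cond(regularized))
```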
Analyzing Test-Taking Behavior: Decision Theory Meets Psychometric Theory.
Budescu, David V; Bo, Yuanchao
2015-12-01
We investigate the implications of penalizing incorrect answers to multiple-choice tests, from the perspective of both test-takers and test-makers. To do so, we use a model that combines a well-known item response theory model with prospect theory (Kahneman and Tversky, Prospect theory: An analysis of decision under risk, Econometrica 47:263-91, 1979). Our results reveal that when test-takers are fully informed of the scoring rule, the use of any penalty has detrimental effects for both test-takers (they are always penalized in excess, particularly those who are risk averse and loss averse) and test-makers (the bias of the estimated scores, as well as the variance and skewness of their distribution, increase as a function of the severity of the penalty).
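As a toy illustration of the mechanism (our own example, not the authors' IRT-based model), the sketch below evaluates blind guessing on a four-option item under a Kahneman-Tversky value function with the commonly cited parameter estimates; the formula-scoring penalty of 1/3 makes the expected raw score zero, yet the loss-averse examinee perceives guessing as a loss.

```python
def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Kahneman-Tversky value function: concave for gains, steeper (loss-averse,
    lam > 1) for losses, relative to a reference point of 0."""
    return x ** alpha if x >= 0 else -lam * ((-x) ** beta)

p_correct = 0.25                      # blind guessing on a four-option item
gain, loss = 1.0, -1.0 / 3.0          # a right answer scores +1, a wrong answer -1/3
subjective = p_correct * prospect_value(gain) + (1 - p_correct) * prospect_value(loss)
expected_score = p_correct * gain + (1 - p_correct) * loss
print(expected_score, subjective)     # 0.0 vs. a clearly negative subjective value
```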
On long-only information-based portfolio diversification framework
NASA Astrophysics Data System (ADS)
Santos, Raphael A.; Takada, Hellinton H.
2014-12-01
Using the concepts from information theory, it is possible to improve the traditional frameworks for long-only asset allocation. In modern portfolio theory, the investor has two basic procedures: the choice of a portfolio that maximizes its risk-adjusted excess return or the mixed allocation between the maximum Sharpe portfolio and the risk-free asset. In the literature, the first procedure was already addressed using information theory. One contribution of this paper is the consideration of the second procedure in the information theory context. The performance of these approaches was compared with three traditional asset allocation methodologies: the Markowitz's mean-variance, the resampled mean-variance and the equally weighted portfolio. Using simulated and real data, the information theory-based methodologies were verified to be more robust when dealing with the estimation errors.
NASA Technical Reports Server (NTRS)
Lisano, Michael E.
2007-01-01
Recent literature in applied estimation theory reflects growing interest in the sigma-point (also called unscented) formulation for optimal sequential state estimation, often describing performance comparisons with extended Kalman filters as applied to specific dynamical problems [cf. 1, 2, 3]. Favorable attributes of sigma-point filters are described as including a lower expected error for nonlinear, even non-differentiable, dynamical systems, and a straightforward formulation not requiring derivation or implementation of any partial derivative Jacobian matrices. These attributes are particularly attractive, e.g. in terms of enabling simplified code architecture and streamlined testing, in the formulation of estimators for nonlinear spaceflight mechanics systems, such as filter software onboard deep-space robotic spacecraft. As presented in [4], the Sigma-Point Consider Filter (SPCF) algorithm extends the sigma-point filter algorithm to the problem of consider covariance analysis. Considering parameters in a dynamical system, while estimating its state, provides an upper bound on the estimated state covariance, which is viewed as a conservative approach to designing estimators for problems of general guidance, navigation and control. This is because, whether a parameter in the system model is observable or not, error in the knowledge of the value of a non-estimated parameter will increase the actual uncertainty of the estimated state of the system beyond the level formally indicated by the covariance of an estimator that neglects errors or uncertainty in that parameter. The equations for SPCF covariance evolution are obtained in a fashion similar to the derivation approach taken with standard (i.e. linearized or extended) consider-parameterized Kalman filters (cf. [5]). While in [4] the SPCF and linear-theory consider filter (LTCF) were applied to an illustrative linear dynamics/linear measurement problem, the present work examines the SPCF as applied to nonlinear sequential consider covariance analysis, i.e. in the presence of nonlinear dynamics and nonlinear measurements. A simple SPCF for orbit determination, exemplifying an algorithm hosted in the guidance, navigation and control (GN&C) computer processor of a hypothetical robotic spacecraft, was implemented and compared with an identically-parameterized (standard) extended, consider-parameterized Kalman filter. The onboard filtering scenario examined is a hypothetical spacecraft orbit about a small natural body with imperfectly-known mass. The formulations, relative complexities, and performances of the filters are compared and discussed.
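For context, the following is a minimal sketch of the standard scaled sigma-point construction that underlies such filters; it is not the SPCF itself, it omits the consider-parameter augmentation, and the scaling defaults are common textbook choices.

```python
import numpy as np

def sigma_points(mean, cov, alpha=1e-3, beta=2.0, kappa=0.0):
    """2n+1 scaled sigma points and their mean/covariance weights."""
    n = len(mean)
    lam = alpha ** 2 * (n + kappa) - n
    sqrt_cov = np.linalg.cholesky((n + lam) * cov)     # columns give the spread directions
    pts = [np.array(mean, dtype=float)]
    pts += [mean + c for c in sqrt_cov.T] + [mean - c for c in sqrt_cov.T]
    w_mean = np.full(2 * n + 1, 0.5 / (n + lam))
    w_cov = w_mean.copy()
    w_mean[0] = lam / (n + lam)
    w_cov[0] = lam / (n + lam) + (1.0 - alpha ** 2 + beta)
    return np.array(pts), w_mean, w_cov

# Propagating these points through the nonlinear dynamics and measurement models,
# then re-forming weighted means and covariances, replaces the Jacobian
# linearization used by an extended Kalman filter.
pts, wm, wc = sigma_points(np.zeros(3), np.eye(3))
print(pts.shape)    # (7, 3)
```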
Prospect theory in the valuation of health.
Moffett, Maurice L; Suarez-Almazor, Maria E
2005-08-01
Prospect theory is the prominent nonexpected utility theory in the estimation of health state preference scores for quality-adjusted life year calculation. Until recently, the theory was not considered to be developed to the point of implementation in economic analysis. This review focuses on the research and evidence that tests the implementation of prospect theory in health state valuation. The typical application of expected utility theory assumes that a decision maker has stable preferences under conditions of risk and uncertainty. Under prospect theory, preferences are dependent on whether the decision maker regards the outcome of a choice as a gain or loss, relative to a reference point. The conceptual preference for standard gamble utilities in the valuation of health states has led to the development of elicitation techniques. Empirical evidence using these techniques indicates that when individual preferences are elicited, a prospect theory consistent framework appears to be necessary for adequate representation of individual health utilities. The relevance of prospect theory to policy making and resource allocation remains to be established. Societal preferences may not reflect the same attitudes toward risk as individual preferences, and may remain largely risk neutral.
NASA Astrophysics Data System (ADS)
Grenn, Michael W.
This dissertation introduces a theory of information quality to explain macroscopic behavior observed in the systems engineering process. The theory extends principles of Shannon's mathematical theory of communication [1948] and statistical mechanics to information development processes concerned with the flow, transformation, and meaning of information. The meaning of requirements information in the systems engineering context is estimated or measured in terms of the cumulative requirements quality Q, which corresponds to the distribution of the requirements among the available quality levels. The requirements entropy framework (REF) implements the theory to address the requirements engineering problem. The REF defines the relationship between requirements changes, requirements volatility, requirements quality, requirements entropy and uncertainty, and engineering effort. The REF is evaluated via simulation experiments to assess its practical utility as a new method for measuring, monitoring and predicting requirements trends and engineering effort at any given time in the process. The REF treats the requirements engineering process as an open system in which the requirements are discrete information entities that transition from initial states of high entropy, disorder and uncertainty toward the desired state of minimum entropy as engineering effort is input and requirements increase in quality. The distribution of the total number of requirements R among the N discrete quality levels is determined by the number of defined quality attributes accumulated by R at any given time. Quantum statistics are used to estimate the number of possibilities P for arranging R among the available quality levels. The requirements entropy H_R is estimated using R, N and P by extending principles of information theory and statistical mechanics to the requirements engineering process. The information I increases as H_R and uncertainty decrease, and the change in information ΔI needed to reach the desired state of quality is estimated from the perspective of the receiver. The H_R may increase, decrease or remain steady depending on the degree to which additions, deletions and revisions impact the distribution of R among the quality levels. Current requirements trend metrics generally treat additions, deletions and revisions the same and simply measure the quantity of these changes over time. The REF evaluates the quantity of requirements changes over time, distinguishes between their positive and negative effects by calculating their impact on H_R, Q, and ΔI, and forecasts when the desired state will be reached, enabling more accurate assessment of the status and progress of the requirements engineering effort. Results from random variable simulations suggest the REF is an improved leading indicator of requirements trends that can be readily combined with current methods. The increase in I, or decrease in H_R and uncertainty, is proportional to the engineering effort E input into the requirements engineering process. The REF estimates the ΔE needed to transition R from their current state of quality to the desired end state or some other interim state of interest. Simulation results are compared with measured engineering effort data for Department of Defense programs published in the SE literature, and the results suggest the REF is a promising new method for estimation of ΔE.
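The REF itself counts arrangements using quantum statistics, but the qualitative behavior described above (entropy falling as requirements accumulate quality attributes) can be illustrated with a plain Shannon entropy over the distribution of requirements across quality levels. The counts and the five-level scale below are hypothetical, not the dissertation's formulation.

```python
import numpy as np

def requirements_entropy(counts):
    """Shannon-style entropy (bits) of the distribution of requirements across
    quality levels; a rough proxy for the H_R described in the abstract."""
    counts = np.asarray(counts, dtype=float)
    p = counts / counts.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

# Hypothetical snapshots: 100 requirements spread over 5 quality levels,
# early in the process (disordered) vs late (most requirements at the top level).
early = [40, 30, 15, 10, 5]
late  = [2, 3, 5, 10, 80]
H_early, H_late = requirements_entropy(early), requirements_entropy(late)
print(H_early, H_late)                       # entropy drops as quality accumulates
print("information gained (bits):", H_early - H_late)
```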
NASA Astrophysics Data System (ADS)
Veneziano, D.; Langousis, A.; Lepore, C.
2009-12-01
The annual maximum of the average rainfall intensity in a period of duration d, Iyear(d), is typically assumed to have a generalized extreme value (GEV) distribution. The shape parameter k of that distribution is especially difficult to estimate from either at-site or regional data, making it important to constrain k using theoretical arguments. In the context of multifractal representations of rainfall, we observe that standard theoretical estimates of k from extreme value (EV) and extreme excess (EE) theories do not apply, while estimates from large deviation (LD) theory hold only for very small d. We then propose a new theoretical estimator based on fitting GEV models to the numerically calculated distribution of Iyear(d). A standard result from EV and EE theories is that k depends on the tail behavior of the average rainfall in d, I(d). This result holds if Iyear(d) is the maximum of a sufficiently large number n of variables, all distributed like I(d); therefore its applicability hinges on whether n = 1yr/d is large enough and the tail of I(d) is sufficiently well known. One typically assumes that at least for small d the former condition is met, but poor knowledge of the upper tail of I(d) remains an obstacle for all d. In fact, in the case of multifractal rainfall, also the first condition is not met because, irrespective of d, 1yr/d is too small (Veneziano et al., 2009, WRR, in press). Applying large deviation (LD) theory to this multifractal case, we find that, as d → 0, Iyear(d) approaches a GEV distribution whose shape parameter kLD depends on a region of the distribution of I(d) well below the upper tail, is always positive (in the EV2 range), is much larger than the value predicted by EV and EE theories, and can be readily found from the scaling properties of I(d). The scaling properties of rainfall can be inferred also from short records, but the limitation remains that the result holds only as d → 0, not for finite d. Therefore, for different reasons, none of the above asymptotic theories applies to Iyear(d). In practice, one is interested in the distribution of Iyear(d) over a finite range of averaging durations d and return periods T. Using multifractal representations of rainfall, we have numerically calculated the distribution of Iyear(d) and found that, although not GEV, the distribution can be accurately approximated by a GEV model. The best-fitting parameter k depends on d, but is insensitive to the scaling properties of rainfall and the range of return periods T used for fitting. We have obtained a default expression for k(d) and compared it with estimates from historical rainfall records. The theoretical function tracks well the empirical dependence on d, although it generally overestimates the empirical k values, possibly due to deviations of rainfall from perfect scaling. This issue is under investigation.
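For the practical step mentioned above of fitting a GEV model to annual maxima, a minimal sketch using scipy is shown below, with synthetic data standing in for a rainfall record. Note that scipy's genextreme shape parameter c follows the opposite sign convention to the shape parameter k used here, so k is reported as -c.

```python
from scipy import stats

# Hypothetical annual-maximum intensities (one value per year); in practice these
# would be extracted from a rainfall record for a fixed averaging duration d.
annual_max = stats.genextreme.rvs(c=-0.15, loc=10.0, scale=3.0, size=60, random_state=0)

# Maximum-likelihood GEV fit; scipy's shape c has the opposite sign to k.
c_hat, loc_hat, scale_hat = stats.genextreme.fit(annual_max)
print("estimated shape k =", -c_hat, "location =", loc_hat, "scale =", scale_hat)

# 100-year return level implied by the fitted model.
print("100-yr return level:", stats.genextreme.ppf(1 - 1.0 / 100.0, c_hat, loc_hat, scale_hat))
```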
NASA Technical Reports Server (NTRS)
Kobayashi, H.
1978-01-01
Two-dimensional, quasi-three-dimensional and three-dimensional theories for the prediction of pure tone fan noise due to the interaction of inflow distortion with a subsonic annular blade row were studied with the aid of an unsteady three-dimensional lifting surface theory. The effects of compact and noncompact source distributions on pure tone fan noise in an annular cascade were investigated. Numerical results show that the strip theory and quasi-three-dimensional theory are reasonably adequate for fan noise prediction. The quasi-three-dimensional method is more accurate for acoustic power and modal structure prediction, with an acoustic power estimation error of about ±2 dB.
Application of the quantum spin glass theory to image restoration.
Inoue, J I
2001-04-01
Quantum fluctuation is introduced into the Markov random-field model for image restoration in the context of a Bayesian approach. We investigate the dependence of the quality of black-and-white image restoration on the quantum fluctuation by making use of statistical mechanics. We find that the maximum posterior marginal (MPM) estimate based on the quantum fluctuation gives a fine restoration in comparison with the maximum a posteriori estimate or the thermal fluctuation based MPM estimate.
Wall shear stress estimates in coronary artery constrictions
NASA Technical Reports Server (NTRS)
Back, L. H.; Crawford, D. W.
1992-01-01
Wall shear stress estimates from laminar boundary layer theory were found to agree fairly well with the magnitude of shear stress levels along coronary artery constrictions obtained from solutions of the Navier Stokes equations for both steady and pulsatile flow. The relatively simple method can be used for in vivo estimates of wall shear stress in constrictions by using a vessel shape function determined from a coronary angiogram, along with a knowledge of the flow rate.
CALIBRATING NON-CONVEX PENALIZED REGRESSION IN ULTRA-HIGH DIMENSION.
Wang, Lan; Kim, Yongdai; Li, Runze
2013-10-01
We investigate high-dimensional non-convex penalized regression, where the number of covariates may grow at an exponential rate. Although recent asymptotic theory established that there exists a local minimum possessing the oracle property under general conditions, it is still largely an open problem how to identify the oracle estimator among potentially multiple local minima. There are two main obstacles: (1) due to the presence of multiple minima, the solution path is nonunique and is not guaranteed to contain the oracle estimator; (2) even if a solution path is known to contain the oracle estimator, the optimal tuning parameter depends on many unknown factors and is hard to estimate. To address these two challenging issues, we first prove that an easy-to-calculate calibrated CCCP algorithm produces a consistent solution path which contains the oracle estimator with probability approaching one. Furthermore, we propose a high-dimensional BIC criterion and show that it can be applied to the solution path to select the optimal tuning parameter which asymptotically identifies the oracle estimator. The theory for a general class of non-convex penalties in the ultra-high dimensional setup is established when the random errors follow the sub-Gaussian distribution. Monte Carlo studies confirm that the calibrated CCCP algorithm combined with the proposed high-dimensional BIC has desirable performance in identifying the underlying sparsity pattern for high-dimensional data analysis.
CALIBRATING NON-CONVEX PENALIZED REGRESSION IN ULTRA-HIGH DIMENSION
Wang, Lan; Kim, Yongdai; Li, Runze
2014-01-01
We investigate high-dimensional non-convex penalized regression, where the number of covariates may grow at an exponential rate. Although recent asymptotic theory established that there exists a local minimum possessing the oracle property under general conditions, it is still largely an open problem how to identify the oracle estimator among potentially multiple local minima. There are two main obstacles: (1) due to the presence of multiple minima, the solution path is nonunique and is not guaranteed to contain the oracle estimator; (2) even if a solution path is known to contain the oracle estimator, the optimal tuning parameter depends on many unknown factors and is hard to estimate. To address these two challenging issues, we first prove that an easy-to-calculate calibrated CCCP algorithm produces a consistent solution path which contains the oracle estimator with probability approaching one. Furthermore, we propose a high-dimensional BIC criterion and show that it can be applied to the solution path to select the optimal tuning parameter which asymptotically identifies the oracle estimator. The theory for a general class of non-convex penalties in the ultra-high dimensional setup is established when the random errors follow the sub-Gaussian distribution. Monte Carlo studies confirm that the calibrated CCCP algorithm combined with the proposed high-dimensional BIC has desirable performance in identifying the underlying sparsity pattern for high-dimensional data analysis. PMID:24948843
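The tuning-parameter selection idea described above can be sketched with a simpler stand-in: a lasso path (rather than the paper's calibrated CCCP solution path for a non-convex penalty) scored by a BIC-type criterion with an extra log(log p) factor of the kind used in ultra-high-dimensional settings. The data, the penalty, and the exact form of the criterion below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import lasso_path

rng = np.random.default_rng(1)
n, p = 100, 400                          # p >> n, sparse truth
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = [2.0, -1.5, 1.0]
y = X @ beta + rng.standard_normal(n)

alphas, coefs, _ = lasso_path(X, y, n_alphas=50)

# BIC-type criterion for high dimensions: n*log(RSS/n) + df*log(n)*log(log(p)).
best = None
for j, a in enumerate(alphas):
    b = coefs[:, j]
    rss = np.sum((y - X @ b) ** 2)
    df = np.count_nonzero(b)
    hbic = n * np.log(rss / n) + df * np.log(n) * np.log(np.log(p))
    if best is None or hbic < best[0]:
        best = (hbic, a, df)
print("selected alpha %.4f with %d nonzero coefficients" % (best[1], best[2]))
```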
Exact dimension estimation of interacting qubit systems assisted by a single quantum probe
NASA Astrophysics Data System (ADS)
Sone, Akira; Cappellaro, Paola
2017-12-01
Estimating the dimension of a Hilbert space is an important component of quantum system identification. In quantum technologies, the dimension of a quantum system (or its corresponding accessible Hilbert space) is an important resource, as larger dimensions determine, e.g., the performance of quantum computation protocols or the sensitivity of quantum sensors. Despite being a critical task in quantum system identification, estimating the Hilbert space dimension is experimentally challenging. While there have been proposals for various dimension witnesses capable of putting a lower bound on the dimension from measuring collective observables that encode correlations, in many practical scenarios, especially for multiqubit systems, the experimental control might not be able to engineer the required initialization, dynamics, and observables. Here we propose a more practical strategy that relies not on directly measuring an unknown multiqubit target system, but on the indirect interaction with a local quantum probe under the experimenter's control. Assuming only that the interaction model is given and the evolution correlates all the qubits with the probe, we combine a graph-theoretical approach and realization theory to demonstrate that the system dimension can be exactly estimated from the model order of the system. We further analyze the robustness in the presence of background noise of the proposed estimation method based on realization theory, finding that despite stringent constraints on the allowed noise level, exact dimension estimation can still be achieved.
Jones, Reese E; Mandadapu, Kranthi K
2012-04-21
We present a rigorous Green-Kubo methodology for calculating transport coefficients based on on-the-fly estimates of: (a) statistical stationarity of the relevant process, and (b) error in the resulting coefficient. The methodology uses time samples efficiently across an ensemble of parallel replicas to yield accurate estimates, which is particularly useful for estimating the thermal conductivity of semi-conductors near their Debye temperatures where the characteristic decay times of the heat flux correlation functions are large. Employing and extending the error analysis of Zwanzig and Ailawadi [Phys. Rev. 182, 280 (1969)] and Frenkel [in Proceedings of the International School of Physics "Enrico Fermi", Course LXXV (North-Holland Publishing Company, Amsterdam, 1980)] to the integral of correlation, we are able to provide tight theoretical bounds for the error in the estimate of the transport coefficient. To demonstrate the performance of the method, four test cases of increasing computational cost and complexity are presented: the viscosity of Ar and water, and the thermal conductivity of Si and GaN. In addition to producing accurate estimates of the transport coefficients for these materials, this work demonstrates precise agreement of the computed variances in the estimates of the correlation and the transport coefficient with the extended theory based on the assumption that fluctuations follow a Gaussian process. The proposed algorithm in conjunction with the extended theory enables the calculation of transport coefficients with the Green-Kubo method accurately and efficiently.
NASA Astrophysics Data System (ADS)
Jones, Reese E.; Mandadapu, Kranthi K.
2012-04-01
We present a rigorous Green-Kubo methodology for calculating transport coefficients based on on-the-fly estimates of: (a) statistical stationarity of the relevant process, and (b) error in the resulting coefficient. The methodology uses time samples efficiently across an ensemble of parallel replicas to yield accurate estimates, which is particularly useful for estimating the thermal conductivity of semi-conductors near their Debye temperatures where the characteristic decay times of the heat flux correlation functions are large. Employing and extending the error analysis of Zwanzig and Ailawadi [Phys. Rev. 182, 280 (1969)], 10.1103/PhysRev.182.280 and Frenkel [in Proceedings of the International School of Physics "Enrico Fermi", Course LXXV (North-Holland Publishing Company, Amsterdam, 1980)] to the integral of correlation, we are able to provide tight theoretical bounds for the error in the estimate of the transport coefficient. To demonstrate the performance of the method, four test cases of increasing computational cost and complexity are presented: the viscosity of Ar and water, and the thermal conductivity of Si and GaN. In addition to producing accurate estimates of the transport coefficients for these materials, this work demonstrates precise agreement of the computed variances in the estimates of the correlation and the transport coefficient with the extended theory based on the assumption that fluctuations follow a Gaussian process. The proposed algorithm in conjunction with the extended theory enables the calculation of transport coefficients with the Green-Kubo method accurately and efficiently.
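The core Green-Kubo computation described above, integrating a flux autocorrelation function toward a plateau, can be sketched as follows with a synthetic, exponentially correlated flux series standing in for molecular dynamics output. The physical prefactor (e.g., V/(k_B T^2) for thermal conductivity) and the on-the-fly stationarity and error checks of the paper are not included; everything here is illustrative.

```python
import numpy as np

def autocorrelation(x, max_lag):
    """Unnormalized autocorrelation <x(0) x(t)> of a (roughly) stationary series."""
    x = np.asarray(x) - np.mean(x)
    n = len(x)
    return np.array([np.dot(x[:n - k], x[k:]) / (n - k) for k in range(max_lag)])

def green_kubo_running_integral(flux, dt, max_lag, prefactor=1.0):
    """Running Green-Kubo integral: prefactor * cumulative integral of the flux ACF."""
    acf = autocorrelation(flux, max_lag)
    return prefactor * np.cumsum(acf) * dt

# Toy flux: an exponentially correlated (Ornstein-Uhlenbeck-like) signal.
rng = np.random.default_rng(2)
dt, tau, n = 0.01, 0.5, 50_000
flux = np.empty(n)
flux[0] = 0.0
for i in range(1, n):
    flux[i] = flux[i - 1] * (1.0 - dt / tau) + np.sqrt(dt) * rng.standard_normal()

running = green_kubo_running_integral(flux, dt, max_lag=1000)
print("plateau estimate of the transport coefficient:", running[-1])
```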
NASA Astrophysics Data System (ADS)
Huang, Jinxin; Clarkson, Eric; Kupinski, Matthew; Rolland, Jannick P.
2014-03-01
Dry Eye Disease (DED) affects approximately 40 million aging adults in the USA, with an economic burden of about $3.8 billion. However, a comprehensive understanding of tear film dynamics, which is the prerequisite to advance the management of DED, is yet to be realized. To extend our understanding of tear film dynamics, we investigate the simultaneous estimation of the lipid and aqueous layer thicknesses with the combination of optical coherence tomography (OCT) and statistical decision theory. Specifically, we develop a mathematical model for Fourier-domain OCT in which we take into account the different statistical processes associated with the imaging chain. We formulate the first-order and second-order statistical quantities of the output of the OCT system, which can be used to generate simulated OCT spectra. A tear film model, which includes a lipid and an aqueous layer on top of a rough corneal surface, is the object being imaged. We then implement a maximum-likelihood (ML) estimator to interpret the simulated OCT data and estimate the thicknesses of both layers of the tear film. Results show that an axial resolution of 1 μm allows estimates down to the nanometer scale. We use the root mean square error of the estimates as a metric to evaluate the system parameters, such as the tradeoff between the imaging speed and the precision of estimation. This framework further provides the theoretical basis for optimizing the imaging setup for a specific thickness-estimation task.
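A stripped-down version of the ML step described above: with additive Gaussian noise, maximum-likelihood estimation of the two thicknesses reduces to least-squares fitting of a forward model to the measured spectrum. The two-cosine forward model, the wavenumber band, and the layer values below are toy assumptions, not the authors' OCT model; a coarse depth estimate is assumed available for the initial guess.

```python
import numpy as np
from scipy.optimize import minimize

# Toy spectral-domain forward model: two reflecting interfaces at optical depths
# d1 and d1 + d2 produce cosine fringes across the sampled wavenumbers k.
k = np.linspace(7.0, 8.0, 512)                     # hypothetical wavenumber band

def forward(d1, d2, a1=0.3, a2=0.5):
    return a1 * np.cos(2.0 * k * d1) + a2 * np.cos(2.0 * k * (d1 + d2))

rng = np.random.default_rng(3)
d1_true, d2_true = 2.0, 3.0                        # hypothetical layer depths
spectrum = forward(d1_true, d2_true) + 0.05 * rng.standard_normal(k.size)

# With additive Gaussian noise, the ML estimate is the least-squares fit.
nll = lambda d: np.sum((spectrum - forward(d[0], d[1])) ** 2)
res = minimize(nll, x0=[1.98, 2.99], method="Nelder-Mead")   # coarse guess assumed known
print("estimated thicknesses:", res.x)
```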
ERIC Educational Resources Information Center
Rakkapao, Suttida; Prasitpong, Singha; Arayathanitkul, Kwan
2016-01-01
This study investigated the multiple-choice test of understanding of vectors (TUV), by applying item response theory (IRT). The difficulty, discrimination, and guessing parameters of the TUV items were fit with the three-parameter logistic model of IRT, using the PARSCALE program. The TUV ability is an ability parameter, here estimated assuming…
ERIC Educational Resources Information Center
Culpepper, Steven Andrew
2013-01-01
A classic topic in the fields of psychometrics and measurement has been the impact of the number of scale categories on test score reliability. This study builds on previous research by further articulating the relationship between item response theory (IRT) and classical test theory (CTT). Equations are presented for comparing the reliability and…
Experiments and Reaction Models of Fundamental Combustion Properties
2010-05-31
in liquid hydrocarbon flames Lennard-Jones 12-6 potential parameters were estimated for n-alkanes and 1-alkenes with carbon numbers ranging from 5...hydrocarbons, were studied both experimentally and numerically. The fuel mixtures were chosen in order to gain insight into potential kinetic couplings...ab initio electronic structure theory, transition state theory, and master equation modelling. The potential energy surface was examined with the coupled
NASA Technical Reports Server (NTRS)
Callier, Frank M.; Desoer, Charles A.
1991-01-01
The aim of this book is to provide a systematic and rigorous access to the main topics of linear state-space system theory in both the continuous-time case and the discrete-time case; and the I/O description of linear systems. The main thrusts of the work are the analysis of system descriptions and derivations of their properties, LQ-optimal control, state feedback and state estimation, and MIMO unity-feedback systems.
Fake News, Conspiracy Theories, and Lies: An Information Laundering Model for Homeland Security
2018-03-01
by Samantha M. Korta, March 2018. Co-Advisors: Rodrigo Nieto...
Space Vehicle Guidance, Navigation, Control, and Estimation Operations Technologies
2018-03-29
angular position around the ellipse, and the out-of-plane amplitude and angular position. These elements are explicitly relatable to the six rectangular...quasi) second order relative orbital elements are explored. One theory uses the expanded solution form and introduces several instantaneous ellipses...In each case, the theory quantifies distortion of the first order relative orbital elements when including second order effects. The new variables are
ERIC Educational Resources Information Center
Samejima, Fumiko
In latent trait theory the latent space, or space of the hypothetical construct, is usually represented by some unidimensional or multi-dimensional continuum of real numbers. Like the latent space, the item response can either be treated as a discrete variable or as a continuous variable. Latent trait theory relates the item response to the latent…
ERIC Educational Resources Information Center
Eignor, Daniel R.; Douglass, James B.
This paper attempts to provide some initial information about the use of a variety of item response theory (IRT) models in the item selection process; its purpose is to compare the information curves derived from the selection of items characterized by several different IRT models and their associated parameter estimation programs. These…
Towards a General Theory of Counterdeception
2015-02-20
AFRL-OSR-VA-TR-2015-0067, Towards a General Theory of Counterdeception, Scott Craver, Research Foundation of State University of New York. Final...
Verification of the Velocity Structure in Mexico Basin Using the H/V Spectral Ratio of Microtremors
NASA Astrophysics Data System (ADS)
Matsushima, S.; Sanchez-Sesma, F. J.; Nagashima, F.; Kawase, H.
2011-12-01
The authors have been proposing a new theory to calculate the Horizontal-to-Vertical (H/V) spectral ratio of microtremors assuming that the wave field is completely diffuse, and have attempted to apply the theory to understand observed microtremor data. It is anticipated that this new theory can be applied to detect the subsurface velocity structure beneath urban areas. Precise information about the subsurface velocity structure is essential for predicting strong ground motion accurately, which is necessary to mitigate seismic disaster. The Mexico basin, which witnessed severe damage during the 1985 Michoacán Earthquake (Ms 8.1) several hundred kilometers away from the source region, is an interesting location in which the reassessment of soil properties is urgent. Because of subsidence, having improved estimates of properties is mandatory. In order to estimate possible changes in the velocity structure in the Mexico basin, we measured microtremors at strong motion observation sites in Mexico City. At those sites, information about the velocity profiles is available. Using the obtained data, we derive the observed H/V spectral ratio and compare it with the theoretical H/V spectral ratio to assess the validity of our new theory. First we compared the observed H/V spectral ratios for five stations to see the diverse characteristics of this measurement. Then we compared the observed H/V spectral ratios with the theoretical predictions to confirm our theory. We assumed the velocity model from previous surveys at the strong motion observation sites as an initial model. We were able to closely fit both the peak frequency and amplitude of the observed H/V spectral ratio with the theoretical H/V spectral ratio calculated by our new method. These results show that we have a good initial model. However, the theoretical estimates need some improvement to perfectly fit the observed H/V spectral ratio. This may be an indication that the initial model needs some adjustments. We explore how to improve the velocity model based on the comparison between observations and theory.
Novikov, Alexander
2010-01-01
A complete time-dependent physics theory of a symmetric, unperturbed, driven Hybrid Birdcage resonator was developed for general application. In particular, the theory can be applied to RF coil engineering, computer simulations of coil-sample interaction, etc. Explicit time dependence is evaluated for different forms of driving voltage. The major steps of the solution development are shown and appropriate explanations are given. Green's functions and a spectral density formula were developed for any form of periodic driving voltage. The concept of distributed power losses based on transmission line theory is developed for evaluation of local losses of a coil. Three major types of power losses are estimated as equivalent series resistances in the circuit of the Birdcage resonator. Values of the generated resistances in the legs and end-rings are estimated. An application of the theory is shown for many practical cases. An experimental curve of B1 field polarization dependence was measured for an eight-section Birdcage coil. It was shown that the steady-state driven resonance frequencies do not depend on the damping factor, unlike the free oscillation (transient) frequencies. An equivalent active resistance is generated due to interaction of the RF electromagnetic field with a sample. Resistance of the conductor (enhanced by skin effect), eddy currents and dielectric losses are the major types of losses which contribute to the values of the generated resistances. A biomedical sample for magnetic resonance imaging and spectroscopy is the source of both the eddy current and dielectric losses of a coil. As demonstrated by the theory, eddy current losses are the major effect of coil shielding. PMID:20869184
Effects of Differential Item Functioning on Examinees' Test Performance and Reliability of Test
ERIC Educational Resources Information Center
Lee, Yi-Hsuan; Zhang, Jinming
2017-01-01
Simulations were conducted to examine the effect of differential item functioning (DIF) on measurement consequences such as total scores, item response theory (IRT) ability estimates, and test reliability in terms of the ratio of true-score variance to observed-score variance and the standard error of estimation for the IRT ability parameter. The…
Standard Error Estimation of 3PL IRT True Score Equating with an MCMC Method
ERIC Educational Resources Information Center
Liu, Yuming; Schulz, E. Matthew; Yu, Lei
2008-01-01
A Markov chain Monte Carlo (MCMC) method and a bootstrap method were compared in the estimation of standard errors of item response theory (IRT) true score equating. Three test form relationships were examined: parallel, tau-equivalent, and congeneric. Data were simulated based on Reading Comprehension and Vocabulary tests of the Iowa Tests of…
Treating Estimation and Mental Computation as Situated Mathematical Processes.
ERIC Educational Resources Information Center
Silver, Edward A.
This paper discusses the central thesis that new research on estimation and mental computation will benefit from more focused attention on the situations in which they are used. In the first section of the paper, a brief discussion of cognitive theory, with special attention to the emerging notion of situated cognition is presented. Three sources…
Fisher, Sir Ronald Aylmer (1890-1962)
NASA Astrophysics Data System (ADS)
Murdin, P.
2000-11-01
Statistician, born in London, England. After studying astronomy using AIRY's manual on the Theory of Errors he became interested in statistics, and laid the foundation of randomization in experimental design, the analysis of variance and the use of data in estimating the properties of the parent population from which it was drawn. Invented the maximum likelihood method for estimating from random ...
A quantitative framework for estimating risk of collision between marine mammals and boats
Martin, Julien; Sabatier, Quentin; Gowan, Timothy A.; Giraud, Christophe; Gurarie, Eliezer; Calleson, Scott; Ortega-Ortiz, Joel G.; Deutsch, Charles J.; Rycyk, Athena; Koslovsky, Stacie M.
2016-01-01
By applying encounter rate theory to the case of boat collisions with marine mammals, we gained new insights about encounter processes between wildlife and watercraft. Our work emphasizes the importance of considering uncertainty when estimating wildlife mortality. Finally, our findings are relevant to other systems and ecological processes involving the encounter between moving agents.
Jamie S. Sanderlin; Peter M. Waser; James E. Hines; James D. Nichols
2012-01-01
Metapopulation ecology has historically been rich in theory, yet analytical approaches for inferring demographic relationships among local populations have been few. We show how reverse-time multi-state capture–recapture models can be used to estimate the importance of local recruitment and interpopulation dispersal to metapopulation growth. We use 'contribution...
ERIC Educational Resources Information Center
Köhler, Carmen; Pohl, Steffi; Carstensen, Claus H.
2015-01-01
When competence tests are administered, subjects frequently omit items. These missing responses pose a threat to correctly estimating the proficiency level. Newer model-based approaches aim to take nonignorable missing data processes into account by incorporating a latent missing propensity into the measurement model. Two assumptions are typically…
Upper-Bound Estimates Of SEU in CMOS
NASA Technical Reports Server (NTRS)
Edmonds, Larry D.
1990-01-01
Theory of single-event upsets (SEU) (changes in logic state caused by energetic charged subatomic particles) in complementary metal oxide/semiconductor (CMOS) logic devices extended to provide upper-bound estimates of rates of SEU when limited experimental information available and configuration and dimensions of SEU-sensitive regions of devices unknown. Based partly on chord-length-distribution method.
Normal Theory Two-Stage ML Estimator When Data Are Missing at the Item Level
ERIC Educational Resources Information Center
Savalei, Victoria; Rhemtulla, Mijke
2017-01-01
In many modeling contexts, the variables in the model are linear composites of the raw items measured for each participant; for instance, regression and path analysis models rely on scale scores, and structural equation models often use parcels as indicators of latent constructs. Currently, no analytic estimation method exists to appropriately…
A Comparison of IRT Proficiency Estimation Methods under Adaptive Multistage Testing
ERIC Educational Resources Information Center
Kim, Sooyeon; Moses, Tim; Yoo, Hanwook
2015-01-01
This inquiry is an investigation of item response theory (IRT) proficiency estimators' accuracy under multistage testing (MST). We chose a two-stage MST design that includes four modules (one at Stage 1, three at Stage 2) and three difficulty paths (low, middle, high). We assembled various two-stage MST panels (i.e., forms) by manipulating two…
A Markov Chain Monte Carlo Approach to Confirmatory Item Factor Analysis
ERIC Educational Resources Information Center
Edwards, Michael C.
2010-01-01
Item factor analysis has a rich tradition in both the structural equation modeling and item response theory frameworks. The goal of this paper is to demonstrate a novel combination of various Markov chain Monte Carlo (MCMC) estimation routines to estimate parameters of a wide variety of confirmatory item factor analysis models. Further, I show…
ERIC Educational Resources Information Center
de la Torre, Jimmy; Patz, Richard J.
2005-01-01
This article proposes a practical method that capitalizes on the availability of information from multiple tests measuring correlated abilities given in a single test administration. By simultaneously estimating different abilities with the use of a hierarchical Bayesian framework, more precise estimates for each ability dimension are obtained.…
ERIC Educational Resources Information Center
Dai, Yunyun
2013-01-01
Mixtures of item response theory (IRT) models have been proposed as a technique to explore response patterns in test data related to cognitive strategies, instructional sensitivity, and differential item functioning (DIF). Estimation proves challenging due to difficulties in identification and questions of effect size needed to recover underlying…
An Extension of Least Squares Estimation of IRT Linking Coefficients for the Graded Response Model
ERIC Educational Resources Information Center
Kim, Seonghoon
2010-01-01
The three types (generalized, unweighted, and weighted) of least squares methods, proposed by Ogasawara, for estimating item response theory (IRT) linking coefficients under dichotomous models are extended to the graded response model. A simulation study was conducted to confirm the accuracy of the extended formulas, and a real data study was…
A Generative Theory of Relevance
2004-09-01
5.3.1.4 Parameter estimation with a dictionary; 5.3.1.5 Document ranking...engine [3]. The stemmer combines morphological rules with a large dictionary of special cases and exceptions. After stemming, 418 stop-words from the...goes over all Arabic training strings. Bulgarian definitions are identical. 5.3.1.4 Parameter estimation with a dictionary: Parallel and comparable
USDA-ARS?s Scientific Manuscript database
Quantifying global carbon and water balances requires accurate estimation of gross primary production (GPP) and evapotranspiration (ET), respectively, across space and time. Models that are based on the theory of light use efficiency (LUE) and water use efficiency (WUE) have emerged as efficient met...
Limits on estimating the width of thin tubular structures in 3D images.
Wörz, Stefan; Rohr, Karl
2006-01-01
This work studies limits on estimating the width of thin tubular structures in 3D images. Based on nonlinear estimation theory we analyze the minimal stochastic error of estimating the width. Given a 3D analytic model of the image intensities of tubular structures, we derive a closed-form expression for the Cramér-Rao bound of the width estimate under image noise. We use the derived lower bound as a benchmark and compare it with three previously proposed accuracy limits for vessel width estimation. Moreover, by experimental investigations we demonstrate that the derived lower bound can be achieved by fitting a 3D parametric intensity model directly to the image data.
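A one-dimensional toy version of the Cramér-Rao calculation described above: for a Gaussian-shaped intensity profile with additive Gaussian image noise, the Fisher information for the width is the summed squared sensitivity of the model to the width divided by the noise variance, and its inverse square root bounds the standard deviation of any unbiased width estimate. The profile model and numbers are illustrative, not the paper's 3D tubular model.

```python
import numpy as np

# 1D toy width CRB: Gaussian-like intensity profile of width w, sampled on a
# pixel grid, with additive Gaussian image noise of standard deviation sigma.
x = np.arange(-20.0, 20.0, 1.0)            # pixel positions
A, b, sigma = 100.0, 10.0, 5.0             # contrast, background, noise std

def profile(w):
    return b + A * np.exp(-x**2 / (2.0 * w**2))

def crb_width(w, eps=1e-4):
    dI_dw = (profile(w + eps) - profile(w - eps)) / (2 * eps)   # numerical derivative
    fisher = np.sum(dI_dw**2) / sigma**2
    return 1.0 / np.sqrt(fisher)           # lower bound on std of an unbiased estimate

for w in (1.0, 2.0, 4.0):
    print(f"width {w:.1f} px -> CRB on width estimate {crb_width(w):.4f} px")
```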
Knipfer, T; Fei, J; Gambetta, G A; Shackel, K A; Matthews, M A
2014-10-21
The cell-pressure-probe is a unique tool to study plant water relations in-situ. Inaccuracy in the estimation of cell volume (νo) is the major source of error in the calculation of both cell volumetric elastic modulus (ε) and cell hydraulic conductivity (Lp). Estimates of νo and Lp can be obtained with the pressure-clamp (PC) and pressure-relaxation (PR) methods. In theory, both methods should result in comparable νo and Lp estimates, but this has not been the case. In this study, the existing νo-theories for PC and PR methods were reviewed and clarified. A revised νo-theory was developed that is equally valid for the PC and PR methods. The revised theory was used to determine νo for two extreme scenarios of solute mixing between the experimental cell and sap in the pressure probe microcapillary. Using a fully automated cell-pressure-probe (ACPP) on leaf epidermal cells of Tradescantia virginiana, the validity of the revised theory was tested with experimental data. Calculated νo values from both methods were in the range of optically determined νo (=1.1-5.0nL) for T. virginiana. However, the PC method produced a systematically lower (21%) calculated νo compared to the PR method. Effects of solute mixing could only explain a potential error in calculated νo of <3%. For both methods, this discrepancy in νo was almost identical to the discrepancy in the measured ratio of ΔV/ΔP (total change in microcapillary sap volume versus corresponding change in cell turgor) of 19%, which is a fundamental parameter in calculating νo. It followed from the revised theory that the ratio of ΔV/ΔP was inversely related to the solute reflection coefficient. This highlighted that treating the experimental cell as an ideal osmometer in both methods is potentially not correct. Effects of non-ideal osmotic behavior by transmembrane solute movement may be minimized in the PR as compared to the PC method. Copyright © 2014 Elsevier Ltd. All rights reserved.
Integrated detection, estimation, and guidance in pursuit of a maneuvering target
NASA Astrophysics Data System (ADS)
Dionne, Dany
The thesis focuses on efficient solutions of non-cooperative pursuit-evasion games with imperfect information on the state of the system. This problem is important in the context of interception of future maneuverable ballistic missiles. However, the theoretical developments are expected to find application to a broad class of hybrid control and estimation problems in industry. The validity of the results is nevertheless confirmed using a benchmark problem in the area of terminal guidance. A specific interception scenario between an incoming target with no information and a single interceptor missile with noisy measurements is analyzed in the form of a linear hybrid system subject to additive abrupt changes. The general research aims to achieve improved homing accuracy by integrating ideas from detection theory, state estimation theory and guidance. The results achieved can be summarized as follows. (i) Two novel maneuver detectors are developed to diagnose abrupt changes in a class of hybrid systems (detection and isolation of evasive maneuvers): a new implementation of the GLR detector and the novel adaptive-H0 GLR detector. (ii) Two novel state estimators for target tracking are derived using the novel maneuver detectors. The state estimators employ a parameterized family of functions to describe possible evasive maneuvers. (iii) A novel adaptive Bayesian multiple model predictor of the ballistic miss is developed which employs semi-Markov models and ideas from detection theory. (iv) A novel integrated estimation and guidance scheme that significantly improves the homing accuracy is also presented. The integrated scheme employs banks of estimators and guidance laws, a maneuver detector, and an on-line governor; the scheme is adaptive with respect to the uncertainty affecting the probability density function of the filtered state. (v) A novel discretization technique for the family of continuous-time, game theoretic, bang-bang guidance laws is introduced. The performance of the novel algorithms is assessed for the scenario of a pursuit-evasion engagement between a randomly maneuvering ballistic missile and an interceptor. Extensive Monte Carlo simulations are employed to evaluate the main statistical properties of the algorithms. (Abstract shortened by UMI.)
Truong, Q T; Nguyen, Q V; Truong, V T; Park, H C; Byun, D Y; Goo, N S
2011-09-01
We present an unsteady blade element theory (BET) model to estimate the aerodynamic forces produced by a freely flying beetle and a beetle-mimicking flapping wing system. Added mass and rotational forces are included to accommodate the unsteady force. In addition to the aerodynamic forces, the inertial forces of the wings are also calculated, which is needed to accurately estimate the time history of the forces. All of the force components are considered based on the full three-dimensional (3D) motion of the wing. The result obtained by the present BET model is validated against data presented in a reference paper. The difference between the averages of the estimated forces (lift and drag) and the measured forces in the reference is about 5.7%. The BET model is also used to estimate the force produced by a freely flying beetle and a beetle-mimicking flapping wing system. The wing kinematics used in the BET calculation of a real beetle and the flapping wing system are captured using high-speed cameras. The results show that the average estimated vertical force of the beetle is reasonably close to the weight of the beetle, and the average estimated thrust of the beetle-mimicking flapping wing system is in good agreement with the measured value. Our results show that the unsteady lift and drag coefficients measured by Dickinson et al. are still useful for relatively higher Reynolds number cases, and the proposed BET can be a good way to estimate the force produced by a flapping wing system.
Estimation of the ARNO model baseflow parameters using daily streamflow data
NASA Astrophysics Data System (ADS)
Abdulla, F. A.; Lettenmaier, D. P.; Liang, Xu
1999-09-01
An approach is described for estimation of baseflow parameters of the ARNO model, using historical baseflow recession sequences extracted from daily streamflow records. This approach allows four of the model parameters to be estimated without rainfall data, and effectively facilitates partitioning of the parameter estimation procedure so that parsimonious search procedures can be used to estimate the remaining storm response parameters separately. Three methods of optimization are evaluated for estimation of the four baseflow parameters. These methods are the downhill Simplex (S), Simulated Annealing combined with the Simplex method (SA), and Shuffled Complex Evolution (SCE). These estimation procedures are explored in conjunction with four objective functions: (1) ordinary least squares; (2) ordinary least squares with Box-Cox transformation; (3) ordinary least squares on prewhitened residuals; (4) ordinary least squares applied to prewhitened, Box-Cox transformed residuals. The effects of changing the random-number seed for both the SA and SCE methods are also explored, as are the effects of the bounds of the parameters. Although all schemes converge to the same values of the objective function, the SCE method was found to be less sensitive to these issues than both the SA and the Simplex schemes. Parameter uncertainty and interactions are investigated through estimation of the variance-covariance matrix and confidence intervals. As expected, the parameters were found to be correlated and the covariance matrix was found not to be diagonal. Furthermore, the linearized confidence interval theory failed for about one-fourth of the catchments while the maximum likelihood theory did not fail for any of the catchments.
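The estimation step described above can be sketched generically: fit a recession model to an observed baseflow recession by minimizing an objective with a derivative-free simplex search. The three-parameter exponential recession below is a stand-in for the ARNO baseflow component (which has four parameters), and the log-transform objective is a crude stand-in for the Box-Cox variant; the data are synthetic.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical recession model (a stand-in for the ARNO baseflow component):
# Q(t) = q0 * exp(-t / k) + c, fitted to one extracted recession sequence.
def recession(params, t):
    q0, k, c = params
    return q0 * np.exp(-t / k) + c

t = np.arange(60.0)
rng = np.random.default_rng(4)
obs = recession((12.0, 15.0, 0.8), t) * np.exp(0.05 * rng.standard_normal(t.size))

# Objective 1: ordinary least squares on flows.
ols = lambda p: np.sum((obs - recession(p, t)) ** 2)
# Objective 2: least squares on log flows (a crude stand-in for a Box-Cox transform).
ols_log = lambda p: np.sum((np.log(obs) - np.log(np.clip(recession(p, t), 1e-9, None))) ** 2)

for name, obj in [("OLS", ols), ("log-OLS", ols_log)]:
    res = minimize(obj, x0=[10.0, 10.0, 0.5], method="Nelder-Mead")   # downhill simplex
    print(name, "->", np.round(res.x, 3))
```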
NASA Astrophysics Data System (ADS)
Weibust, E.
Improvements to a missile aerodynamics program which enable it to (a) calculate aerodynamic coefficients as input for a flight mechanics model, (b) check manufacturers' data or estimate performance from photographs, (c) reduce wind tunnel testing, and (d) aid optimization studies, are discussed. Slender body theory is used for longitudinal damping derivatives prediction. Program predictions were compared to known values. Greater accuracy is required in the estimation of drag due to excrescences on actual missile configurations, the influence of a burning motor, and nonlinear effects in the stall region. Prediction of pressure centers on wings and on bodies in presence of wings must be improved.
Simmering, Vanessa R
2016-09-01
Working memory is a vital cognitive skill that underlies a broad range of behaviors. Higher cognitive functions are reliably predicted by working memory measures from two domains: children's performance on complex span tasks, and infants' performance in looking paradigms. Despite the similar predictive power across these research areas, theories of working memory development have not connected these different task types and developmental periods. The current project takes a first step toward bridging this gap by presenting a process-oriented theory, focusing on two tasks designed to assess visual working memory capacity in infants (the change-preference task) versus children and adults (the change detection task). Previous studies have shown inconsistent results, with capacity estimates increasing from one to four items during infancy, but only two to three items during early childhood. A probable source of this discrepancy is the different task structures used with each age group, but prior theories were not sufficiently specific to explain how performance relates across tasks. The current theory focuses on cognitive dynamics, that is, how memory representations are formed, maintained, and used within specific task contexts over development. This theory was formalized in a computational model to generate three predictions: 1) capacity estimates in the change-preference task should continue to increase beyond infancy; 2) capacity estimates should be higher in the change-preference versus change detection task when tested within individuals; and 3) performance should correlate across tasks because both rely on the same underlying memory system. I also tested a fourth prediction, that development across tasks could be explained through increasing real-time stability, realized computationally as strengthening connectivity within the model. Results confirmed these predictions, supporting the cognitive dynamics account of performance and developmental changes in real-time stability. The monograph concludes with implications for understanding memory, behavior, and development in a broader range of cognitive development. © 2016 The Society for Research in Child Development, Inc.
Towards an exact correlated orbital theory for electrons
NASA Astrophysics Data System (ADS)
Bartlett, Rodney J.
2009-12-01
The formal and computational attraction of effective one-particle theories like Hartree-Fock and density functional theory raises the question of how far such approaches can be taken to offer exact results for selected properties of electrons in atoms, molecules, and solids. Some properties can be exactly described within an effective one-particle theory, like principal ionization potentials and electron affinities. This fact can be used to develop equations for a correlated orbital theory (COT) that guarantees a correct one-particle energy spectrum. They are built upon a coupled-cluster-based, frequency-independent self-energy operator presented here, which distinguishes the approach from Dyson theory. The COT also offers an alternative to Kohn-Sham density functional theory (DFT), whose objective is to represent the electronic density exactly as a single determinant, while paying less attention to the energy spectrum. For any estimate of the two-electron terms, COT offers a litmus test of its accuracy for principal Ip's and Ea's. This feature for approximating the COT equations is illustrated numerically.
Zou, J; Saven, J G
2000-02-11
A self-consistent theory is presented that can be used to estimate the number and composition of sequences satisfying a predetermined set of constraints. The theory is formulated so as to examine the features of sequences having a particular value of Delta=E(f)-
NASA Technical Reports Server (NTRS)
Lee, Timothy J.; Head-Gordon, Martin; Rendell, Alistair P.; Langhoff, Stephen R. (Technical Monitor)
1995-01-01
A diagnostic for perturbation theory calculations, S(sub 2), is defined and numerical results are compared to the established T(sub 1) diagnostic from coupled-cluster theory. S(sub 2) is the lowest order non-zero contribution to a perturbation expansion of T(sub 1). S(sub 2) is a reasonable estimate of the importance of non-dynamical electron correlation, although not as reliable as T(sub 1). S(sub 2) values less than or equal to 0.012 suggest that low orders of perturbation theory should yield reasonable results; S(sub 2) values between 0.012-0.015 suggest that caution is required in interpreting results from low orders of perturbation theory; S(sub 2) values greater than or equal to 0.015 indicate that low orders of perturbation theory are not reliable for accurate results. Although not required mathematically, S(sub 2) is always less than T(sub 1) for the examples studied here.
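The interpretation bands quoted above translate directly into a small helper; the numerical thresholds are taken from the abstract, and everything else (names, example values) is illustrative.

```python
def s2_guidance(s2):
    """Map an S2 diagnostic value onto the interpretation bands quoted above."""
    if s2 <= 0.012:
        return "low orders of perturbation theory should yield reasonable results"
    elif s2 < 0.015:
        return "caution: interpret low-order perturbation theory results carefully"
    return "low orders of perturbation theory are not reliable for accurate results"

for s2 in (0.008, 0.013, 0.020):
    print(s2, "->", s2_guidance(s2))
```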
Performance of some nucleation theories with a nonsharp droplet-vapor interface.
Napari, Ismo; Julin, Jan; Vehkamäki, Hanna
2010-10-21
Nucleation theories involving the concept of nonsharp boundary between the droplet and vapor are compared to recent molecular dynamics (MD) simulation data of Lennard-Jones vapors at temperatures above the triple point. The theories are diffuse interface theory (DIT), extended modified liquid drop-dynamical nucleation theory (EMLD-DNT), square gradient theory (SGT), and density functional theory (DFT). Particular attention is paid to thermodynamic consistency in the comparison: the applied theories either use or, with a proper parameter adjustment, result in the same values of equilibrium vapor pressure, bulk liquid density, and surface tension as the MD simulations. Realistic pressure-density correlations are also used. The best agreement between the simulated nucleation rates and calculations is obtained from DFT, SGT, and EMLD-DNT, all of which, in the studied temperature range, show deviations of less than one order of magnitude in the nucleation rate. DIT underestimates the nucleation rate by up to two orders of magnitude. DFT and SGT give the best estimate of the molecular content of the critical nuclei. Overall, at the vapor conditions of this study, all the investigated theories perform better than classical nucleation theory in predicting nucleation rates.
Fifth Annual Flight Mechanics/Estimation Theory Symposium
NASA Technical Reports Server (NTRS)
Teles, J. (Editor)
1980-01-01
Various aspects of astrodynamics are considered including orbit calculations and trajectory determination. Other topics dealing with remote sensing systems, satellite navigation, and attitude control are included.
Error Analyses of the North Alabama Lightning Mapping Array (LMA)
NASA Technical Reports Server (NTRS)
Koshak, W. J.; Solokiewicz, R. J.; Blakeslee, R. J.; Goodman, S. J.; Christian, H. J.; Hall, J. M.; Bailey, J. C.; Krider, E. P.; Bateman, M. G.; Boccippio, D. J.
2003-01-01
Two approaches are used to characterize how accurately the North Alabama Lightning Mapping Array (LMA) is able to locate lightning VHF sources in space and in time. The first method uses a Monte Carlo computer simulation to estimate source retrieval errors. The simulation applies a VHF source retrieval algorithm that was recently developed at the NASA-MSFC and that is similar, but not identical to, the standard New Mexico Tech retrieval algorithm. The second method uses a purely theoretical technique (i.e., chi-squared Curvature Matrix theory) to estimate retrieval errors. Both methods assume that the LMA system has an overall rms timing error of 50 ns, but all other possible errors (e.g., multiple sources per retrieval attempt) are neglected. The detailed spatial distributions of retrieval errors are provided. Given that the two methods are completely independent of one another, it is shown that they provide remarkably similar results, except that the chi-squared theory produces larger altitude error estimates than the (more realistic) Monte Carlo simulation.
Reig, L; Amigó, V; Busquets, D; Calero, J A; Ortiz, J L
2012-08-01
Porous Ti6Al4V samples were produced by microsphere sintering. The Zero-Order Reaction Rate Model and Transition State Theory were used to model the sintering process and to estimate the bending strength of the porous samples developed. The evolution of the surface area during the sintering process was used to obtain sintering parameters (sintering constant, activation energy, frequency factor, constant of activation and Gibbs energy of activation). These were then correlated with the bending strength in order to obtain a simple model with which to estimate the evolution of the bending strength of the samples when the sintering temperature and time are modified: σ_Y = P + B·[ln(T·t) - ΔG_a/(R·T)]. Although the sintering parameters were obtained only for the microsphere sizes analysed here, the strength of intermediate sizes could easily be estimated following this model. Copyright © 2012 Elsevier B.V. All rights reserved.
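Reading the bracketed term as ln(T·t) - ΔG_a/(R·T), the strength model can be evaluated directly. The parameter values below (P, B, ΔG_a, and the units of the result) are invented placeholders, since the fitted constants are material-specific and not given in the abstract.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def bending_strength(T, t, P, B, dGa):
    """sigma_Y = P + B*[ln(T*t) - dGa/(R*T)], with T in kelvin and t in seconds."""
    return P + B * (np.log(T * t) - dGa / (R * T))

# Purely illustrative constants; P, B and dGa are material-specific fitted values.
print(bending_strength(T=1473.0, t=2.0 * 3600.0, P=200.0, B=30.0, dGa=1.5e5))
```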
Fuzzy Modal Control Applied to Smart Composite Structure
NASA Astrophysics Data System (ADS)
Koroishi, E. H.; Faria, A. W.; Lara-Molina, F. A.; Steffen, V., Jr.
2015-07-01
This paper proposes an active vibration control technique, based on Fuzzy Modal Control, applied to a piezoelectric actuator bonded to a composite structure, forming a so-called smart composite structure. Fuzzy Modal Controllers were found to be well adapted for controlling structures with nonlinear behavior, whose characteristics change considerably with respect to time. The smart composite structure was modelled by using a so-called mixed theory. This theory uses a single equivalent layer for the discretization of the mechanical displacement field and a layerwise representation of the electrical field. Temperature effects are neglected. For numerical reasons it was necessary to reduce the size of the model of the smart composite structure so that the design of the controllers and the estimator could be performed. The role of the Kalman estimator in the present contribution is to estimate the modal states of the system, which are used by the Fuzzy Modal controllers. Simulation results illustrate the effectiveness of the proposed vibration control methodology for composite structures.
An opening criterion for dust gaps in protoplanetary discs
NASA Astrophysics Data System (ADS)
Dipierro, Giovanni; Laibe, Guillaume
2017-08-01
We aim to understand under which conditions a low-mass planet can open a gap in viscous dusty protoplanetary discs. For this purpose, we extend the theory of dust radial drift to include the contribution from the tides of an embedded planet and from the gas viscous forces. From this formalism, we derive (I) a grain-size-dependent criterion for dust gap opening in discs, (II) an estimate of the location of the outer edge of the dust gap and (III) an estimate of the minimum Stokes number above which low-mass planets are able to carve gaps that appear only in the dust disc. These analytical estimates are particularly helpful to appraise the minimum mass of a hypothetical planet carving gaps in discs observed at long wavelengths and high resolution. We validate the theory against 3D smoothed particle hydrodynamics simulations of planet-disc interaction in a broad range of dusty protoplanetary discs. We find a remarkable agreement between the theoretical model and the numerical experiments.
NASA Technical Reports Server (NTRS)
Song, Y. Tony
2006-01-01
The Asian Marginal Seas are interconnected by a number of narrow straits, such as the Makassar Strait connecting the Pacific Ocean with the Indian Ocean, the Luzon Strait connecting the South China Sea with the Pacific Ocean, and the Korea/Tsushima Strait connecting the East China Sea with the Japan/East Sea. Here we propose a method that combines the "geostrophic control" formula of Garrett and Toulany (1982) with the "hydraulic control" theory of Whitehead et al. (1974), allowing satellite-observed sea-surface-height (SSH) and ocean-bottom-pressure (OBP) data to be used to estimate interbasin transport. The new method also allows separating the interbasin transport into surface and bottom fluxes that play an important role in maintaining the mass balance of the regional oceans. Comparison with model results demonstrates that the combined method can estimate the seasonal variability of the strait transports and is significantly better than using SSH or OBP alone.
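For orientation, here is a small sketch of the two classical scalings that such a combined method draws on: a geostrophic-control transport of the form Q ~ g·Δη·H/f and a rotating hydraulic-control bound of the form Q ~ g'·h²/(2f). These are the textbook versions of the formulas, not the paper's combined expression, and every number below is illustrative.

```python
g = 9.81     # m/s^2
f = 7.3e-5   # s^-1, Coriolis parameter (illustrative mid-latitude value)

def geostrophic_control_transport(delta_eta, depth):
    """Geostrophic-control scaling Q ~ g * delta_eta * H / f, with delta_eta
    the sea-surface-height difference between basins and H the strait depth."""
    return g * delta_eta * depth / f

def hydraulic_control_transport(g_reduced, h_upstream):
    """Rotating hydraulic-control bound Q ~ g' * h_u**2 / (2 f), with g' the
    reduced gravity and h_u the upstream upper-layer thickness."""
    return g_reduced * h_upstream**2 / (2.0 * f)

# Illustrative numbers only (not from the paper); results in Sverdrups
print(geostrophic_control_transport(delta_eta=0.1, depth=500.0) / 1e6)
print(hydraulic_control_transport(g_reduced=0.02, h_upstream=300.0) / 1e6)
```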
Estimating the Wet-Rock P-Wave Velocity from the Dry-Rock P-Wave Velocity for Pyroclastic Rocks
NASA Astrophysics Data System (ADS)
Kahraman, Sair; Fener, Mustafa; Kilic, Cumhur Ozcan
2017-07-01
Seismic methods are widely used for geotechnical investigations in volcanic areas and for the determination of the engineering properties of pyroclastic rocks in the laboratory. Therefore, a relation between the wet- and dry-rock P-wave velocities would be helpful for engineers when evaluating the formation characteristics of pyroclastic rocks. To investigate the predictability of the wet-rock P-wave velocity from the dry-rock P-wave velocity for pyroclastic rocks, P-wave velocity measurements were conducted on 27 different pyroclastic rocks. In addition, dry-rock S-wave velocity measurements were conducted. The test results were modeled using Gassmann's and Wood's theories, and it was seen that the saturated P-wave velocities estimated from the theories fit the measured data well. For samples with values below and above 20%, practical equations were derived for reliably estimating the wet-rock P-wave velocity as a function of the dry-rock P-wave velocity.
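As a pointer to how such an estimate works in practice, the sketch below applies Gassmann fluid substitution, the standard route from dry-rock velocities to a saturated P-wave velocity (shear modulus assumed unchanged by saturation). The rock and fluid properties in the example are illustrative, not values from the study.

```python
import numpy as np

def gassmann_vp_wet(vp_dry, vs_dry, rho_dry, phi, K_min, K_fl, rho_fl):
    """Estimate the wet-rock P-wave velocity from dry-rock velocities using
    Gassmann's equation. All quantities in SI units."""
    # Dry-rock elastic moduli from velocities and density
    mu = rho_dry * vs_dry**2
    K_dry = rho_dry * vp_dry**2 - (4.0 / 3.0) * mu
    # Gassmann: saturated bulk modulus
    num = (1.0 - K_dry / K_min) ** 2
    den = phi / K_fl + (1.0 - phi) / K_min - K_dry / K_min**2
    K_sat = K_dry + num / den
    # Saturated density and P-wave velocity
    rho_sat = rho_dry + phi * rho_fl
    return np.sqrt((K_sat + (4.0 / 3.0) * mu) / rho_sat)

# Illustrative inputs for a porous tuff-like sample (not data from the paper)
print(gassmann_vp_wet(vp_dry=2500.0, vs_dry=1500.0, rho_dry=1800.0,
                      phi=0.25, K_min=37e9, K_fl=2.25e9, rho_fl=1000.0))
```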
Sizing gaseous emboli using Doppler embolic signal intensity.
Banahan, Caroline; Hague, James P; Evans, David H; Patel, Rizwan; Ramnarine, Kumar V; Chung, Emma M L
2012-05-01
Extension of transcranial Doppler embolus detection to estimation of bubble size has historically been hindered by difficulties in applying scattering theory to the interpretation of clinical data. This article presents a simplified approach to the sizing of air emboli based on analysis of Doppler embolic signal intensity, using an approximation to the full scattering theory that can be solved to estimate embolus size. Tests using simulated emboli show that our algorithm is theoretically capable of sizing 90% of "emboli" to within 10% of their true radius. In vitro tests show that 69% of emboli can be sized to within 20% of their true value under ideal conditions, falling to 30% of emboli if the beam and vessel are severely misaligned. Our results demonstrate that estimation of bubble size could be used to distinguish benign microbubbles from potentially harmful macrobubbles during intraoperative clinical monitoring. Copyright © 2012 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
Estimating Fluctuating Pressures From Distorted Measurements
NASA Technical Reports Server (NTRS)
Whitmore, Stephen A.; Leondes, Cornelius T.
1994-01-01
Two algorithms extract estimates of time-dependent input (upstream) pressures from the outputs of pressure sensors located at the downstream ends of pneumatic tubes. The algorithms effect deconvolutions that account for the distorting effects of the tube on the pressure signal. Distortion of pressure measurements by pneumatic tubes is also discussed in "Distortion of Pressure Signals in Pneumatic Tubes" (ARC-12868). The time-varying input pressure is estimated from the measured time-varying output pressure by one of two deconvolution algorithms that take account of measurement noise. The algorithms are based on minimum-covariance (Kalman filtering) theory.
Volumetric calculations in an oil field: The basis method
Olea, R.A.; Pawlowsky, V.; Davis, J.C.
1993-01-01
The basis method for estimating oil reserves in place is compared to a traditional procedure that uses ordinary kriging. In the basis method, auxiliary variables that sum to the net thickness of pay are estimated by cokriging. In theory, the procedure should be more powerful because it makes full use of the cross-correlation between variables and forces the original variables to honor interval constraints. However, at least in our case study, the practical advantages of cokriging for estimating oil in place are marginal. © 1993.
An Estimation of the Logarithmic Timescale in Ergodic Dynamics
NASA Astrophysics Data System (ADS)
Gomez, Ignacio S.
An estimation of the logarithmic timescale in quantum systems having ergodic dynamics in the semiclassical limit is presented. The estimation is based on an extension of Krieger's finite generator theorem for discretized σ-algebras and uses the time rescaling property of the Kolmogorov-Sinai entropy. The results are in agreement with those obtained in the literature, but with simpler mathematics and within the context of ergodic theory. Moreover, some consequences of Poincaré's recurrence theorem are also explored.
Viking Landers and remote sensing
NASA Technical Reports Server (NTRS)
Moore, H. J.; Jakosky, B. M.; Christensen, P. R.
1987-01-01
Thermal and radar remote sensing signatures of the materials in the lander sample fields can be crudely estimated from evaluations of their physical-mechanical properties, laboratory data on thermal conductivities and dielectric constants, and theory. The estimated thermal inertias and dielectric constants of some of the materials in the sample fields are close to modal values estimated from orbital and earth-based observations. This suggests that the mechanical properties of the surface materials over much of Mars will not be significantly different from those at the landing sites.
NASA Astrophysics Data System (ADS)
Morimoto, T.; Yoshida, F.; Yanagida, A.; Yanagimoto, J.
2015-04-01
First, a hardening model for f.c.c. metals was formulated with collinear interaction slips, Hirth slips, and Lomer-Cottrell slips. Using the Taylor and Sachs rolling texture prediction models, the residual dislocation densities of cold-rolled commercially pure aluminum were estimated. Then, coincidence site lattice grains were investigated from the observed cold rolling texture. Finally, on the basis of oriented nucleation theory and coincidence site lattice theory, the recrystallization texture of commercially pure aluminum after low-temperature annealing was predicted.
Vispoel, Walter P; Morris, Carrie A; Kilinc, Murat
2018-01-01
We applied a new approach to Generalizability theory (G-theory) involving parallel splits and repeated measures to evaluate common uses of the Paulhus Deception Scales based on polytomous and four types of dichotomous scoring. G-theory indices of reliability and validity accounting for specific-factor, transient, and random-response measurement error supported the use of polytomous over dichotomous scores as contamination checks; as control, explanatory, and outcome variables; as aspects of construct validation; and as indexes of environmental effects on socially desirable responding. Polytomous scoring also provided results for flagging faking that were as dependable as those obtained with dichotomous scoring methods. These findings argue strongly against the nearly exclusive use of dichotomous scoring for the Paulhus Deception Scales in practice and underscore the value of G-theory in demonstrating this. We provide guidelines for applying our G-theory techniques to other objectively scored clinical assessments, for using G-theory to estimate how changes to a measure might improve reliability, and for obtaining software to conduct G-theory analyses free of charge.
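The abstract does not spell out the computations, so the following is a minimal sketch of a standard single-facet G-study (persons crossed with items): variance components are estimated from the two-way ANOVA mean squares and combined into generalizability and dependability coefficients. This is generic G-theory machinery, not the authors' parallel-split, repeated-measures design, and the data are synthetic.

```python
import numpy as np

def g_study_p_x_i(scores):
    """Variance components and G-theory coefficients for a crossed
    persons x items design (rows = persons, columns = items)."""
    n_p, n_i = scores.shape
    grand = scores.mean()
    person_means = scores.mean(axis=1)
    item_means = scores.mean(axis=0)
    # Mean squares from the two-way ANOVA decomposition
    ss_p = n_i * ((person_means - grand) ** 2).sum()
    ss_i = n_p * ((item_means - grand) ** 2).sum()
    ss_res = ((scores - person_means[:, None] - item_means[None, :] + grand) ** 2).sum()
    ms_p = ss_p / (n_p - 1)
    ms_i = ss_i / (n_i - 1)
    ms_res = ss_res / ((n_p - 1) * (n_i - 1))
    # Expected-mean-square estimates of the variance components
    var_p = max((ms_p - ms_res) / n_i, 0.0)
    var_i = max((ms_i - ms_res) / n_p, 0.0)
    var_res = ms_res
    # Generalizability (relative) and dependability (absolute) coefficients
    g_coef = var_p / (var_p + var_res / n_i)
    phi_coef = var_p / (var_p + (var_i + var_res) / n_i)
    return var_p, var_i, var_res, g_coef, phi_coef

rng = np.random.default_rng(0)
data = rng.normal(size=(50, 10)) + rng.normal(size=(50, 1))  # persons differ systematically
print(g_study_p_x_i(data))
```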
Refinement of Timoshenko Beam Theory for Composite and Sandwich Beams Using Zigzag Kinematics
NASA Technical Reports Server (NTRS)
Tessler, Alexander; DiSciuva, Marco; Gherlone, Marco
2007-01-01
A new refined theory for laminated-composite and sandwich beams that contains the kinematics of the Timoshenko Beam Theory as a proper baseline subset is presented. This variationally consistent theory is derived from the virtual work principle and employs a novel piecewise linear zigzag function that provides a more realistic representation of the deformation states of transverse shear flexible beams than other similar theories. This new zigzag function is unique in that it vanishes at the top and bottom bounding surfaces of a beam. The formulation does not enforce continuity of the transverse shear stress across the beam's cross-section, yet is robust. Two major shortcomings of previous zigzag theories, shear-force inconsistency and difficulties in simulating clamped boundary conditions, which have greatly limited their utility, are discussed in detail. An approach that has successfully resolved these shortcomings is presented herein. This new theory can be readily extended to plate and shell structures, and should be useful for obtaining accurate estimates of the structural response of laminated composites.
NASA Technical Reports Server (NTRS)
Banks, H. T.; Kunisch, K.
1982-01-01
Approximation results from linear semigroup theory are used to develop a general framework for convergence of approximation schemes in parameter estimation and optimal control problems for nonlinear partial differential equations. These ideas are used to establish theoretical convergence results for parameter identification using modal (eigenfunction) approximation techniques. Results from numerical investigations of these schemes for both hyperbolic and parabolic systems are given.
2004-07-01
five qualitative methods, each a potential candidate for conducting this study. Of the five methods listed, the grounded theory method fit this study ... Strauss and Corbin define the grounded theory approach as a qualitative research method that uses a systematic set of procedures to develop and ... research question may also be used" (Leedy and Ormrod, 2001). The primary research method
A review of volume‐area scaling of glaciers
Bahr, David B.; Kaser, Georg
2015-01-01
Volume‐area power law scaling, one of a set of analytical scaling techniques based on principles of dimensional analysis, has become an increasingly important and widely used method for estimating the future response of the world's glaciers and ice caps to environmental change. Over 60 papers since 1988 have been published in the glaciological and environmental change literature containing applications of volume‐area scaling, mostly for the purpose of estimating total global glacier and ice cap volume and modeling future contributions to sea level rise from glaciers and ice caps. The application of the theory is not entirely straightforward, however, and many of the recently published results contain analyses that are in conflict with the theory as originally described by Bahr et al. (1997). In this review we describe the general theory of scaling for glaciers in full three‐dimensional detail without simplifications, including an improved derivation of both the volume‐area scaling exponent γ and a new derivation of the multiplicative scaling parameter c. We discuss some common misconceptions of the theory, presenting examples of both appropriate and inappropriate applications. We also discuss potential future developments in power law scaling beyond its present uses, the relationship between power law scaling and other modeling approaches, and some of the advantages and limitations of scaling techniques. PMID:27478877
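As a pointer to how the scaling is used in practice, here is a one-line sketch of the power law V = c·A^γ. The exponent γ = 1.375 is the theoretical value for glaciers discussed in this literature; the value of c below is only an illustrative placeholder (the review derives c rather than prescribing a single number).

```python
def glacier_volume_km3(area_km2, c=0.034, gamma=1.375):
    """Volume-area power law V = c * A**gamma for a glacier.
    gamma = 1.375 is the theoretical exponent for glaciers; c is the
    multiplicative parameter (illustrative value, units km^(3 - 2*gamma))."""
    return c * area_km2 ** gamma

# Example: a 10 km^2 glacier gives roughly 0.8 km^3 with these parameters
print(glacier_volume_km3(10.0))
```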
Multistage point relascope and randomized branch sampling for downed coarse woody debris estimation
Jeffrey H. Gove; Mark J. Ducey; Harry T. Valentine
2002-01-01
New sampling methods have recently been introduced that allow estimation of downed coarse woody debris using an angle gauge, or relascope. The theory behind these methods is based on sampling straight pieces of downed coarse woody debris. When pieces deviate from this ideal situation, auxiliary methods must be employed. We describe a two-stage procedure where the...
ERIC Educational Resources Information Center
Jones, Douglas H.
The progress of modern mental test theory depends very much on the techniques of maximum likelihood estimation, and many popular applications make use of likelihoods induced by logistic item response models. While, in reality, item responses are nonreplicate within a single examinee and the logistic models are only ideal, practitioners make…
ERIC Educational Resources Information Center
Magis, David; Raiche, Gilles
2010-01-01
In this article the authors focus on the issue of the nonuniqueness of the maximum likelihood (ML) estimator of proficiency level in item response theory (with special attention to logistic models). The usual maximum a posteriori (MAP) method offers a good alternative within that framework; however, this article highlights some drawbacks of its…
ERIC Educational Resources Information Center
Kim, Jee-Seon; Bolt, Daniel M.
2007-01-01
The purpose of this ITEMS module is to provide an introduction to Markov chain Monte Carlo (MCMC) estimation for item response models. A brief description of Bayesian inference is followed by an overview of the various facets of MCMC algorithms, including discussion of prior specification, sampling procedures, and methods for evaluating chain…
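Since the module abstract is truncated, the following is only a minimal, generic illustration of MCMC estimation for an item response model: a random-walk Metropolis sampler for a single examinee's ability under a Rasch (1PL) model with known item difficulties and a standard normal prior. The item difficulties and responses are made up for the example.

```python
import numpy as np

def rasch_loglik(theta, responses, difficulties):
    """Log-likelihood of a Rasch (1PL) model for one examinee."""
    p = 1.0 / (1.0 + np.exp(-(theta - difficulties)))
    return np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))

def sample_ability(responses, difficulties, n_iter=5000, step=0.5, seed=0):
    """Random-walk Metropolis sampler for ability with a N(0, 1) prior."""
    rng = np.random.default_rng(seed)
    theta = 0.0
    log_post = rasch_loglik(theta, responses, difficulties) - 0.5 * theta**2
    draws = []
    for _ in range(n_iter):
        prop = theta + step * rng.normal()
        log_post_prop = rasch_loglik(prop, responses, difficulties) - 0.5 * prop**2
        if np.log(rng.uniform()) < log_post_prop - log_post:
            theta, log_post = prop, log_post_prop
        draws.append(theta)
    return np.array(draws)

b = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])   # hypothetical item difficulties
x = np.array([1, 1, 1, 0, 0])               # hypothetical item responses
post = sample_ability(x, b)
print(post[1000:].mean(), post[1000:].std())  # posterior mean and SD of ability
```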
ERIC Educational Resources Information Center
Boskin, Michael J.
A model of occupational choice based on the theory of human capital is developed and estimated by conditional logit analysis. The empirical results estimated the probability of individuals with certain characteristics (such as race, sex, age, and education) entering each of 11 occupational groups. The results indicate that individuals tend to…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sweezy, Jeremy Ed
A photon next-event fluence estimator at a point has been implemented in the Monte Carlo Application Toolkit (MCATK). The next-event estimator provides an expected value estimator for the flux at a point due to all source and collision events. An advantage of the next-event estimator over track-length estimators, which are normally employed in MCATK, is that flux estimates can be made in locations that have no random walk particle tracks. The next-event estimator allows users to calculate radiographs and estimate response for detectors outside of the modeled geometry. The next-event estimator is not yet accessible through the MCATK FlatAPI for C and Fortran. The next-event estimator in MCATK has been tested against MCNP6 using 5 suites of test problems. No issues were found in the MCATK implementation. One issue was found in the exclusion radius approximation in MCNP6. The theory, implementation, and testing are described in this document.
Diffusion MRI noise mapping using random matrix theory
Veraart, Jelle; Fieremans, Els; Novikov, Dmitry S.
2016-01-01
Purpose: To estimate the spatially varying noise map using a redundant magnitude MR series. Methods: We exploit redundancy in non-Gaussian multi-directional diffusion MRI data by identifying its noise-only principal components, based on the theory of noisy covariance matrices. The bulk of PCA eigenvalues, arising due to noise, is described by the universal Marchenko-Pastur distribution, parameterized by the noise level. This allows us to estimate the noise level in a local neighborhood based on the singular value decomposition of a matrix combining neighborhood voxels and diffusion directions. Results: We present a model-independent local noise mapping method capable of estimating the noise level down to about 1% error. In contrast to current state-of-the-art techniques, the resultant noise maps do not show artifactual anatomical features that often reflect physiological noise, the presence of sharp edges, or a lack of adequate a priori knowledge of the expected form of MR signal. Conclusions: Simulations and experiments show that typical diffusion MRI data exhibit sufficient redundancy to enable accurate, precise, and robust estimation of the local noise level by interpreting the PCA eigenspectrum in terms of the Marchenko-Pastur distribution. PMID:26599599
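One simplified way to operationalize this idea (not the authors' exact published algorithm) is to scan over the number of retained signal components and accept the smallest number for which the remaining eigenvalues are consistent with a Marchenko-Pastur bulk, whose full width is 4σ²√((M−p)/N). A rough sketch, with all details treated as assumptions:

```python
import numpy as np

def mp_noise_estimate(X):
    """Simplified noise-level estimate from the PCA eigenspectrum of a matrix
    X (M variables x N observations), assuming the smallest eigenvalues form
    a Marchenko-Pastur noise bulk. Returns (sigma, n_signal_components)."""
    M, N = X.shape
    # Eigenvalues of the sample covariance matrix, in descending order
    lam = np.linalg.svd(X, compute_uv=False) ** 2 / N
    for p in range(M - 1):
        bulk = lam[p:]                               # candidate noise eigenvalues
        sigma2 = bulk.mean()                         # MP bulk has mean sigma^2
        gamma = (M - p) / N
        edge_width = 4.0 * sigma2 * np.sqrt(gamma)   # lambda_plus - lambda_minus
        if bulk[0] - bulk[-1] < edge_width:          # spread consistent with MP bulk
            return np.sqrt(sigma2), p
    return np.sqrt(lam[-1]), M - 1

# Synthetic check: pure Gaussian noise should give sigma near 2 and few (ideally 0)
# signal components
rng = np.random.default_rng(1)
print(mp_noise_estimate(rng.normal(scale=2.0, size=(30, 200))))
```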
Probing Inflation Using Galaxy Clustering On Ultra-Large Scales
NASA Astrophysics Data System (ADS)
Dalal, Roohi; de Putter, Roland; Dore, Olivier
2018-01-01
A detailed understanding of curvature perturbations in the universe is necessary to constrain theories of inflation. In particular, measurements of the local non-gaussianity parameter, f_NL^loc, enable us to distinguish between two broad classes of inflationary theories, single-field and multi-field inflation. While most single-field theories predict f_NL^loc ≈ -5/12 (n_s - 1), in multi-field theories f_NL^loc is not constrained to this value and is allowed to be observably large. Achieving σ(f_NL^loc) = 1 would give us discovery potential for detecting multi-field inflation, while finding f_NL^loc = 0 would rule out a good fraction of interesting multi-field models. We study the use of galaxy clustering on ultra-large scales to achieve this level of constraint on f_NL^loc. Upcoming surveys such as Euclid and LSST will give us galaxy catalogs from which we can construct the galaxy power spectrum and hence infer a value of f_NL^loc. We consider two possible methods of determining the galaxy power spectrum from a catalog of galaxy positions: the traditional Feldman-Kaiser-Peacock (FKP) power spectrum estimator, and an Optimal Quadratic Estimator (OQE). We implemented and tested each method using mock galaxy catalogs, and compared the resulting constraints on f_NL^loc. We find that the FKP estimator can measure f_NL^loc in an unbiased way, but there remains room for improvement in its precision. We also find that the OQE is not computationally fast, but remains a promising option due to its ability to isolate the power spectrum at large scales. We plan to extend this research to study alternative methods, such as pixel-based likelihood functions. We also plan to study the impact of general relativistic effects at these scales on our ability to measure f_NL^loc.
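The abstract references the FKP estimator but gives no detail; the sketch below is a much-simplified, unweighted FFT power spectrum estimate from a gridded density contrast field, shown only to illustrate the first step of such an analysis. It omits the FKP weights, survey window, and shot-noise correction, and the normalization convention is one of several in common use.

```python
import numpy as np

def power_spectrum(delta, box_size, n_bins=20):
    """Shell-averaged power spectrum P(k) of a density contrast field 'delta'
    on a cubic grid with physical side length 'box_size' (plain FFT estimator)."""
    n = delta.shape[0]
    vol = box_size ** 3
    delta_k = np.fft.fftn(delta) * (box_size / n) ** 3   # approximate continuum transform
    pk3d = np.abs(delta_k) ** 2 / vol
    # Wavenumber magnitude for each grid mode
    k = 2 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2).ravel()
    pk = pk3d.ravel()
    bins = np.linspace(kmag[kmag > 0].min(), kmag.max(), n_bins + 1)
    idx = np.digitize(kmag, bins)
    k_cen = 0.5 * (bins[1:] + bins[:-1])
    pk_binned = np.array([pk[idx == i + 1].mean() if np.any(idx == i + 1) else np.nan
                          for i in range(n_bins)])
    return k_cen, pk_binned

# Example with a white-noise field on a 32^3 grid (arbitrary units)
rng = np.random.default_rng(0)
k_cen, pk = power_spectrum(rng.normal(size=(32, 32, 32)), box_size=100.0)
```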
NASA Astrophysics Data System (ADS)
Yang, J.; Medlyn, B.; De Kauwe, M. G.; Duursma, R.
2017-12-01
Leaf Area Index (LAI) is a key variable in modelling terrestrial vegetation, because it has a major impact on carbon, water and energy fluxes. However, LAI is difficult to predict: several recent intercomparisons have shown that modelled LAI differs significantly among models, and between models and satellite-derived estimates. Empirical studies show that long-term mean LAI is strongly related to mean annual precipitation. This observation is predicted by the theory of ecohydrological equilibrium, which provides a promising alternative means to predict steady-state LAI. We implemented this theory in a simple optimisation model. We hypothesized that, when water availability is limited, plants should adjust long-term LAI and stomatal behavior (g1) to maximize net canopy carbon export, under the constraint that canopy transpiration is a fixed fraction of total precipitation. We evaluated the predicted LAI (L_opt) for Australia against ground-based observations of LAI at 135 sites, and against continental-scale satellite-derived estimates. For the site-level data, the RMSE of predicted L_opt was 0.14 m² m⁻², which was similar to the RMSE of a comparison of the data against nine-year mean satellite-derived LAI at those sites. Continentally, L_opt had an R² of over 70% when compared to satellite-derived LAI, which is comparable to the R² obtained when different satellite products are compared against each other. The predicted response of L_opt to the increase in atmospheric CO2 over the last 30 years also agreed with satellite-derived estimates. Our results indicate that long-term equilibrium LAI can be successfully predicted from a simple application of ecohydrological theory. We suggest that this theory could be usefully incorporated into terrestrial vegetation models to improve their predictions of LAI.
Examining corporate reputation judgments with generalizability theory.
Highhouse, Scott; Broadfoot, Alison; Yugo, Jennifer E; Devendorf, Shelba A
2009-05-01
The researchers used generalizability theory to examine whether reputation judgments about corporations function in a manner consistent with contemporary theory in the corporate-reputation literature. University professors (n = 86) of finance, marketing, and human resources management made repeated judgments about the general reputations of highly visible American companies. Minimal variability in the judgments is explained by items, time, persons, and field of specialization. Moreover, experts from the different specializations reveal considerable agreement in how they weigh different aspects of corporate performance in arriving at their global reputation judgments. The results generally support the theory of the reputation construct and suggest that stable estimates of global reputation can be achieved with a small number of items and experts. (c) 2009 APA, all rights reserved.
NASA Technical Reports Server (NTRS)
Hertzberg, M.
1971-01-01
Development of a combustion theory based on the laminarized solutions to the energy and flow conservation equations, which is more realistic in recognizing the nature of the heating-rate problem and in obtaining a practical means of estimating its magnitude. A new experimental approach is used for studying the combustion behavior of pure monopropellants and composite propellants, which uses a laser beam to supply additional heat feedback to a burning surface. New experimental data are presented for the laser-induced combustion rate and ignition delay of pure ammonium perchlorate. The pure monopropellant theory is generalized to include such nonadiabatic effects, and the new experimental data are in good agreement with the nonadiabatic theory.
Douglas, Karen M; Sutton, Robbie M
2008-04-01
The authors examined the perceived and actual impact of exposure to conspiracy theories surrounding the death of Diana, Princess of Wales, in 1997. One group of undergraduate students rated their agreement and their classmates' perceived agreement with several statements about Diana's death. A second group of students from the same undergraduate population read material containing popular conspiracy theories about Diana's death before rating their own and others' agreement with the same statements and perceived retrospective attitudes (i.e., what they thought their own and others' attitudes were before reading the material). Results revealed that whereas participants in the second group accurately estimated others' attitude changes, they underestimated the extent to which their own attitudes were influenced.
The Biot coefficient for a low permeability heterogeneous limestone
NASA Astrophysics Data System (ADS)
Selvadurai, A. P. S.
2018-04-01
This paper presents the experimental and theoretical developments used to estimate the Biot coefficient for the heterogeneous Cobourg Limestone, which is characterized by its very low permeability. The coefficient forms an important component of the Biot poroelastic model that is used to examine coupled hydro-mechanical and thermo-hydro-mechanical processes in the fluid-saturated Cobourg Limestone. The constraints imposed by both the heterogeneous fabric and its extremely low intact permeability (K in the range 10⁻²³ to 10⁻²⁰ m²) require the development of alternative approaches to estimate the Biot coefficient. Large specimen bench-scale triaxial tests (150 mm diameter and 300 mm long) that account for the scale of the heterogeneous fabric are complemented by results for the volume fraction-based mineralogical composition derived from XRD measurements. The compressibility of the solid phase is based on theoretical developments proposed in the mechanics of multi-phasic elastic materials. An appeal to the theory of multi-phasic elastic solids is the only feasible approach for examining the compressibility of the solid phase. The presence of a number of mineral species necessitates the use of the theories of Voigt, Reuss and Hill, along with the theories proposed by Hashin and Shtrikman, for developing bounds for the compressibility of the multi-phasic geologic material composing the skeletal fabric. The analytical estimates for the Biot coefficient for the Cobourg Limestone are compared with results for similar low permeability rocks reported in the literature.
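To make the multi-phase averaging step concrete, here is a small sketch that forms the Voigt and Reuss bounds (and their Hill average) for the solid-grain bulk modulus from mineral volume fractions, and then evaluates the Biot coefficient via α = 1 − K_dry/K_s. The mineral mix and moduli below are illustrative textbook values, not the paper's XRD-derived composition.

```python
import numpy as np

def vrh_bulk_modulus(fractions, moduli):
    """Voigt-Reuss-Hill estimate of the effective solid-grain bulk modulus
    from mineral volume fractions and mineral bulk moduli (same units)."""
    f = np.asarray(fractions, dtype=float)
    k = np.asarray(moduli, dtype=float)
    k_voigt = np.sum(f * k)            # arithmetic (upper) bound
    k_reuss = 1.0 / np.sum(f / k)      # harmonic (lower) bound
    return 0.5 * (k_voigt + k_reuss)

def biot_coefficient(k_dry, k_solid):
    """Biot coefficient alpha = 1 - K_dry / K_s."""
    return 1.0 - k_dry / k_solid

# Illustrative mineral mix (volume fractions; bulk moduli in GPa):
# calcite, dolomite, quartz
k_s = vrh_bulk_modulus([0.7, 0.2, 0.1], [76.0, 95.0, 37.0])
print(k_s, biot_coefficient(k_dry=35.0, k_solid=k_s))
```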
Semiparametric Estimation of Treatment Effect in a Pretest–Posttest Study with Missing Data
Davidian, Marie; Tsiatis, Anastasios A.; Leon, Selene
2008-01-01
The pretest–posttest study is commonplace in numerous applications. Typically, subjects are randomized to two treatments, and response is measured at baseline, prior to intervention with the randomized treatment (pretest), and at prespecified follow-up time (posttest). Interest focuses on the effect of treatments on the change between mean baseline and follow-up response. Missing posttest response for some subjects is routine, and disregarding missing cases can lead to invalid inference. Despite the popularity of this design, a consensus on an appropriate analysis when no data are missing, let alone for taking into account missing follow-up, does not exist. Under a semiparametric perspective on the pretest–posttest model, in which limited distributional assumptions on pretest or posttest response are made, we show how the theory of Robins, Rotnitzky and Zhao may be used to characterize a class of consistent treatment effect estimators and to identify the efficient estimator in the class. We then describe how the theoretical results translate into practice. The development not only shows how a unified framework for inference in this setting emerges from the Robins, Rotnitzky and Zhao theory, but also provides a review and demonstration of the key aspects of this theory in a familiar context. The results are also relevant to the problem of comparing two treatment means with adjustment for baseline covariates. PMID:19081743
Semiparametric Estimation of Treatment Effect in a Pretest-Posttest Study with Missing Data.
Davidian, Marie; Tsiatis, Anastasios A; Leon, Selene
2005-08-01
The pretest-posttest study is commonplace in numerous applications. Typically, subjects are randomized to two treatments, and response is measured at baseline, prior to intervention with the randomized treatment (pretest), and at prespecified follow-up time (posttest). Interest focuses on the effect of treatments on the change between mean baseline and follow-up response. Missing posttest response for some subjects is routine, and disregarding missing cases can lead to invalid inference. Despite the popularity of this design, a consensus on an appropriate analysis when no data are missing, let alone for taking into account missing follow-up, does not exist. Under a semiparametric perspective on the pretest-posttest model, in which limited distributional assumptions on pretest or posttest response are made, we show how the theory of Robins, Rotnitzky and Zhao may be used to characterize a class of consistent treatment effect estimators and to identify the efficient estimator in the class. We then describe how the theoretical results translate into practice. The development not only shows how a unified framework for inference in this setting emerges from the Robins, Rotnitzky and Zhao theory, but also provides a review and demonstration of the key aspects of this theory in a familiar context. The results are also relevant to the problem of comparing two treatment means with adjustment for baseline covariates.
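The efficient semiparametric estimator characterized in the paper is not reproduced here; as a much simpler illustration of handling missing posttests under a missing-at-random assumption, the sketch below computes a plain inverse-probability-weighted contrast of pre-to-post change scores, with the missingness model fit by logistic regression. All variable names and the synthetic data are for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def ipw_treatment_effect(pretest, posttest, treat, observed):
    """Inverse-probability-weighted estimate of the treatment effect on the
    pre-to-post change when some posttest responses are missing at random.
    'posttest' may contain NaN where 'observed' is 0."""
    X = np.column_stack([pretest, treat])            # covariates for the missingness model
    pi = LogisticRegression().fit(X, observed).predict_proba(X)[:, 1]
    w = observed / pi                                # IPW weights (zero where missing)
    change = np.where(observed == 1, posttest - pretest, 0.0)
    mu1 = np.sum(w * treat * change) / np.sum(w * treat)
    mu0 = np.sum(w * (1 - treat) * change) / np.sum(w * (1 - treat))
    return mu1 - mu0

# Synthetic example: true treatment effect on the change score is 0.5
rng = np.random.default_rng(0)
n = 500
treat = rng.integers(0, 2, n)
pre = rng.normal(size=n)
post = pre + 0.5 * treat + rng.normal(scale=0.5, size=n)
observed = (rng.uniform(size=n) < 1 / (1 + np.exp(-(0.5 + 0.5 * pre)))).astype(int)
post = np.where(observed == 1, post, np.nan)
print(ipw_treatment_effect(pre, post, treat, observed))
```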
Information theoretic quantification of diagnostic uncertainty.
Westover, M Brandon; Eiseman, Nathaniel A; Cash, Sydney S; Bianchi, Matt T
2012-01-01
Diagnostic test interpretation remains a challenge in clinical practice. Most physicians receive training in the use of Bayes' rule, which specifies how the sensitivity and specificity of a test for a given disease combine with the pre-test probability to quantify the change in disease probability incurred by a new test result. However, multiple studies demonstrate physicians' deficiencies in probabilistic reasoning, especially with unexpected test results. Information theory, a branch of probability theory dealing explicitly with the quantification of uncertainty, has been proposed as an alternative framework for diagnostic test interpretation, but is even less familiar to physicians. We have previously addressed one key challenge in the practical application of Bayes' theorem: the handling of uncertainty in the critical first step of estimating the pre-test probability of disease. This essay aims to present the essential concepts of information theory to physicians in an accessible manner, and to extend previous work regarding uncertainty in pre-test probability estimation by placing this type of uncertainty within a principled information theoretic framework. We address several obstacles hindering physicians' application of information theoretic concepts to diagnostic test interpretation. These include issues of terminology (mathematical meanings of certain information theoretic terms differ from clinical or common parlance) as well as the underlying mathematical assumptions. Finally, we illustrate how, in information theoretic terms, one can understand the effect on diagnostic uncertainty of considering ranges instead of simple point estimates of pre-test probability.
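To ground the two ingredients named above, here is a minimal sketch that applies Bayes' rule to a dichotomous test result and quantifies diagnostic uncertainty before and after the test as binary entropy (in bits). The sensitivity, specificity, and pre-test probability are illustrative numbers only.

```python
import numpy as np

def post_test_probability(pretest, sensitivity, specificity, positive=True):
    """Bayes' rule for a dichotomous test result."""
    if positive:
        num = sensitivity * pretest
        den = num + (1 - specificity) * (1 - pretest)
    else:
        num = (1 - sensitivity) * pretest
        den = num + specificity * (1 - pretest)
    return num / den

def binary_entropy(p):
    """Diagnostic uncertainty in bits for a disease probability p."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

pre = 0.5   # illustrative pre-test probability
post = post_test_probability(pre, sensitivity=0.90, specificity=0.85, positive=True)
print(post, binary_entropy(pre) - binary_entropy(post))  # post-test probability, uncertainty reduction (bits)
```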
A temporal basis for Weber's law in value perception.
Namboodiri, Vijay Mohan K; Mihalas, Stefan; Hussain Shuler, Marshall G
2014-01-01
Weber's law, the observation that the ability to perceive changes in magnitudes of stimuli is proportional to the magnitude, is a widely observed psychophysical phenomenon. It is also believed to underlie the perception of reward magnitudes and the passage of time. Since many ecological theories state that animals attempt to maximize reward rates, errors in the perception of reward magnitudes and delays must affect decision-making. Using an ecological theory of decision-making (TIMERR), we analyze the effect of multiple sources of noise (sensory noise, time estimation noise, and integration noise) on reward magnitude and subjective value perception. We show that the precision of reward magnitude perception is correlated with the precision of time perception and that Weber's law in time estimation can lead to Weber's law in value perception. The strength of this correlation is predicted to depend on the reward history of the animal. Subsequently, we show that sensory integration noise (either alone or in combination with time estimation noise) also leads to Weber's law in reward magnitude perception in an accumulator model, if it has balanced Poisson feedback. We then demonstrate that the noise in subjective value of a delayed reward, due to the combined effect of noise in both the perception of reward magnitude and delay, also abides by Weber's law. Thus, in our theory we prove analytically that the perception of reward magnitude, time, and subjective value change all approximately obey Weber's law.
Testing the gravitational instability hypothesis?
NASA Technical Reports Server (NTRS)
Babul, Arif; Weinberg, David H.; Dekel, Avishai; Ostriker, Jeremiah P.
1994-01-01
We challenge a widely accepted assumption of observational cosmology: that successful reconstruction of observed galaxy density fields from measured galaxy velocity fields (or vice versa), using the methods of gravitational instability theory, implies that the observed large-scale structures and large-scale flows were produced by the action of gravity. This assumption is false, in that there exist nongravitational theories that pass the reconstruction tests and gravitational theories with certain forms of biased galaxy formation that fail them. Gravitational instability theory predicts specific correlations between large-scale velocity and mass density fields, but the same correlations arise in any model where (a) structures in the galaxy distribution grow from homogeneous initial conditions in a way that satisfies the continuity equation, and (b) the present-day velocity field is irrotational and proportional to the time-averaged velocity field. We demonstrate these assertions using analytical arguments and N-body simulations. If large-scale structure is formed by gravitational instability, then the ratio of the galaxy density contrast to the divergence of the velocity field yields an estimate of the density parameter Ω (or, more generally, an estimate of β ≡ Ω^0.6/b, where b is an assumed constant of proportionality between galaxy and mass density fluctuations). In nongravitational scenarios, the values of Ω or β estimated in this way may fail to represent the true cosmological values. However, even if nongravitational forces initiate and shape the growth of structure, gravitationally induced accelerations can dominate the velocity field at late times, long after the action of any nongravitational impulses. The estimated β approaches the true value in such cases, and in our numerical simulations the estimated β values are reasonably accurate for both gravitational and nongravitational models. Reconstruction tests that show correlations between galaxy density and velocity fields can rule out some physically interesting models of large-scale structure. In particular, successful reconstructions constrain the nature of any bias between the galaxy and mass distributions, since processes that modulate the efficiency of galaxy formation on large scales in a way that violates the continuity equation also produce a mismatch between the observed galaxy density and the density inferred from the peculiar velocity field. We obtain successful reconstructions for a gravitational model with peaks biasing, but we also show examples of gravitational and nongravitational models that fail reconstruction tests because of more complicated modulations of galaxy formation.
Annual Review of Research Under the Joint Services Electronics Program.
1978-10-01
Electronic Science at Texas Tech University. Specific topics covered include fault analysis, stochastic control and estimation, nonlinear control, multidimensional system theory, optical noise, and pattern recognition.
Cross-bispectrum computation and variance estimation
NASA Technical Reports Server (NTRS)
Lii, K. S.; Helland, K. N.
1981-01-01
A method for the estimation of cross-bispectra of discrete real time series is developed. The asymptotic variance properties of the bispectrum are reviewed, and a method for the direct estimation of bispectral variance is given. The symmetry properties are described which minimize the computations necessary to obtain a complete estimate of the cross-bispectrum in the right-half-plane. A procedure is given for computing the cross-bispectrum by subdividing the domain into rectangular averaging regions which help reduce the variance of the estimates and allow easy application of the symmetry relationships to minimize the computational effort. As an example of the procedure, the cross-bispectrum of a numerically generated, exponentially distributed time series is computed and compared with theory.
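The abstract describes the procedure only at a high level; the sketch below is a bare-bones, segment-averaged direct estimate of the cross-bispectrum B_xyz(f1, f2) = E[X(f1) Y(f2) Z*(f1 + f2)], without the symmetry-region bookkeeping, rectangular-region averaging, or variance estimation discussed in the paper (and conjugation/sign conventions vary across the literature).

```python
import numpy as np

def cross_bispectrum(x, y, z, seg_len=256):
    """Direct segment-averaged estimate of the cross-bispectrum
    B_xyz(f1, f2) = E[X(f1) Y(f2) conj(Z(f1 + f2))] for real time series."""
    n_seg = len(x) // seg_len
    nf = seg_len // 2 + 1                      # one-sided frequency count
    B = np.zeros((nf, nf), dtype=complex)
    for s in range(n_seg):
        sl = slice(s * seg_len, (s + 1) * seg_len)
        X = np.fft.fft(x[sl])
        Y = np.fft.fft(y[sl])
        Z = np.fft.fft(z[sl])
        for i in range(nf):
            for j in range(nf):
                B[i, j] += X[i] * Y[j] * np.conj(Z[(i + j) % seg_len])
    return B / n_seg

# Quadratically coupled test signal: z contains products of x and y
rng = np.random.default_rng(0)
x = rng.standard_normal(4096)
y = rng.standard_normal(4096)
z = x * y + 0.1 * rng.standard_normal(4096)
B = cross_bispectrum(x, y, z)
```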