USDA-ARS's Scientific Manuscript database
Alternatives to the in situ method for estimating rumen-degradable protein (RDP) in diverse forage legumes should be validated. In this study, RDP in roll-conditioned or macerated silages and hays of Medicago, Lotus, and Trifolium species with differing polyphenol compositions was estimated from in...
Lee, Hyunyeol; Jeong, Woo Chul; Kim, Hyung Joong; Woo, Eung Je; Park, Jaeseok
2016-05-01
To develop a novel, current-controlled alternating steady-state free precession (SSFP)-based conductivity imaging method and corresponding MR signal models to estimate current-induced magnetic flux density (Bz) and conductivity distribution. In the proposed method, an SSFP pulse sequence, which is in sync with alternating current pulses, produces dual oscillating steady states while yielding a nonlinear relation between signal phase and Bz. A ratiometric signal model between the states was analytically derived using the Bloch equation, wherein Bz was estimated by solving a nonlinear inverse problem for conductivity estimation. A theoretical analysis of the signal-to-noise ratio of Bz was given. Numerical and experimental studies were performed using SSFP-FID and SSFP-ECHO with current pulses positioned either before or after signal encoding to investigate the feasibility of the proposed method in conductivity estimation. Given all SSFP variants herein, SSFP-FID with alternating current pulses applied before signal encoding exhibits the highest Bz signal-to-noise ratio and conductivity contrast. Additionally, compared with conventional conductivity imaging, the proposed method benefits from rapid SSFP acquisition without apparent loss of conductivity contrast. We successfully demonstrated the feasibility of the proposed method in estimating current-induced Bz and conductivity distribution. It can be a promising, rapid imaging strategy for quantitative conductivity imaging. © 2015 Wiley Periodicals, Inc.
The friction cost method: a comment.
Johannesson, M; Karlsson, G
1997-04-01
The friction cost method has been proposed as an alternative to the human-capital approach of estimating indirect costs. We argue that the friction cost method is based on implausible assumptions not supported by neoclassical economic theory. Furthermore, consistently applying the friction cost method would mean that it should also be applied in the estimation of direct costs, which would substantially decrease the estimated costs of health care programmes. It is concluded that the friction cost method does not seem to be a useful alternative to the human-capital approach in the estimation of indirect costs.
Method and apparatus for detecting cyber attacks on an alternating current power grid
DOE Office of Scientific and Technical Information (OSTI.GOV)
McEachern, Alexander; Hofmann, Ronald
A method and apparatus for detecting cyber attacks on remotely-operable elements of an alternating current distribution grid. Two state estimates of the distribution grid are prepared, one of which uses micro-synchrophasors. A difference between the two state estimates indicates a possible cyber attack.
Estimating the Economic Impacts of Recreation Response to Resource Management Alternatives
Donald B.K. English; J. Michael Bowker; John C. Bergstrom; H. Ken Cordell
1995-01-01
Managing forest resources involves tradeoffs and making decisions among resource management alternatives. Some alternatives will lead to changes in the level of recreation visitation and the amount of associated visitor spending. Thus, the alternatives can affect local economies. This paper reports a method that can be used to estimate the economic impacts of such...
Alternative Strategies for Pricing Home Work Time.
ERIC Educational Resources Information Center
Zick, Cathleen D.; Bryant, W. Keith
1983-01-01
Discusses techniques for measuring the value of home work time. Estimates obtained using the reservation wage technique are contrasted with market alternative estimates derived with the same data set. Findings suggest that the market alternative cost method understates the true value of a woman's home time to the household. (JOW)
Dai, Huanping; Micheyl, Christophe
2012-11-01
Psychophysical "reverse-correlation" methods allow researchers to gain insight into the perceptual representations and decision weighting strategies of individual subjects in perceptual tasks. Although these methods have gained momentum, until recently their development was limited to experiments involving only two response categories. Recently, two approaches for estimating decision weights in m-alternative experiments have been put forward. One approach extends the two-category correlation method to m > 2 alternatives; the second uses multinomial logistic regression (MLR). In this article, the relative merits of the two methods are discussed, and the issues of convergence and statistical efficiency of the methods are evaluated quantitatively using Monte Carlo simulations. The results indicate that, for a range of values of the number of trials, the estimated weighting patterns are closer to their asymptotic values for the correlation method than for the MLR method. Moreover, for the MLR method, weight estimates for different stimulus components can exhibit strong correlations, making the analysis and interpretation of measured weighting patterns less straightforward than for the correlation method. These and other advantages of the correlation method, which include computational simplicity and a close relationship to other well-established psychophysical reverse-correlation methods, make it an attractive tool to uncover decision strategies in m-alternative experiments.
Incorporating Alternative Care Site Characteristics Into Estimates of Substitutable ED Visits.
Trueger, Nathan Seth; Chua, Kao-Ping; Hussain, Aamir; Liferidge, Aisha T; Pitts, Stephen R; Pines, Jesse M
2017-07-01
Several recent efforts to improve health care value have focused on reducing emergency department (ED) visits that potentially could be treated in alternative care sites (ie, primary care offices, retail clinics, and urgent care centers). Estimates of the number of these visits may depend on assumptions regarding the operating hours and functional capabilities of alternative care sites. However, methods to account for the variability in these characteristics have not been developed. To develop methods to incorporate the variability in alternative care site characteristics into estimates of ED visit "substitutability." Our approach uses the range of hours and capabilities among alternative care sites to estimate lower and upper bounds of ED visit substitutability. We constructed "basic" and "extended" criteria that captured the plausible degree of variation in each site's hours and capabilities. To illustrate our approach, we analyzed data from 22,697 ED visits by adults in the 2011 National Hospital Ambulatory Medical Care Survey, defining a visit as substitutable if it was treat-and-release and met both the operating hours and functional capabilities criteria. Use of the combined basic hours/basic capabilities criteria and extended hours/extended capabilities generated lower and upper bounds of estimates. Our criteria classified 5.5%-27.1%, 7.6%-20.4%, and 10.6%-46.0% of visits as substitutable in primary care offices, retail clinics, and urgent care centers, respectively. Alternative care sites vary widely in operating hours and functional capabilities. Methods such as ours may help incorporate this variability into estimates of ED visit substitutability.
Counterinsurgency Aircraft Procurement Options: Processes, Methods, Alternatives, and Estimates
2009-08-01
A call is being made for an aircraft dedicated to the counterinsurgency (COIN) mission within military academic circles and the special operations community. Support for a COIN aircraft needs hard numbers, given...
Lee, Hyunyeol; Sohn, Chul-Ho; Park, Jaeseok
2017-07-01
To develop a current-induced, alternating reversed dual-echo-steady-state-based magnetic resonance electrical impedance tomography for joint estimation of tissue relaxation and electrical properties. The proposed method reverses the readout gradient configuration of the conventional sequence, in which steady-state-free-precession (SSFP)-ECHO is produced earlier than SSFP-free-induction-decay (FID), while alternating current pulses are applied in between the two SSFPs to secure high sensitivity of SSFP-FID to injection current. Additionally, alternating reversed dual-echo-steady-state signals are modulated by employing variable flip angles over two orthogonal injections of current pulses. Ratiometric signal models are analytically constructed, from which T1, T2, and current-induced Bz are jointly estimated by solving a nonlinear inverse problem for conductivity reconstruction. Numerical simulations and experimental studies are performed to investigate the feasibility of the proposed method in estimating relaxation parameters and conductivity. Compared with conventional magnetic resonance electrical impedance tomography, the proposed method enables rapid data acquisition and simultaneous estimation of T1, T2, and current-induced Bz, yielding a comparable level of signal-to-noise ratio in the parameter estimates while retaining a relative conductivity contrast. We successfully demonstrated the feasibility of the proposed method in jointly estimating tissue relaxation parameters as well as conductivity distributions. It can be a promising, rapid imaging strategy for quantitative conductivity estimation. Magn Reson Med 78:107-120, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
Westreich, Daniel; Lessler, Justin; Funk, Michele Jonsson
2010-01-01
Summary Objective Propensity scores for the analysis of observational data are typically estimated using logistic regression. Our objective in this Review was to assess machine learning alternatives to logistic regression which may accomplish the same goals but with fewer assumptions or greater accuracy. Study Design and Setting We identified alternative methods for propensity score estimation and/or classification from the public health, biostatistics, discrete mathematics, and computer science literature, and evaluated these algorithms for applicability to the problem of propensity score estimation, potential advantages over logistic regression, and ease of use. Results We identified four techniques as alternatives to logistic regression: neural networks, support vector machines, decision trees (CART), and meta-classifiers (in particular, boosting). Conclusion While the assumptions of logistic regression are well understood, those assumptions are frequently ignored. All four alternatives have advantages and disadvantages compared with logistic regression. Boosting (meta-classifiers) and to a lesser extent decision trees (particularly CART) appear to be most promising for use in the context of propensity score analysis, but extensive simulation studies are needed to establish their utility in practice. PMID:20630332
Consistency of Rasch Model Parameter Estimation: A Simulation Study.
ERIC Educational Resources Information Center
van den Wollenberg, Arnold L.; And Others
1988-01-01
The unconditional--simultaneous--maximum likelihood (UML) estimation procedure for the one-parameter logistic model produces biased estimators. The UML method is inconsistent and is not a good alternative to the conditional maximum likelihood method, at least with small numbers of items. The minimum chi-square estimation procedure produces unbiased…
Methods for the evaluation of alternative disaster warning systems. Executive summary
NASA Technical Reports Server (NTRS)
Agnew, C. E.; Anderson, R. J., Jr.; Lanen, W. N.
1977-01-01
Methods for estimating the economic costs and benefits of the transmission-reception and reception-action segments of a disaster warning system (DWS) are described. Methods were identified for the evaluation of the transmission and reception portions of alternative disaster warning systems. Example analyses using the methods identified were performed.
An Empirical Comparison of Heterogeneity Variance Estimators in 12,894 Meta-Analyses
ERIC Educational Resources Information Center
Langan, Dean; Higgins, Julian P. T.; Simmonds, Mark
2015-01-01
Heterogeneity in meta-analysis is most commonly estimated using a moment-based approach described by DerSimonian and Laird. However, this method has been shown to produce biased estimates. Alternative methods to estimate heterogeneity include the restricted maximum likelihood approach and those proposed by Paule and Mandel, Sidik and Jonkman, and…
On using sample selection methods in estimating the price elasticity of firms' demand for insurance.
Marquis, M Susan; Louis, Thomas A
2002-01-01
We evaluate a technique based on sample selection models that has been used by health economists to estimate the price elasticity of firms' demand for insurance. We demonstrate that this technique produces inflated estimates of the price elasticity. We show that alternative methods lead to valid estimates.
ERIC Educational Resources Information Center
Retherford, Robert D.; Alam, Iqbal
Fertility trends estimated alternatively from birth histories and from the own-children method are compared for eight developing countries in which the World Fertility Survey was conducted. The principal hypotheses are that fertility trends estimated by the two approaches suffer from similar errors in the reporting of women's and children's ages, and that these…
Alternative Methods for Handling Attrition
Foster, E. Michael; Fang, Grace Y.
2009-01-01
Using data from the evaluation of the Fast Track intervention, this article illustrates three methods for handling attrition. Multiple imputation and ignorable maximum likelihood estimation produce estimates that are similar to those based on listwise-deleted data. A panel selection model that allows for selective dropout reveals that highly aggressive boys accumulate in the treatment group over time and produces a larger estimate of treatment effect. In contrast, this model produces a smaller treatment effect for girls. The article's conclusion discusses the strengths and weaknesses of the alternative approaches and outlines ways in which researchers might improve their handling of attrition. PMID:15358906
Strain Rate Tensor Estimation in Cine Cardiac MRI Based on Elastic Image Registration
NASA Astrophysics Data System (ADS)
Sánchez-Ferrero, Gonzalo Vegas; Vega, Antonio Tristán; Grande, Lucilio Cordero; de La Higuera, Pablo Casaseca; Fernández, Santiago Aja; Fernández, Marcos Martín; López, Carlos Alberola
In this work we propose an alternative method to estimate and visualize the Strain Rate Tensor (SRT) in Magnetic Resonance Images (MRI) when Phase Contrast MRI (PCMRI) and Tagged MRI (TMRI) are not available. This alternative is based on image processing techniques. Concretely, image registration algorithms are used to estimate the movement of the myocardium at each point. Additionally, a consistency checking method is presented to validate the accuracy of the estimates when no golden standard is available. Results prove that the consistency checking method provides an upper bound of the mean squared error of the estimate. Our experiments with real data show that the registration algorithm provides a useful deformation field to estimate the SRT fields. A classification between regional normal and dysfunctional contraction patterns, as compared with experts diagnosis, points out that the parameters extracted from the estimated SRT can represent these patterns. Additionally, a scheme for visualizing and analyzing the local behavior of the SRT field is presented.
Use of three-point taper systems in timber cruising
James W. Flewelling; Richard L. Ernst; Lawrence M. Raynes
2000-01-01
Tree volumes and profiles are often estimated as functions of total height and DBH. Alternative estimators include form-class methods, importance sampling, the centroid method, and multi-point profile (taper) estimation systems; all of these require some measurement or estimate of upper stem diameters. The multi-point profile system discussed here allows for upper stem...
Estimating willingness to accept using paired comparison choice experiments: tests of robustness
David C. Kingsley; Thomas C. Brown
2013-01-01
Paired comparison (PC) choice experiments offer researchers and policy-makers an alternative nonmarket valuation method particularly apt when a ranking of the public's priorities across policy alternatives is paramount. Similar to contingent valuation, PC choice experiments estimate the total value associated with a specific environmental good or service. Similar...
ERIC Educational Resources Information Center
Burt, Martha R.
This report presents the results of a federally mandated study done to determine the best means of identifying, locating, and counting homeless children and youth, for the purpose of facilitating their successful participation in school and other educational activities. Several alternative approaches to obtaining consistent national estimates of…
Methods to estimate the between‐study variance and its uncertainty in meta‐analysis†
Jackson, Dan; Viechtbauer, Wolfgang; Bender, Ralf; Bowden, Jack; Knapp, Guido; Kuss, Oliver; Higgins, Julian PT; Langan, Dean; Salanti, Georgia
2015-01-01
Meta‐analyses are typically used to estimate the overall/mean of an outcome of interest. However, inference about between‐study variability, which is typically modelled using a between‐study variance parameter, is usually an additional aim. The DerSimonian and Laird method, currently widely used by default to estimate the between‐study variance, has been long challenged. Our aim is to identify known methods for estimation of the between‐study variance and its corresponding uncertainty, and to summarise the simulation and empirical evidence that compares them. We identified 16 estimators for the between‐study variance, seven methods to calculate confidence intervals, and several comparative studies. Simulation studies suggest that for both dichotomous and continuous data the estimator proposed by Paule and Mandel and for continuous data the restricted maximum likelihood estimator are better alternatives to estimate the between‐study variance. Based on the scenarios and results presented in the published studies, we recommend the Q‐profile method and the alternative approach based on a ‘generalised Cochran between‐study variance statistic’ to compute corresponding confidence intervals around the resulting estimates. Our recommendations are based on a qualitative evaluation of the existing literature and expert consensus. Evidence‐based recommendations require an extensive simulation study where all methods would be compared under the same scenarios. © 2015 The Authors. Research Synthesis Methods published by John Wiley & Sons Ltd. PMID:26332144
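As an illustration of two of the estimators named in this abstract, the following is a minimal sketch, assuming study effect estimates y_i (e.g., log odds ratios) with known within-study variances v_i; it shows the moment-based DerSimonian-Laird estimator and the Paule-Mandel estimator solved by bisection, not the full set of 16 estimators or the confidence-interval methods reviewed in the paper.

```python
import numpy as np

def dersimonian_laird(y, v):
    """Moment-based (DerSimonian-Laird) estimate of the between-study variance."""
    w = 1.0 / v
    mu_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - mu_fixed) ** 2)
    k = len(y)
    denom = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    return max(0.0, (q - (k - 1)) / denom)

def paule_mandel(y, v, hi=None, tol=1e-10):
    """Paule-Mandel estimate: the tau2 at which the generalized Q statistic equals k-1."""
    k = len(y)
    def gen_q(tau2):
        w = 1.0 / (v + tau2)
        mu = np.sum(w * y) / np.sum(w)
        return np.sum(w * (y - mu) ** 2)
    if gen_q(0.0) <= k - 1:              # no heterogeneity beyond sampling error
        return 0.0
    lo = 0.0
    hi = hi if hi is not None else 100.0 * np.var(y) + 1.0
    for _ in range(200):                 # bisection: gen_q is decreasing in tau2
        mid = 0.5 * (lo + hi)
        if gen_q(mid) > k - 1:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Hypothetical example
y = np.array([0.12, 0.35, -0.08, 0.27, 0.18])
v = np.array([0.04, 0.09, 0.05, 0.02, 0.07])
print(dersimonian_laird(y, v), paule_mandel(y, v))
```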
On the rate of convergence of the alternating projection method in finite dimensional spaces
NASA Astrophysics Data System (ADS)
Galántai, A.
2005-10-01
Using the results of Smith, Solmon, and Wagner [K. Smith, D. Solmon, S. Wagner, Practical and mathematical aspects of the problem of reconstructing objects from radiographs, Bull. Amer. Math. Soc. 83 (1977) 1227-1270] and Nelson and Neumann [S. Nelson, M. Neumann, Generalizations of the projection method with application to SOR theory for Hermitian positive semidefinite linear systems, Numer. Math. 51 (1987) 123-141] we derive new estimates for the speed of the alternating projection method and its relaxed version in R^n. These estimates can be computed in at most O(m^3) arithmetic operations, unlike the estimates in the papers mentioned above, which require spectral information. The new and old estimates are equivalent in many practical cases. In cases when the new estimates are weaker, numerical testing indicates that they approximate the original bounds in the papers mentioned above quite well.
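For readers unfamiliar with the method itself, here is a small illustrative sketch (not the paper's new estimates): alternating projections between two generic subspaces of R^5 with trivial intersection, where the empirical linear rate can be compared with the classical spectral bound, the squared cosine of the smallest principal angle.

```python
import numpy as np

rng = np.random.default_rng(0)

def orth(basis):
    q, _ = np.linalg.qr(basis)
    return q

# Two generic 2-dimensional subspaces of R^5; they intersect only at the origin,
# so the alternating projection iterates converge to 0 at a linear rate.
QA = orth(rng.standard_normal((5, 2)))
QB = orth(rng.standard_normal((5, 2)))
PA, PB = QA @ QA.T, QB @ QB.T

# Cosine of the smallest principal angle between the subspaces
cos_theta = np.linalg.svd(QA.T @ QB, compute_uv=False).max()

x = rng.standard_normal(5)
norms = [np.linalg.norm(x)]
for _ in range(30):
    x = PA @ (PB @ x)                    # one sweep of the alternating projection method
    norms.append(np.linalg.norm(x))

empirical_rate = np.median([norms[i + 1] / norms[i] for i in range(5, 29)])
print(f"empirical rate {empirical_rate:.4f} vs spectral bound cos^2(theta) = {cos_theta**2:.4f}")
```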
Crown, William H
2014-02-01
This paper examines the use of propensity score matching in economic analyses of observational data. Several excellent papers have previously reviewed practical aspects of propensity score estimation and other aspects of the propensity score literature. The purpose of this paper is to compare the conceptual foundation of propensity score models with alternative estimators of treatment effects. References are provided to empirical comparisons among methods that have appeared in the literature. These comparisons are available for a subset of the methods considered in this paper. However, in some cases, no pairwise comparisons of particular methods are yet available, and there are no examples of comparisons across all of the methods surveyed here. Irrespective of the availability of empirical comparisons, the goal of this paper is to provide some intuition about the relative merits of alternative estimators in health economic evaluations where nonlinearity, sample size, availability of pre/post data, heterogeneity, and missing variables can have important implications for choice of methodology. Also considered is the potential combination of propensity score matching with alternative methods such as differences-in-differences and decomposition methods that have not yet appeared in the empirical literature.
Detecting isotopic ratio outliers
NASA Astrophysics Data System (ADS)
Bayne, C. K.; Smith, D. H.
An alternative method is proposed for improving isotopic ratio estimates. This method mathematically models pulse-count data and uses iterative reweighted Poisson regression to estimate model parameters to calculate the isotopic ratios. This computer-oriented approach provides theoretically better methods than conventional techniques to establish error limits and to identify outliers.
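The abstract does not specify the model, so the following is only a hedged sketch of the general idea: pulse counts for two isotopes are modeled with a Poisson regression (fitted internally by iteratively reweighted least squares in statsmodels), and the isotopic ratio is recovered from a model coefficient. The log-linear drift term and all variable names are illustrative assumptions; the outlier-detection and reweighting steps of the proposed method are not reproduced.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical pulse-count data for two isotopes measured over repeated cycles
rng = np.random.default_rng(1)
cycles = np.arange(40)
true_ratio = 0.0072
counts_major = rng.poisson(2.0e5 * np.exp(-0.002 * cycles))            # slowly decaying ion beam
counts_minor = rng.poisson(true_ratio * 2.0e5 * np.exp(-0.002 * cycles))

# One Poisson GLM with a log link for both isotopes:
# log E[count] = b0 + b1*cycle + b2*is_minor, so the ratio is exp(b2)
y = np.concatenate([counts_major, counts_minor])
is_minor = np.concatenate([np.zeros_like(cycles), np.ones_like(cycles)])
X = sm.add_constant(np.column_stack([np.tile(cycles, 2), is_minor]))

fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()  # fitted by IRLS
print("estimated isotopic ratio:", np.exp(fit.params[2]))
```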
Density estimation in wildlife surveys
Jonathan Bart; Sam Droege; Paul Geissler; Bruce Peterjohn; C. John Ralph
2004-01-01
Several authors have recently discussed the problems with using index methods to estimate trends in population size. Some have expressed the view that index methods should virtually never be used. Others have responded by defending index methods and questioning whether better alternatives exist. We suggest that index methods are often a cost-effective component of valid...
Are Structural Estimates of Auction Models Reasonable? Evidence from Experimental Data
ERIC Educational Resources Information Center
Bajari, Patrick; Hortacsu, Ali
2005-01-01
Recently, economists have developed methods for structural estimation of auction models. Many researchers object to these methods because they find the strict rationality assumptions to be implausible. Using bid data from first-price auction experiments, we estimate four alternative structural models: (1) risk-neutral Bayes-Nash, (2) risk-averse…
An alternative empirical likelihood method in missing response problems and causal inference.
Ren, Kaili; Drummond, Christopher A; Brewster, Pamela S; Haller, Steven T; Tian, Jiang; Cooper, Christopher J; Zhang, Biao
2016-11-30
Missing responses are a common problem in medical, social, and economic studies. When responses are missing at random, a complete-case data analysis may result in biases. A popular debiasing method is the inverse probability weighting proposed by Horvitz and Thompson. To improve efficiency, Robins et al. proposed an augmented inverse probability weighting method. The augmented inverse probability weighting estimator has a double-robustness property and achieves the semiparametric efficiency lower bound when the regression model and propensity score model are both correctly specified. In this paper, we introduce an empirical likelihood-based estimator as an alternative to Qin and Zhang (2007). Our proposed estimator is also doubly robust and locally efficient. Simulation results show that the proposed estimator has better performance when the propensity score is correctly modeled. Moreover, the proposed method can be applied in the estimation of the average treatment effect in observational causal inference. Finally, we apply our method to an observational study of smoking, using data from the Cardiovascular Outcomes in Renal Atherosclerotic Lesions clinical trial. Copyright © 2016 John Wiley & Sons, Ltd.
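The empirical-likelihood estimator of the paper is not reproduced here, but a minimal sketch of the augmented inverse probability weighting (AIPW) estimator it builds on may help: a logistic propensity model and arm-specific outcome regressions are combined so that the average treatment effect estimate is consistent if either model is correct. The use of statsmodels and all variable names are assumptions for illustration.

```python
import numpy as np
import statsmodels.api as sm

def aipw_ate(y, t, X):
    """Augmented inverse probability weighting estimate of the average treatment effect.
    y: outcome, t: binary treatment indicator (0/1), X: covariate matrix (no constant)."""
    Xc = sm.add_constant(X)
    ps = sm.Logit(t, Xc).fit(disp=0).predict(Xc)            # propensity score model
    m1 = sm.OLS(y[t == 1], Xc[t == 1]).fit().predict(Xc)    # outcome model, treated arm
    m0 = sm.OLS(y[t == 0], Xc[t == 0]).fit().predict(Xc)    # outcome model, control arm
    # Doubly robust combination of weighting and regression
    mu1 = np.mean(t * (y - m1) / ps + m1)
    mu0 = np.mean((1 - t) * (y - m0) / (1 - ps) + m0)
    return mu1 - mu0

# Hypothetical data with a true effect of 1.0
rng = np.random.default_rng(2)
n = 2000
X = rng.standard_normal((n, 2))
ps_true = 1 / (1 + np.exp(-(0.5 * X[:, 0] - 0.25 * X[:, 1])))
t = rng.binomial(1, ps_true)
y = 1.0 * t + X[:, 0] + rng.standard_normal(n)
print(aipw_ate(y, t, X))
```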
Measuring landscape esthetics: the scenic beauty estimation method
Terry C. Daniel; Ron S. Boster
1976-01-01
The Scenic Beauty Estimation Method (SBE) provides quantitative measures of esthetic preferences for alternative wildland management systems. Extensive experimentation and testing with user, interest, and professional groups validated the method. SBE shows promise as an efficient and objective means for assessing the scenic beauty of public forests and wildlands, and...
Meta-analysis of Odds Ratios: Current Good Practices
Chang, Bei-Hung; Hoaglin, David C.
2016-01-01
Background Many systematic reviews of randomized clinical trials lead to meta-analyses of odds ratios. The customary methods of estimating an overall odds ratio involve weighted averages of the individual trials’ estimates of the logarithm of the odds ratio. That approach, however, has several shortcomings, arising from assumptions and approximations, that render the results unreliable. Although the problems have been documented in the literature for many years, the conventional methods persist in software and applications. A well-developed alternative approach avoids the approximations by working directly with the numbers of subjects and events in the arms of the individual trials. Objective We aim to raise awareness of methods that avoid the conventional approximations, can be applied with widely available software, and produce more-reliable results. Methods We summarize the fixed-effect and random-effects approaches to meta-analysis; describe conventional, approximate methods and alternative methods; apply the methods in a meta-analysis of 19 randomized trials of endoscopic sclerotherapy in patients with cirrhosis and esophagogastric varices; and compare the results. We demonstrate the use of SAS, Stata, and R software for the analysis. Results In the example, point estimates and confidence intervals for the overall log-odds-ratio differ between the conventional and alternative methods, in ways that can affect inferences. Programming is straightforward in the three software packages; an appendix gives the details. Conclusions The modest additional programming required should not be an obstacle to adoption of the alternative methods. Because their results are unreliable, use of the conventional methods for meta-analysis of odds ratios should be discontinued. PMID:28169977
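To make the contrast concrete, the sketch below shows the conventional inverse-variance pooling of per-trial log odds ratios that the paper criticizes, followed by one "direct counts" alternative: a binomial GLM with trial fixed effects, which avoids the per-trial normal approximation. The counts are hypothetical, and this GLM is an illustration rather than necessarily the specific alternative recommended by the authors.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical 2x2 counts for k trials: events/total in treatment and control arms
et = np.array([12, 5, 30, 8]);  nt = np.array([100, 60, 250, 90])
ec = np.array([20, 9, 41, 15]); nc = np.array([102, 58, 255, 88])

# Conventional approach: inverse-variance weighted average of per-trial log odds ratios
log_or = np.log((et * (nc - ec)) / (ec * (nt - et)))
var = 1/et + 1/(nt - et) + 1/ec + 1/(nc - ec)
w = 1 / var
pooled = np.sum(w * log_or) / np.sum(w)
se = np.sqrt(1 / np.sum(w))
print("conventional pooled log OR:", pooled, "+/-", 1.96 * se)

# A direct-counts alternative: binomial GLM on the arm-level counts with trial fixed effects
k = len(et)
arm = np.r_[np.ones(k), np.zeros(k)]                     # 1 = treatment, 0 = control
trial = np.r_[np.arange(k), np.arange(k)]
X = np.column_stack([arm] + [(trial == j).astype(float) for j in range(k)])
endog = np.column_stack([np.r_[et, ec], np.r_[nt - et, nc - ec]])   # (events, non-events)
fit = sm.GLM(endog, X, family=sm.families.Binomial()).fit()
print("fixed-effect log OR from binomial GLM:", fit.params[0])
```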
Mendez, Michelle A.; Popkin, Barry M.; Buckland, Genevieve; Schroder, Helmut; Amiano, Pilar; Barricarte, Aurelio; Huerta, José-María; Quirós, José R.; Sánchez, María-José; González, Carlos A
2011-01-01
Misreporting characterized by the reporting of implausible energy intakes may undermine the valid estimation of diet-disease relations, but the methods to best identify and account for misreporting are unknown. The present study compared how alternate approaches affected associations between selected dietary factors and body mass index (BMI) by using data from the European Prospective Investigation Into Cancer and Nutrition-Spain. A total of 24,332 women and 15,061 men 29–65 years of age recruited from 1992 to 1996 for whom measured height and weight and validated diet history data were available were included. Misreporters were identified on the basis of disparities between reported energy intakes and estimated requirements calculated using the original Goldberg method and 2 alternatives: one that substituted basal metabolic rate equations that are more valid at higher BMIs and another that used doubly labeled water-predicted total energy expenditure equations. Compared with results obtained using the original method, underreporting was considerably lower and overreporting higher with alternative methods, which were highly concordant. Accounting for misreporters with all methods yielded diet-BMI relations that were more consistent with expectations; alternative methods often strengthened associations. For example, among women, multivariable-adjusted differences in BMI for the highest versus lowest vegetable intake tertile (β = 0.37 (standard error, 0.07)) were neutral after adjusting with the original method (β = 0.01 (standard error, 0.07)) and negative using the predicted total energy expenditure method with stringent cutoffs (β = −0.15 (standard error, 0.07)). Alternative methods may yield more valid associations between diet and obesity-related outcomes. PMID:21242302
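A minimal sketch of the Goldberg-type flagging logic referred to above: reporters are classified from the ratio of reported energy intake to an estimated basal metabolic rate. The cutoffs and BMR values in this sketch are placeholder assumptions; actual Goldberg cutoffs depend on study design (days of assessment, within-person variation, assumed activity level) and on the BMR prediction equation chosen.

```python
import numpy as np

def flag_misreporters(reported_ei_kcal, bmr_kcal, lower=1.1, upper=2.4):
    """Classify reporters from the ratio of reported energy intake to estimated BMR.
    The cutoffs here are placeholders, not the study's values."""
    ratio = np.asarray(reported_ei_kcal, float) / np.asarray(bmr_kcal, float)
    status = np.where(ratio < lower, "under-reporter",
             np.where(ratio > upper, "over-reporter", "plausible"))
    return ratio, status

# Hypothetical subjects: reported intakes and BMRs from any chosen prediction equation
ratio, status = flag_misreporters([1200, 2100, 3900], [1450, 1500, 1400])
print(list(zip(ratio.round(2), status)))
```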
Optimizing efficiency of height modeling for extensive forest inventories.
T.M. Barrett
2006-01-01
Although critical to monitoring forest ecosystems, inventories are expensive. This paper presents a generalizable method for using an integer programming model to examine tradeoffs between cost and estimation error for alternative measurement strategies in forest inventories. The method is applied to an example problem of choosing alternative height-modeling strategies...
NASA Astrophysics Data System (ADS)
Gillam, Thomas P. S.; Lester, Christopher G.
2014-11-01
We consider current and alternative approaches to setting limits on new physics signals having backgrounds from misidentified objects; for example jets misidentified as leptons, b-jets or photons. Many ATLAS and CMS analyses have used a heuristic "matrix method" for estimating the background contribution from such sources. We demonstrate that the matrix method suffers from statistical shortcomings that can adversely affect its ability to set robust limits. A rigorous alternative method is discussed, and is seen to produce fake rate estimates and limits with better qualities, but is found to be too costly to use. Having investigated the nature of the approximations used to derive the matrix method, we propose a third strategy that is seen to marry the speed of the matrix method to the performance and physicality of the more rigorous approach.
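For orientation, here is a hedged sketch of the single-object matrix method that the abstract critiques, assuming a real-object efficiency r and a fake rate f measured in control regions; ATLAS and CMS analyses typically use multi-object generalizations, and the numbers below are invented.

```python
def matrix_method_fakes(n_loose, n_tight, r, f):
    """Single-lepton matrix method: estimate the fake contribution in the tight sample.
    n_loose: events passing the loose selection (tight is a subset of loose)
    n_tight: events passing the tight selection
    r, f: probabilities for a real / fake loose object to also pass the tight selection."""
    # Solve  n_loose = N_real + N_fake,  n_tight = r*N_real + f*N_fake  for N_fake
    n_fake_loose = (r * n_loose - n_tight) / (r - f)
    return f * n_fake_loose            # expected fake events in the tight selection

# Hypothetical numbers: 1000 loose events, 620 tight, r = 0.90, f = 0.15
print(matrix_method_fakes(1000, 620, 0.90, 0.15))
```

The statistical shortcomings discussed in the paper arise in part because this algebraic inversion can return negative or highly variable yields when r - f is small or counts fluctuate.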
Methods for Cloud Cover Estimation
NASA Technical Reports Server (NTRS)
Glackin, D. L.; Huning, J. R.; Smith, J. H.; Logan, T. L.
1984-01-01
Several methods for cloud cover estimation relevant to assessing the performance of a ground-based network of solar observatories are described. The methods rely on ground and satellite data sources and provide meteorological or climatological information. One means of acquiring long-term observations of solar oscillations is the establishment of a ground-based network of solar observatories. Criteria for station site selection are: gross cloudiness, accurate transparency information, and seeing. Alternative methods for computing the network's duty cycle (the fraction of time during which solar observations are possible) are discussed. The cycle, or alternatively a time history of solar visibility from the network, can then be input to a model to determine the effect of duty cycle on derived solar seismology parameters. Cloudiness from space is studied to examine various means by which the duty cycle might be computed. Cloudiness, and to some extent transparency, can potentially be estimated from satellite data.
Alternative prediction methods of protein and energy evaluation of pig feeds.
Święch, Ewa
2017-01-01
Precise knowledge of the actual nutritional value of individual feedstuffs and complete diets for pigs is important for efficient livestock production. Methods of assessment of protein and energy values in pig feeds are briefly described. In vivo determination of the protein and energy values of feeds in pigs is time-consuming, expensive, and very often requires the use of surgically modified animals. There is a need for simpler, more rapid, inexpensive, and reproducible methods for routine feed evaluation. Protein and energy values of pig feeds can be estimated using the following alternative methods: 1) prediction equations based on chemical composition; 2) animal models, such as rats, cockerels, and growing pigs, as substitutes for adult animals; 3) rapid methods, such as the mobile nylon bag technique and in vitro methods. Alternative methods developed for predicting the total tract and ileal digestibility of nutrients, including amino acids, in feedstuffs and diets for pigs are reviewed. This article focuses on two in vitro methods that can be used for the routine evaluation of amino acid ileal digestibility and energy value of pig feeds, and on factors affecting digestibility determined in vivo in pigs and by alternative methods. Validation of alternative methods has been carried out by comparing the results obtained using these methods with those acquired in vivo in pigs. In conclusion, energy and protein values of pig feeds may be estimated with satisfactory precision in rats and by the two- or three-step in vitro methods, providing equations for the calculation of standardized ileal digestibility of amino acids and metabolizable energy content. The use of alternative methods of feed evaluation is an important way to reduce stressful animal experiments.
ERIC Educational Resources Information Center
Green, Samuel B.; Yang, Yanyun
2009-01-01
A method is presented for estimating reliability using structural equation modeling (SEM) that allows for nonlinearity between factors and item scores. Assuming the focus is on consistency of summed item scores, this method for estimating reliability is preferred to those based on linear SEM models and to the most commonly reported estimate of…
Alvarez, Isaac; de la Torre, Angel; Sainz, Manuel; Roldan, Cristina; Schoesser, Hansjoerg; Spitzer, Philipp
2007-09-15
Stimulus artifact is one of the main limitations when considering the electrically evoked compound action potential for clinical applications. Alternating stimulation (averaging of recordings obtained with anodic-cathodic and cathodic-anodic bipolar stimulation pulses) is an effective method to reduce stimulus artifact when evoked potentials are recorded. In this paper we extend the concept of alternating stimulation by combining anodic-cathodic and cathodic-anodic recordings with a weight that is, in general, different from 0.5. We also provide an automatic method to obtain an estimation of the optimal weights. Comparison with conventional alternating stimulation, triphasic stimulation and the masker-probe paradigm shows that the generalized alternating method improves the quality of electrically evoked compound action potential responses.
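The core idea of a weighted (generalized) combination can be sketched as below. The selection criterion used here, minimizing residual energy inside a user-supplied artifact window, is an illustrative assumption and not the automatic weight-estimation method of the paper.

```python
import numpy as np

def weighted_alternating(resp_ac, resp_ca, artifact_window, weights=None):
    """Combine anodic-cathodic and cathodic-anodic recordings as w*AC + (1-w)*CA,
    choosing the weight that minimizes residual energy in an artifact window.
    The criterion is an assumption for illustration only."""
    weights = np.linspace(0.0, 1.0, 101) if weights is None else weights
    best_w, best_e = 0.5, np.inf
    for w in weights:
        combined = w * resp_ac + (1 - w) * resp_ca
        energy = np.sum(combined[artifact_window] ** 2)
        if energy < best_e:
            best_w, best_e = w, energy
    return best_w, best_w * resp_ac + (1 - best_w) * resp_ca
```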
Kuz'mina, N E; Iashkir, V A; Merkulov, V A; Osipova, E S
2012-01-01
A universal three-dimensional model of the nonselective opiate pharmacophore, created by means of an alternative structural-similarity search strategy, is described, together with a method based on it for estimating the agonistic and antagonistic properties of opiate receptor ligands. Examples are given of the use of the present method for estimating the opiate activity of compounds that differ substantially in structure from opiates and traditional opioids.
Couturier, Jean‐Luc; Kokossis, Antonis; Dubois, Jean‐Luc
2016-01-01
Abstract Biorefineries offer a promising alternative to fossil‐based processing industries and have undergone rapid development in recent years. Limited financial resources and stringent company budgets necessitate quick capital estimation of pioneering biorefinery projects at the early stages of their conception to screen process alternatives, decide on project viability, and allocate resources to the most promising cases. Biorefineries are capital‐intensive projects that involve state‐of‐the‐art technologies for which there is no prior experience or sufficient historical data. This work reviews existing rapid cost estimation practices, which can be used by researchers with no previous cost estimating experience. It also comprises a comparative study of six cost methods on three well‐documented biorefinery processes to evaluate their accuracy and precision. The results illustrate discrepancies among the methods because their extrapolation on biorefinery data often violates inherent assumptions. This study recommends the most appropriate rapid cost methods and urges the development of an improved early‐stage capital cost estimation tool suitable for biorefinery processes. PMID:27484398
Ever Enrolled Medicare Population Estimates from the MCBS Access to Care Files
Petroski, Jason; Ferraro, David; Chu, Adam
2014-01-01
Objective The Medicare Current Beneficiary Survey’s (MCBS) Access to Care (ATC) file is designed to provide timely access to information on the Medicare population, yet because of the survey’s complex sampling design and expedited processing it is difficult to use the file to make both “always-enrolled” and “ever-enrolled” estimates on the Medicare population. In this study, we describe the ATC file and sample design, and we evaluate and review various alternatives for producing “ever-enrolled” estimates. Methods We created “ever enrolled” estimates for key variables in the MCBS using three separate approaches. We tested differences between the alternative approaches for statistical significance and show the relative magnitude of difference between approaches. Results Even when estimates derived from the different approaches were statistically different, the magnitude of the difference was often sufficiently small so as to result in little practical difference among the alternate approaches. However, when considering more than just the estimation method, there are advantages to using certain approaches over others. Conclusion There are several plausible approaches to achieving “ever-enrolled” estimates in the MCBS ATC file; however, the most straightforward approach appears to be implementation and usage of a new set of “ever-enrolled” weights for this file. PMID:24991484
Coeli M. Hoover; Mark J. Ducey; R. Andy Colter; Mariko Yamasaki
2018-01-01
There is growing interest in estimating and mapping biomass and carbon content of forests across large landscapes. LiDAR-based inventory methods are increasingly common and have been successfully implemented in multiple forest types. Asner et al. (2011) developed a simple universal forest carbon estimation method for tropical forests that reduces the amount of required...
Brian Stone; William Obermann; Stephanie Snyder
2005-01-01
Outlines new methods for estimating vehicle miles of travel (VMT) under current demographic and land use conditions and projecting VMT under alternative future conditions. Reports on the role that VMT estimates play in evaluating how changing land use patterns and demographics may ultimately affect regional air quality and forest health.
ALTERNATIVE APPROACH TO ESTIMATING CANCER ...
The alternative approach for estimating cancer potency from inhalation exposure to asbestos seeks to improve the methods developed by USEPA (1986). This effort seeks to modify the current approach for estimating cancer potency for lung cancer and mesothelioma to account for the current scientific consensus that cancer risk from asbestos depends both on mineral type and on particle size distribution. In brief, epidemiological exposure-response data for lung cancer and mesothelioma in asbestos workers are combined with estimates of the mineral type(s) and particle size distribution at each exposure location in order to estimate potency factors that are specific to a selected set of mineral type and size
Estimation of toxicity using the Toxicity Estimation Software Tool (TEST)
Tens of thousands of chemicals are currently in commerce, and hundreds more are introduced every year. Since experimental measurements of toxicity are extremely time consuming and expensive, it is imperative that alternative methods to estimate toxicity are developed.
Comparison of Field Methods and Models to Estimate Mean Crown Diameter
William A. Bechtold; Manfred E. Mielke; Stanley J. Zarnoch
2002-01-01
The direct measurement of crown diameters with logger's tapes adds significantly to the cost of extensive forest inventories. We undertook a study of 100 trees to compare this measurement method to four alternatives: two field instruments, ocular estimates, and regression models. Using the taping method as the standard of comparison, accuracy of the tested...
Generalized Redistribute-to-the-Right Algorithm: Application to the Analysis of Censored Cost Data
CHEN, SHUAI; ZHAO, HONGWEI
2013-01-01
Medical cost estimation is a challenging task when censoring of data is present. Although researchers have proposed methods for estimating mean costs, these are often derived from theory and are not always easy to understand. We provide an alternative method, based on a replace-from-the-right algorithm, for estimating mean costs more efficiently. We show that our estimator is equivalent to an existing one that is based on the inverse probability weighting principle and semiparametric efficiency theory. We also propose an alternative method for estimating the survival function of costs, based on the redistribute-to-the-right algorithm, that was originally used for explaining the Kaplan–Meier estimator. We show that this second proposed estimator is equivalent to a simple weighted survival estimator of costs. Finally, we develop a more efficient survival estimator of costs, using the same redistribute-to-the-right principle. This estimator is naturally monotone, more efficient than some existing survival estimators, and has a quite small bias in many realistic settings. We conduct numerical studies to examine the finite sample property of the survival estimators for costs, and show that our new estimator has small mean squared errors when the sample size is not too large. We apply both existing and new estimators to a data example from a randomized cardiovascular clinical trial. PMID:24403869
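The following is a minimal sketch of the simple inverse-probability-weighted mean-cost estimator to which the proposed replace-from-the-right estimator is shown to be equivalent: only uncensored subjects contribute, each weighted by the inverse Kaplan-Meier probability of remaining uncensored at their follow-up time. Tie-handling conventions and the more efficient partitioned estimators from the paper are omitted, and all names are illustrative.

```python
import numpy as np

def km_survival(times, events):
    """Kaplan-Meier survival estimate evaluated at each subject's own time.
    events = 1 if the time is an observed event, 0 if censored."""
    order = np.argsort(times)
    t, d = times[order], events[order]
    at_risk = len(t) - np.arange(len(t))
    surv = np.cumprod(1.0 - d / at_risk)
    return np.interp(times, t, surv)       # step values at the subjects' times

def ipw_mean_cost(costs, follow_up, death):
    """Simple weighted estimator of mean cost under right censoring:
    uncensored subjects weighted by 1/K(T_i), K = censoring-distribution survival."""
    K = km_survival(follow_up, 1 - death)   # "event" for K is being censored
    keep = death == 1
    return np.mean(np.where(keep, costs / np.maximum(K, 1e-12), 0.0))

# Hypothetical data: costs, follow-up times, and death indicators (0 = censored)
costs     = np.array([12.0, 30.0, 7.5, 22.0, 15.0, 40.0])
follow_up = np.array([2.0, 5.0, 1.0, 4.0, 3.0, 6.0])
death     = np.array([1, 1, 0, 1, 0, 1])
print(ipw_mean_cost(costs, follow_up, death))
```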
College Quality and Early Adult Outcomes
ERIC Educational Resources Information Center
Long, Mark C.
2008-01-01
This paper estimates the effects of various college qualities on several early adult outcomes, using panel data from the National Education Longitudinal Study. I compare the results using ordinary least squares with three alternative methods of estimation, including instrumental variables, and the methods used by Dale and Krueger [(2002).…
Gallart, F; Llorens, P; Latron, J; Cid, N; Rieradevall, M; Prat, N
2016-09-15
Hydrological data for assessing the regime of temporary rivers are often non-existent or scarce. The scarcity of flow data makes it impossible to characterize the hydrological regime of temporary streams and, in consequence, to select the correct periods and methods to determine their ecological status. This is why the TREHS software is being developed, in the framework of the LIFE Trivers project. It will help managers to implement the European Water Framework Directive adequately in this kind of water body. TREHS, using the methodology described in Gallart et al. (2012), defines six transient 'aquatic states', based on hydrological conditions representing different mesohabitats, for a given reach at a particular moment. Because of its qualitative nature, this approach allows the use of alternative methodologies to assess the regime of temporary rivers when there are no observed flow data. These methods, based on interviews and high-resolution aerial photographs, were tested for estimating the aquatic regime of temporary rivers. All the gauging stations (13) belonging to the Catalan Internal Catchments (NE Spain) with recurrent zero-flow periods were selected to validate this methodology. On the one hand, non-structured interviews were conducted with inhabitants of villages near the gauging stations. On the other hand, the historical series of available orthophotographs were examined. Flow records measured at the gauging stations were used to validate the alternative methods. Flow permanence in the reaches was estimated reasonably by the interviews and adequately by aerial photographs, when compared with the values estimated using daily flows. The degree of seasonality was assessed only roughly by the interviews. The recurrence of disconnected pools was not detected by flow records but was estimated with some divergences by the two methods. The combination of the two alternative methods allows flow records to be substituted or complemented, and to be updated in the future through monitoring by professionals and citizens. Copyright © 2016 Elsevier B.V. All rights reserved.
Cano-García, Angel E.; Lazaro, José Luis; Infante, Arturo; Fernández, Pedro; Pompa-Chacón, Yamilet; Espinoza, Felipe
2012-01-01
In this study, a camera to infrared diode (IRED) distance estimation problem was analyzed. The main objective was to define an alternative method that measures depth using only the information extracted from the pixel grey levels of the IRED image to estimate the distance between the camera and the IRED. In this paper, the standard deviation of the pixel grey level in the region of interest containing the IRED image is proposed as an empirical parameter to define a model for estimating camera to emitter distance. This model includes the camera exposure time, IRED radiant intensity and the distance between the camera and the IRED. An expression for the standard deviation model related to these magnitudes was also derived and calibrated using different images taken under different conditions. From this analysis, we determined the optimum parameters to ensure the best accuracy provided by this alternative. Once the model calibration had been carried out, a differential method to estimate the distance between the camera and the IRED was defined and applied, considering that the camera was aligned with the IRED. The results indicate that this method represents a useful alternative for determining the depth information. PMID:22778608
Improved alternatives for estimating in-use material stocks.
Chen, Wei-Qiang; Graedel, T E
2015-03-03
Determinations of in-use material stocks are useful for exploring past patterns and future scenarios of materials use, for estimating end-of-life flows of materials, and thereby for guiding policies on recycling and sustainable management of materials. This is especially true when those determinations are conducted for individual products or product groups such as "automobiles" rather than general (and sometimes nebulous) sectors such as "transportation". We propose four alternatives to the existing top-down and bottom-up methods for estimating in-use material stocks, with the choice depending on the focus of the study and on the available data. We illustrate with aluminum use in automobiles the robustness of and consistencies and differences among these four alternatives and demonstrate that a suitable combination of the four methods permits estimation of the in-use stock of a material contained in all products employing that material, or in-use stocks of different materials contained in a particular product. Therefore, we anticipate the estimation in the future of in-use stocks for many materials in many products or product groups, for many regions, and for longer time periods, by taking advantage of methodologies that fully employ the detailed data sets now becoming available.
Estimating Demand for Alternatives to Cigarettes With Online Purchase Tasks
O’Connor, Richard J.; June, Kristie M.; Bansal-Travers, Maansi; Rousu, Matthew C.; Thrasher, James F.; Hyland, Andrew; Cummings, K. Michael
2013-01-01
Objectives This study explored how advertising affects demand for cigarettes and potential substitutes, including snus, dissolvable tobacco, and medicinal nicotine. Methods A web-based experiment randomized 1062 smokers to see advertisements for alternative nicotine products or soft drinks, then complete a series of purchase tasks, which were used to estimate demand elasticity, peak consumption, and cross-price elasticity (CPE) for tobacco products. Results Lower demand elasticity and greater peak consumption were seen for cigarettes compared to all alternative products (p < .05). CPE did not differ across the alternative products (p ≤ .03). Seeing relevant advertisements was not significantly related to demand. Conclusions These findings suggest significantly lower demand for alternative nicotine sources among smokers than previously revealed. PMID:24034685
NASA Astrophysics Data System (ADS)
Kasim, Maznah Mat; Abdullah, Siti Rohana Goh
2014-07-01
Many averaging methods are available to aggregate a set of numbers into a single number. However, these methods do not consider the interdependencies between the criteria underlying the related numbers. This paper highlights the Choquet integral method as an alternative aggregation method in which the interdependency estimates between the criteria are incorporated into the aggregation process. The interdependency values can be estimated by using the lambda fuzzy measure method. By considering the interdependencies or interactions between the criteria, the resulting aggregated values are more meaningful than those obtained by ordinary averaging methods. The application of the Choquet integral is illustrated in a case study of finding the overall academic achievement of year six pupils in a selected primary school in a northern state of Malaysia.
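A minimal sketch of the aggregation described above: the lambda parameter of a Sugeno (lambda-) fuzzy measure is solved from the criterion densities, and the Choquet integral is computed from the sorted scores. The densities and scores below are hypothetical, and the densities are assumed to lie in (0, 1).

```python
import numpy as np

def solve_lambda(densities):
    """Solve prod(1 + lam*g_i) = 1 + lam for the nonzero Sugeno lambda parameter."""
    g = np.asarray(densities, float)
    if abs(g.sum() - 1.0) < 1e-9:
        return 0.0                          # additive case
    f = lambda lam: np.prod(1 + lam * g) - (1 + lam)
    # lambda lies in (-1, 0) if sum(g) > 1, in (0, inf) if sum(g) < 1
    lo, hi = (-1 + 1e-9, -1e-12) if g.sum() > 1 else (1e-12, 1e6)
    for _ in range(200):                    # bisection
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def choquet(scores, densities):
    """Choquet integral of criterion scores with respect to a lambda-fuzzy measure."""
    scores, g = np.asarray(scores, float), np.asarray(densities, float)
    lam = solve_lambda(g)
    order = np.argsort(scores)              # ascending
    x = np.concatenate([[0.0], scores[order]])
    value = 0.0
    for i in range(len(scores)):
        remaining = g[order][i:]            # criteria whose score is >= x_(i)
        mu = remaining.sum() if lam == 0.0 else (np.prod(1 + lam * remaining) - 1) / lam
        value += (x[i + 1] - x[i]) * mu
    return value

# Three criteria with densities summing to less than 1 (positive interaction)
print(choquet([0.7, 0.5, 0.9], [0.3, 0.4, 0.2]))
```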
NASA Astrophysics Data System (ADS)
Martsynkovskyy, V.; Kirik, G.; Tarelnyk, V.; Zharkov, P.; Konoplianchenko, Ie; Dovzhyk, M.
2017-08-01
The results are presented of the influence of surface plastic deformation (SPD) methods, namely diamond smoothing (DS) and ball-rolling surface roughness generation (BSRG), on the quality parameters (residual stresses, fatigue strength and wear resistance) of steel substrate surface layers formed by the electroerosive alloying (EEA) method. The most rational deformation methods are proposed, along with a composition for electroerosive coatings that provides favorable residual compressive stresses in the surface layer and increases fatigue strength and wear resistance. Criteria are stated for estimating the alternative variants of the combined technologies and choosing the most rational ones.
Bojmehrani, Azadeh; Bergeron-Duchesne, Maude; Bouchard, Carmelle; Simard, Serge; Bouchard, Pierre-Alexandre; Vanderschuren, Abel; L'Her, Erwan; Lellouche, François
2014-07-01
Protective ventilation implementation requires the calculation of predicted body weight (PBW), determined by a formula based on gender and height. Consequently, height inaccuracy may be a limiting factor to correctly set tidal volumes. The objective of this study was to evaluate the accuracy of different methods in measuring heights in mechanically ventilated patients. Before cardiac surgery, actual height was measured with a height gauge while subjects were standing upright (reference method); the height was also estimated by alternative methods based on lower leg and forearm measurements. After cardiac surgery, upon ICU admission, a subject's height was visually estimated by a clinician and then measured with a tape measure while the subject was supine and undergoing mechanical ventilation. One hundred subjects (75 men, 25 women) were prospectively included. Mean PBW was 61.0 ± 9.7 kg, and mean actual weight was 30.3% higher. In comparison with the reference method, estimating the height visually and using the tape measure were less accurate than both lower leg and forearm measurements. Errors above 10% in calculating the PBW were present in 25 and 40 subjects when the tape measure or visual estimation of height was used in the formula, respectively. With lower leg and forearm measurements, 15 subjects had errors above 10% (P < .001). Our results demonstrate that significant variability exists between the different methods used to measure height in bedridden patients on mechanical ventilation. Alternative methods based on lower leg and forearm measurements are potentially interesting solutions to facilitate the accurate application of protective ventilation. Copyright © 2014 by Daedalus Enterprises.
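For context, the PBW calculation that height errors propagate into can be sketched with the widely cited ARDSNet-style equations; these coefficients are assumed here, since the abstract does not quote the exact formula used in the study.

```python
def predicted_body_weight(height_cm, sex):
    """ARDSNet-style predicted body weight (kg) from height and sex.
    Coefficients are the commonly cited values, assumed for illustration."""
    base = 50.0 if sex == "male" else 45.5
    return base + 0.91 * (height_cm - 152.4)

def protective_tidal_volume(height_cm, sex, ml_per_kg=6.0):
    """Target tidal volume (mL) for protective ventilation at a given mL/kg of PBW."""
    return ml_per_kg * predicted_body_weight(height_cm, sex)

# A 5 cm height error shifts the 6 mL/kg target by roughly 27 mL
print(protective_tidal_volume(175, "male"), protective_tidal_volume(170, "male"))
```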
Developing of method for primary frequency control droop and deadband actual values estimation
NASA Astrophysics Data System (ADS)
Nikiforov, A. A.; Chaplin, A. G.
2017-11-01
Operation of thermal power plant generation equipment that participates in standardized primary frequency control (SPFC) must meet specific requirements. These requirements are formalized as nine algorithmic criteria, which are used for automatic monitoring of power plant participation in SPFC. One of these criteria, estimation of the actual values of the primary frequency control droop and deadband, is considered in detail in this report. Experience shows that the existing estimation method sometimes does not work properly. The author offers an alternative method, which allows estimating the actual droop and deadband values more accurately. This method was implemented as a software application.
ERIC Educational Resources Information Center
Coskuntuncel, Orkun
2013-01-01
The purpose of this study is twofold; the first aim is to show the effect of outliers on the widely used least squares regression estimator in the social sciences. The second aim is to compare the classical method of least squares with the robust M-estimator using the coefficient of determination (R²). For this purpose,…
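The contrast the abstract describes can be illustrated with a short sketch: an ordinary least squares fit and a Huber M-estimator fit on data containing a few gross outliers. The use of statsmodels and the simulated data are assumptions; the study's own data and comparison procedure are not reproduced.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 50
x = rng.uniform(0, 10, n)
y = 2.0 + 0.8 * x + rng.normal(0, 1, n)
y[:3] += 25                                   # a few gross outliers

X = sm.add_constant(x)
ols = sm.OLS(y, X).fit()
rlm = sm.RLM(y, X, M=sm.robust.norms.HuberT()).fit()   # Huber M-estimator

print("OLS slope:", ols.params[1], " Huber slope:", rlm.params[1])
print("OLS R^2:", ols.rsquared)               # RLM results do not report a classical R^2
```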
Baker, Ronald J.; Chepiga, Mary M.; Cauller, Stephen J.
2015-01-01
The Kaplan-Meier method of estimating summary statistics from left-censored data was applied in order to include nondetects (left-censored data) in median nitrate-concentration calculations. Median concentrations also were determined using three alternative methods of handling nondetects. Treatment of the 23 percent of samples that were nondetects had little effect on estimated median nitrate concentrations because method detection limits were mostly less than median values.
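A common way to apply Kaplan-Meier machinery to left-censored concentration data is to flip the values about a constant larger than the maximum, treat nondetects as right-censored, and flip the resulting median back. A sketch of that idea with the lifelines package (hypothetical values, not the report's data):

    import numpy as np
    from lifelines import KaplanMeierFitter

    # Concentrations in mg/L; detected=False means a nondetect reported at its detection limit.
    conc     = np.array([0.8, 1.5, 2.3, 0.4, 3.1, 0.4, 5.2, 0.9])
    detected = np.array([True, True, True, False, True, False, True, True])

    flip = conc.max() + 1.0            # flip constant; left-censoring becomes right-censoring
    kmf = KaplanMeierFitter().fit(flip - conc, event_observed=detected)
    km_median = flip - kmf.median_survival_time_
    print("Kaplan-Meier median concentration:", round(km_median, 2))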
DOE Office of Scientific and Technical Information (OSTI.GOV)
Terzić, Balša; Bassi, Gabriele
In this paper we discuss representations of charge particle densities in particle-in-cell (PIC) simulations, analyze the sources and profiles of the intrinsic numerical noise, and present efficient methods for their removal. We devise two alternative estimation methods for charged particle distribution which represent significant improvement over the Monte Carlo cosine expansion used in the 2d code of Bassi, designed to simulate coherent synchrotron radiation (CSR) in charged particle beams. The improvement is achieved by employing an alternative beam density estimation to the Monte Carlo cosine expansion. The representation is first binned onto a finite grid, after which two grid-based methods are employed to approximate particle distributions: (i) truncated fast cosine transform (TFCT); and (ii) thresholded wavelet transform (TWT). We demonstrate that these alternative methods represent a staggering upgrade over the original Monte Carlo cosine expansion in terms of efficiency, while the TWT approximation also provides an appreciable improvement in accuracy. The improvement in accuracy comes from a judicious removal of the numerical noise enabled by the wavelet formulation. The TWT method is then integrated into Bassi's CSR code, and benchmarked against the original version. We show that the new density estimation method provides a superior performance in terms of efficiency and spatial resolution, thus enabling high-fidelity simulations of CSR effects, including microbunching instability.
Sparse Covariance Matrix Estimation With Eigenvalue Constraints
LIU, Han; WANG, Lie; ZHAO, Tuo
2014-01-01
We propose a new approach for estimating high-dimensional, positive-definite covariance matrices. Our method extends the generalized thresholding operator by adding an explicit eigenvalue constraint. The estimated covariance matrix simultaneously achieves sparsity and positive definiteness. The estimator is rate optimal in the minimax sense and we develop an efficient iterative soft-thresholding and projection algorithm based on the alternating direction method of multipliers. Empirically, we conduct thorough numerical experiments on simulated datasets as well as real data examples to illustrate the usefulness of our method. Supplementary materials for the article are available online. PMID:25620866
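A stripped-down illustration of the two ingredients combined here, soft-thresholding of off-diagonal entries for sparsity and a projection that keeps the smallest eigenvalue above a floor, is given below. This simple alternating scheme is an assumption for illustration only and is not the authors' ADMM algorithm.

    import numpy as np

    def soft_threshold(A, lam):
        S = np.sign(A) * np.maximum(np.abs(A) - lam, 0.0)
        np.fill_diagonal(S, np.diag(A))           # leave the diagonal unpenalized
        return S

    def eig_floor(A, eps):
        w, V = np.linalg.eigh((A + A.T) / 2.0)
        return (V * np.maximum(w, eps)) @ V.T     # clip eigenvalues at eps

    def sparse_pd_cov(sample_cov, lam=0.1, eps=1e-3, iters=50):
        """Alternate soft-thresholding and an eigenvalue floor (simplified stand-in for ADMM)."""
        S = sample_cov.copy()
        for _ in range(iters):
            S = eig_floor(soft_threshold(S, lam), eps)
        return S

    rng = np.random.default_rng(1)
    X = rng.normal(size=(40, 8))                  # n = 40 observations of p = 8 variables
    est = sparse_pd_cov(np.cov(X, rowvar=False))
    print("min eigenvalue:", np.linalg.eigvalsh(est).min())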
Estimating botanical composition by the dry-weight-rank method in California's annual grasslands
Raymond D. Ratliff; William E. Frost
1990-01-01
The dry-weight-rank method of estimating botanical composition on California's annual grasslands is a viable alternative to harvesting and sorting or methods using points. Two data sets of sorted species weights were available. One spanned nine years with quadrats harvested at peak of production. The second spanned one growing season with 20 harvest dates. Two...
Schoenecker, Kathryn A.; Lubow, Bruce C.
2016-01-01
Accurately estimating the size of wildlife populations is critical to wildlife management and conservation of species. Raw counts or “minimum counts” are still used as a basis for wildlife management decisions. Uncorrected raw counts are not only negatively biased due to failure to account for undetected animals, but also provide no estimate of precision on which to judge the utility of counts. We applied a hybrid population estimation technique that combined sightability modeling, radio collar-based mark-resight, and simultaneous double count (double-observer) modeling to estimate the population size of elk in a high elevation desert ecosystem. Combining several models maximizes the strengths of each individual model while minimizing their singular weaknesses. We collected data with aerial helicopter surveys of the elk population in the San Luis Valley and adjacent mountains in Colorado State, USA in 2005 and 2007. We present estimates from 7 alternative analyses: 3 based on different methods for obtaining a raw count and 4 based on different statistical models to correct for sighting probability bias. The most reliable of these approaches is a hybrid double-observer sightability model (model MH), which uses detection patterns of 2 independent observers in a helicopter plus telemetry-based detections of radio collared elk groups. Data were fit to customized mark-resight models with individual sighting covariates. Error estimates were obtained by a bootstrapping procedure. The hybrid method was an improvement over commonly used alternatives, with improved precision compared to sightability modeling and reduced bias compared to double-observer modeling. The resulting population estimate corrected for multiple sources of undercount bias that, if left uncorrected, would have underestimated the true population size by as much as 22.9%. Our comparison of these alternative methods demonstrates how various components of our method contribute to improving the final estimate and demonstrates why each is necessary.
A comparison of sap flux-based evapotranspiration estimates with catchment-scale water balance
Chelcy R. Ford; Robert M. Hubbard; Brian D. Kloeppel; James M. Vose
2007-01-01
Many researchers are using sap flux to estimate tree-level transpiration, and to scale to stand- and catchment-level transpiration; yet studies evaluating the comparability of sap flux-based estimates of transpiration (Et) with alternative methods for estimating Et at this spatial scale are rare. Our ability to...
Zhou, Xiang
2017-12-01
Linear mixed models (LMMs) are among the most commonly used tools for genetic association studies. However, the standard method for estimating variance components in LMMs, the restricted maximum likelihood estimation method (REML), suffers from several important drawbacks: REML requires individual-level genotypes and phenotypes from all samples in the study, is computationally slow, and produces downward-biased estimates in case control studies. To remedy these drawbacks, we present an alternative framework for variance component estimation, which we refer to as MQS. MQS is based on the method of moments (MoM) and the minimal norm quadratic unbiased estimation (MINQUE) criterion, and brings two seemingly unrelated methods, the renowned Haseman-Elston (HE) regression and the recent LD score regression (LDSC), into the same unified statistical framework. With this new framework, we provide an alternative but mathematically equivalent form of HE that allows for the use of summary statistics. We provide an exact estimation form of LDSC to yield unbiased and statistically more efficient estimates. A key feature of our method is its ability to pair marginal z-scores computed using all samples with SNP correlation information computed using a small random subset of individuals (or individuals from a proper reference panel), while capable of producing estimates that can be almost as accurate as if both quantities are computed using the full data. As a result, our method produces unbiased and statistically efficient estimates, and makes use of summary statistics, while it is computationally efficient for large data sets. Using simulations and applications to 37 phenotypes from 8 real data sets, we illustrate the benefits of our method for estimating and partitioning SNP heritability in population studies as well as for heritability estimation in family studies. Our method is implemented in the GEMMA software package, freely available at www.xzlab.org/software.html.
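The Haseman-Elston regression that MQS builds on has a very compact form: regress cross-products of standardized phenotypes for pairs of individuals on their genetic relatedness. A schematic sketch on simulated data (not the MQS estimator itself):

    import numpy as np

    def he_regression_h2(K, y):
        """Haseman-Elston cross-product regression: slope of y_i*y_j on K_ij over pairs i<j."""
        y = (y - y.mean()) / y.std()
        iu = np.triu_indices_from(K, k=1)
        k_ij = K[iu]
        cp_ij = np.outer(y, y)[iu]
        k_c = k_ij - k_ij.mean()
        return float(np.dot(k_c, cp_ij) / np.dot(k_c, k_c))   # slope ~ SNP heritability

    # Toy data: genotypes at 500 SNPs for 300 individuals, ~30% heritability.
    rng = np.random.default_rng(2)
    G = rng.binomial(2, 0.3, size=(300, 500)).astype(float)
    Z = (G - G.mean(0)) / G.std(0)
    K = Z @ Z.T / Z.shape[1]                                   # genetic relatedness matrix
    beta = rng.normal(0, np.sqrt(0.3 / 500), 500)
    y = Z @ beta + rng.normal(0, np.sqrt(0.7), 300)
    print("HE estimate of h2:", round(he_regression_h2(K, y), 2))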
Tsagkari, Mirela; Couturier, Jean-Luc; Kokossis, Antonis; Dubois, Jean-Luc
2016-09-08
Biorefineries offer a promising alternative to fossil-based processing industries and have undergone rapid development in recent years. Limited financial resources and stringent company budgets necessitate quick capital estimation of pioneering biorefinery projects at the early stages of their conception to screen process alternatives, decide on project viability, and allocate resources to the most promising cases. Biorefineries are capital-intensive projects that involve state-of-the-art technologies for which there is no prior experience or sufficient historical data. This work reviews existing rapid cost estimation practices, which can be used by researchers with no previous cost estimating experience. It also comprises a comparative study of six cost methods on three well-documented biorefinery processes to evaluate their accuracy and precision. The results illustrate discrepancies among the methods because their extrapolation on biorefinery data often violates inherent assumptions. This study recommends the most appropriate rapid cost methods and urges the development of an improved early-stage capital cost estimation tool suitable for biorefinery processes. © 2015 The Authors. Published by Wiley-VCH Verlag GmbH & Co. KGaA.
Inter-Industry Wage Differentials and the Gender Wage Gap: An Identification Problem.
ERIC Educational Resources Information Center
Horrace, William C.; Oaxaca, Ronald L.
2001-01-01
States that a method for estimating gender wage gaps by industry yields estimates that vary according to arbitrary choice of omitted reference groups. Suggests alternative methods not susceptible to this problem that can be applied to other contexts, such as racial, union/nonunion, and immigrant/native wage differences. (SK)
Fan, Ming; Kuwahara, Hiroyuki; Wang, Xiaolei; Wang, Suojin; Gao, Xin
2015-11-01
Parameter estimation is a challenging computational problem in the reverse engineering of biological systems. Because advances in biotechnology have facilitated wide availability of time-series gene expression data, systematic parameter estimation of gene circuit models from such time-series mRNA data has become an important method for quantitatively dissecting the regulation of gene expression. By focusing on the modeling of gene circuits, we examine here the performance of three types of state-of-the-art parameter estimation methods: population-based methods, online methods and model-decomposition-based methods. Our results show that certain population-based methods are able to generate high-quality parameter solutions. The performance of these methods, however, is heavily dependent on the size of the parameter search space, and their computational requirements substantially increase as the size of the search space increases. In comparison, online methods and model decomposition-based methods are computationally faster alternatives and are less dependent on the size of the search space. Among other things, our results show that a hybrid approach that augments computationally fast methods with local search as a subsequent refinement procedure can substantially increase the quality of their parameter estimates to the level on par with the best solution obtained from the population-based methods while maintaining high computational speed. These suggest that such hybrid methods can be a promising alternative to the more commonly used population-based methods for parameter estimation of gene circuit models when limited prior knowledge about the underlying regulatory mechanisms makes the size of the parameter search space vastly large. © The Author 2015. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
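The recommended hybrid strategy, a fast global search refined by a local optimizer, is easy to sketch for a toy gene-expression ODE. The two-parameter synthesis/decay model and the particular optimizers below are assumptions for illustration, not the paper's benchmark set.

    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import differential_evolution, minimize

    def model(t, m, k_syn, k_deg):
        return k_syn - k_deg * m            # toy mRNA synthesis/decay ODE

    def simulate(params, t_obs):
        k_syn, k_deg = params
        sol = solve_ivp(model, (0, t_obs[-1]), [0.0], t_eval=t_obs, args=(k_syn, k_deg))
        return sol.y[0]

    def sse(params, t_obs, data):
        return float(np.sum((simulate(params, t_obs) - data) ** 2))

    rng = np.random.default_rng(3)
    t_obs = np.linspace(0, 10, 21)
    data = simulate([2.0, 0.5], t_obs) + rng.normal(0, 0.1, t_obs.size)   # synthetic measurements

    bounds = [(0.01, 10.0), (0.01, 5.0)]
    coarse = differential_evolution(sse, bounds, args=(t_obs, data), maxiter=30, seed=3)
    refined = minimize(sse, coarse.x, args=(t_obs, data), method="Nelder-Mead")
    print("global estimate:", coarse.x, "refined:", refined.x)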
Estimating equations estimates of trends
Link, W.A.; Sauer, J.R.
1994-01-01
The North American Breeding Bird Survey monitors changes in bird populations through time using annual counts at fixed survey sites. The usual method of estimating trends has been to use the logarithm of the counts in a regression analysis. It is contended that this procedure is reasonably satisfactory for more abundant species, but produces biased estimates for less abundant species. An alternative estimation procedure based on estimating equations is presented.
Yelland, Lisa N; Salter, Amy B; Ryan, Philip
2011-10-15
Modified Poisson regression, which combines a log Poisson regression model with robust variance estimation, is a useful alternative to log binomial regression for estimating relative risks. Previous studies have shown both analytically and by simulation that modified Poisson regression is appropriate for independent prospective data. This method is often applied to clustered prospective data, despite a lack of evidence to support its use in this setting. The purpose of this article is to evaluate the performance of the modified Poisson regression approach for estimating relative risks from clustered prospective data, by using generalized estimating equations to account for clustering. A simulation study is conducted to compare log binomial regression and modified Poisson regression for analyzing clustered data from intervention and observational studies. Both methods generally perform well in terms of bias, type I error, and coverage. Unlike log binomial regression, modified Poisson regression is not prone to convergence problems. The methods are contrasted by using example data sets from 2 large studies. The results presented in this article support the use of modified Poisson regression as an alternative to log binomial regression for analyzing clustered prospective data when clustering is taken into account by using generalized estimating equations.
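In practice this amounts to fitting a log-link Poisson GEE to the binary outcome with an exchangeable working correlation and reading relative risks off the exponentiated coefficients, with the sandwich variance playing the role of the robust variance. A minimal sketch with statsmodels on simulated cluster-randomized data (variable names are placeholders):

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(4)
    n_clusters, m = 50, 20
    df = pd.DataFrame({
        "cluster": np.repeat(np.arange(n_clusters), m),
        "treat": np.repeat(rng.integers(0, 2, n_clusters), m),   # cluster-level treatment
    })
    cluster_effect = np.repeat(rng.normal(0, 0.2, n_clusters), m)
    p = np.clip(0.2 * np.exp(0.4 * df["treat"] + cluster_effect), 0, 1)
    df["y"] = rng.binomial(1, p)

    X = sm.add_constant(df[["treat"]])
    fit = sm.GEE(df["y"], X, groups=df["cluster"],
                 family=sm.families.Poisson(),
                 cov_struct=sm.cov_struct.Exchangeable()).fit()
    print("estimated relative risk:", np.exp(fit.params["treat"]))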
ESTIMATING TREATMENT EFFECTS ON HEALTHCARE COSTS UNDER EXOGENEITY: IS THERE A ‘MAGIC BULLET’?
Polsky, Daniel; Manning, Willard G.
2011-01-01
Methods for estimating average treatment effects, under the assumption of no unmeasured confounders, include regression models; propensity score adjustments using stratification, weighting, or matching; and doubly robust estimators (a combination of both). Researchers continue to debate about the best estimator for outcomes such as health care cost data, as they are usually characterized by an asymmetric distribution and heterogeneous treatment effects. Challenges in finding the right specifications for regression models are well documented in the literature. Propensity score estimators are proposed as alternatives for overcoming these challenges. Using simulations, we find that in moderate size samples (n = 5000), balancing on propensity scores that are estimated from saturated specifications can balance the covariate means across treatment arms but fails to balance higher-order moments and covariances amongst covariates. Therefore, unlike regression models, even if a formal model for outcomes is not required, propensity score estimators can be inefficient at best and biased at worst for health care cost data. Our simulation study, designed to take a ‘proof by contradiction’ approach, proves that no one estimator can be considered the best under all data generating processes for outcomes such as costs. The inverse-propensity weighted estimator is most likely to be unbiased under alternate data generating processes but is prone to bias under misspecification of the propensity score model and is inefficient compared to an unbiased regression estimator. Our results show that there are no ‘magic bullets’ when it comes to estimating treatment effects in health care costs. Care should be taken before naively applying any one estimator to estimate average treatment effects in these data. We illustrate the performance of alternative methods in a cost dataset on breast cancer treatment. PMID:22199462
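As one concrete point of reference, the inverse-propensity weighted estimator discussed above can be written in a few lines. This is a generic sketch on simulated skewed 'cost' data, not the authors' simulation design:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(5)
    n = 5000
    x = rng.normal(size=(n, 2))                              # observed confounders
    p_treat = 1 / (1 + np.exp(-(0.3 * x[:, 0] - 0.2 * x[:, 1])))
    t = rng.binomial(1, p_treat)
    cost = np.exp(7 + 0.3 * x[:, 0] + 0.25 * t + rng.normal(0, 0.8, n))   # skewed costs

    ps = sm.Logit(t, sm.add_constant(x)).fit(disp=0).predict()            # propensity scores
    w = t / ps + (1 - t) / (1 - ps)                                       # inverse-propensity weights
    ate = (np.average(cost[t == 1], weights=w[t == 1])
           - np.average(cost[t == 0], weights=w[t == 0]))
    print("IPW estimate of the average treatment effect on cost:", round(ate, 1))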
An analysis code for the Rapid Engineering Estimation of Momentum and Energy Losses (REMEL)
NASA Technical Reports Server (NTRS)
Dechant, Lawrence J.
1994-01-01
Nonideal behavior has traditionally been modeled by defining an efficiency (a comparison between actual and isentropic processes) and specifying it by empirical or heuristic methods. With the increasing complexity of aeropropulsion system designs, the reliability of these more traditional methods is uncertain. Computational fluid dynamics (CFD) and experimental methods can provide this information but are expensive in terms of human resources, cost, and time. This report discusses an alternative to empirical and CFD methods that applies classical analytical techniques and a simplified flow model to provide rapid engineering estimates of these losses, based on steady, quasi-one-dimensional governing equations including viscous and heat transfer terms (estimated by the Reynolds analogy). For a preliminary verification, REMEL results have been compared with full Navier-Stokes (FNS) and CFD boundary-layer computations for several high-speed inlet and forebody designs. The current methods compare quite well with results from these more complex methods, and solutions compare very well with simple degenerate and asymptotic results such as Fanno flow, isentropic variable-area flow, and a newly developed solution for combined variable-area duct flow with friction. These solution comparisons may offer an alternative to transitional and CFD-intensive methods for the rapid estimation of viscous and heat transfer losses in aeropropulsion systems.
NASA Astrophysics Data System (ADS)
Wani, Omar; Beckers, Joost V. L.; Weerts, Albrecht H.; Solomatine, Dimitri P.
2017-08-01
A non-parametric method is applied to quantify residual uncertainty in hydrologic streamflow forecasting. This method acts as a post-processor on deterministic model forecasts and generates a residual uncertainty distribution. Based on instance-based learning, it uses a k nearest-neighbour search for similar historical hydrometeorological conditions to determine uncertainty intervals from a set of historical errors, i.e. discrepancies between past forecast and observation. The performance of this method is assessed using test cases of hydrologic forecasting in two UK rivers: the Severn and Brue. Forecasts in retrospect were made and their uncertainties were estimated using kNN resampling and two alternative uncertainty estimators: quantile regression (QR) and uncertainty estimation based on local errors and clustering (UNEEC). Results show that kNN uncertainty estimation produces accurate and narrow uncertainty intervals with good probability coverage. Analysis also shows that the performance of this technique depends on the choice of search space. Nevertheless, the accuracy and reliability of uncertainty intervals generated using kNN resampling are at least comparable to those produced by QR and UNEEC. It is concluded that kNN uncertainty estimation is an interesting alternative to other post-processors, like QR and UNEEC, for estimating forecast uncertainty. Apart from its concept being simple and well understood, an advantage of this method is that it is relatively easy to implement.
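The core of the kNN post-processor is simple to sketch: find the k historical cases whose hydrometeorological predictors are closest to the current situation and take quantiles of their past forecast errors as the uncertainty interval. Variable names and the toy archive below are assumptions, not the authors' implementation.

    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    rng = np.random.default_rng(6)
    # Historical archive: predictors (e.g. forecast flow, rainfall) and the realized forecast errors.
    hist_X = rng.normal(size=(1000, 2))
    hist_err = 0.5 * hist_X[:, 0] + rng.normal(0, 0.3, 1000)

    def knn_interval(x_new, forecast, k=50, quantiles=(0.05, 0.95)):
        nn = NearestNeighbors(n_neighbors=k).fit(hist_X)
        _, idx = nn.kneighbors(np.atleast_2d(x_new))
        errs = hist_err[idx[0]]
        lo, hi = np.quantile(errs, quantiles)
        return forecast + lo, forecast + hi     # uncertainty band around the deterministic forecast

    print(knn_interval(np.array([0.8, -0.2]), forecast=120.0))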
NASA Astrophysics Data System (ADS)
Terzić, Balša; Bassi, Gabriele
2011-07-01
In this paper we discuss representations of charge particle densities in particle-in-cell simulations, analyze the sources and profiles of the intrinsic numerical noise, and present efficient methods for their removal. We devise two alternative estimation methods for charged particle distribution which represent significant improvement over the Monte Carlo cosine expansion used in the 2D code of Bassi et al. [G. Bassi, J. A. Ellison, K. Heinemann, and R. Warnock, Phys. Rev. ST Accel. Beams 12, 080704 (2009); G. Bassi and B. Terzić, in Proceedings of the 23rd Particle Accelerator Conference, Vancouver, Canada, 2009 (IEEE, Piscataway, NJ, 2009), TH5PFP043], designed to simulate coherent synchrotron radiation (CSR) in charged particle beams. The improvement is achieved by employing an alternative beam density estimation to the Monte Carlo cosine expansion. The representation is first binned onto a finite grid, after which two grid-based methods are employed to approximate particle distributions: (i) truncated fast cosine transform; and (ii) thresholded wavelet transform (TWT). We demonstrate that these alternative methods represent a staggering upgrade over the original Monte Carlo cosine expansion in terms of efficiency, while the TWT approximation also provides an appreciable improvement in accuracy. The improvement in accuracy comes from a judicious removal of the numerical noise enabled by the wavelet formulation. The TWT method is then integrated into the CSR code [G. Bassi, J. A. Ellison, K. Heinemann, and R. Warnock, Phys. Rev. ST Accel. Beams 12, 080704 (2009)], and benchmarked against the original version. We show that the new density estimation method provides a superior performance in terms of efficiency and spatial resolution, thus enabling high-fidelity simulations of CSR effects, including microbunching instability.
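The thresholded wavelet step can be illustrated independently of the CSR code: bin the particles onto a grid, take a 2D wavelet decomposition, zero out small detail coefficients, and reconstruct. A minimal sketch with PyWavelets, in which the wavelet family and threshold rule are assumptions rather than the parameters used in the paper:

    import numpy as np
    import pywt

    rng = np.random.default_rng(7)
    samples = rng.multivariate_normal([0, 0], [[1, 0.3], [0.3, 0.5]], size=20000)
    hist, _, _ = np.histogram2d(samples[:, 0], samples[:, 1], bins=64, density=True)

    coeffs = pywt.wavedec2(hist, wavelet="db4", level=3)       # 2D multilevel decomposition
    thresh = 0.1 * np.max(np.abs(coeffs[0]))                   # crude global threshold (assumption)
    denoised_coeffs = [coeffs[0]] + [
        tuple(pywt.threshold(c, thresh, mode="soft") for c in detail)
        for detail in coeffs[1:]
    ]
    density = pywt.waverec2(denoised_coeffs, wavelet="db4")    # smoothed density estimate
    print(hist.shape, density.shape)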
[Theory, method and application of method R on estimation of (co)variance components].
Liu, Wen-Zhong
2004-07-01
The theory, method, and application of Method R for estimating (co)variance components are reviewed so that the method can be used appropriately. Estimation requires R values, which are regressions of predicted random effects calculated using the complete dataset on predicted random effects calculated using random subsets of the same data. By using a multivariate iteration algorithm based on a transformation matrix, combined with the preconditioned conjugate gradient method to solve the mixed model equations, the computational efficiency of Method R is much improved. Method R is computationally inexpensive, and the sampling errors and approximate credible intervals of estimates can be obtained. Disadvantages of Method R include a larger sampling variance than other methods for the same data, and biased estimates in small datasets. As an alternative method, Method R can be used in larger datasets. It is necessary to study its theoretical properties and to broaden its application range further.
LaManna, Joseph A; Mangan, Scott A; Alonso, Alfonso; Bourg, Norman A; Brockelman, Warren Y; Bunyavejchewin, Sarayudh; Chang, Li-Wan; Chiang, Jyh-Min; Chuyong, George B; Clay, Keith; Cordell, Susan; Davies, Stuart J; Furniss, Tucker J; Giardina, Christian P; Gunatilleke, I A U Nimal; Gunatilleke, C V Savitri; He, Fangliang; Howe, Robert W; Hubbell, Stephen P; Hsieh, Chang-Fu; Inman-Narahari, Faith M; Janík, David; Johnson, Daniel J; Kenfack, David; Korte, Lisa; Král, Kamil; Larson, Andrew J; Lutz, James A; McMahon, Sean M; McShea, William J; Memiaghe, Hervé R; Nathalang, Anuttara; Novotny, Vojtech; Ong, Perry S; Orwig, David A; Ostertag, Rebecca; Parker, Geoffrey G; Phillips, Richard P; Sack, Lawren; Sun, I-Fang; Tello, J Sebastián; Thomas, Duncan W; Turner, Benjamin L; Vela Díaz, Dilys M; Vrška, Tomáš; Weiblen, George D; Wolf, Amy; Yap, Sandra; Myers, Jonathan A
2018-05-25
Chisholm and Fung claim that our method of estimating conspecific negative density dependence (CNDD) in recruitment is systematically biased, and present an alternative method that shows no latitudinal pattern in CNDD. We demonstrate that their approach produces strongly biased estimates of CNDD, explaining why they do not detect a latitudinal pattern. We also address their methodological concerns using an alternative distance-weighted approach, which supports our original findings of a latitudinal gradient in CNDD and a latitudinal shift in the relationship between CNDD and species abundance. Copyright © 2018, American Association for the Advancement of Science.
ERIC Educational Resources Information Center
Nevitt, Johnathan; Hancock, Gregory R.
Though common structural equation modeling (SEM) methods are predicated upon the assumption of multivariate normality, applied researchers often find themselves with data clearly violating this assumption and without sufficient sample size to use distribution-free estimation methods. Fortunately, promising alternatives are being integrated into…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kane, V.E.
1982-01-01
A class of goodness-of-fit estimators is found to provide a useful alternative, in certain situations, to the standard maximum likelihood method, which has some undesirable estimation characteristics for estimation from the three-parameter lognormal distribution. The class of goodness-of-fit tests considered includes the Shapiro-Wilk and Filliben tests, which reduce to a weighted linear combination of the order statistics that can be maximized in estimation problems. The weighted order statistic estimators are compared to the standard procedures in Monte Carlo simulations. Robustness of the procedures is examined, and example data sets are analyzed.
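The idea can be sketched directly: for a candidate threshold below the smallest observation, compute the Shapiro-Wilk statistic of the shifted logs and keep the threshold that maximizes it; the remaining two parameters then follow from the mean and standard deviation of those logs. A simple grid-search illustration (not the paper's procedure):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(8)
    x = np.exp(rng.normal(1.0, 0.4, 200)) + 5.0          # three-parameter lognormal, threshold 5

    def fit_3p_lognormal_sw(x, n_grid=200):
        gammas = np.linspace(x.min() - 3 * x.std(), x.min() - 1e-6, n_grid)
        w_stats = [stats.shapiro(np.log(x - g)).statistic for g in gammas]
        g_hat = gammas[int(np.argmax(w_stats))]          # threshold maximizing the W statistic
        logs = np.log(x - g_hat)
        return g_hat, logs.mean(), logs.std(ddof=1)      # (threshold, mu, sigma)

    print(fit_3p_lognormal_sw(x))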
Westreich, Daniel; Lessler, Justin; Funk, Michele Jonsson
2010-08-01
Propensity scores for the analysis of observational data are typically estimated using logistic regression. Our objective in this review was to assess machine learning alternatives to logistic regression, which may accomplish the same goals but with fewer assumptions or greater accuracy. We identified alternative methods for propensity score estimation and/or classification from the public health, biostatistics, discrete mathematics, and computer science literature, and evaluated these algorithms for applicability to the problem of propensity score estimation, potential advantages over logistic regression, and ease of use. We identified four techniques as alternatives to logistic regression: neural networks, support vector machines, decision trees (classification and regression trees [CART]), and meta-classifiers (in particular, boosting). Although the assumptions of logistic regression are well understood, those assumptions are frequently ignored. All four alternatives have advantages and disadvantages compared with logistic regression. Boosting (meta-classifiers) and, to a lesser extent, decision trees (particularly CART), appear to be most promising for use in the context of propensity score analysis, but extensive simulation studies are needed to establish their utility in practice. Copyright (c) 2010 Elsevier Inc. All rights reserved.
Peng, Lingling; Li, Yi; Feng, Hao
2017-07-14
Reference crop evapotranspiration (ETo) is a critically important parameter for climatological, hydrological and agricultural management. The FAO56 Penman-Monteith (PM) equation has been recommended as the standardized ETo (ETo,s) equation, but it has high climatic data requirements. There is a practical need to find the best alternative method for estimating ETo in regions where full climatic data are lacking. A comprehensive comparison was made of the spatiotemporal variations, relative errors, standard deviations and Nash-Sutcliffe efficiency coefficients of monthly or annual ETo,s and ETo,i (i = 1, 2, …, 10) values estimated by 10 selected methods (i.e., Irmak et al., Makkink, Priestley-Taylor, Hargreaves-Samani, Droogers-Allen, Berti et al., Doorenbos-Pruitt, Wright and Valiantzas, respectively), using data at 552 sites over 1961-2013 in mainland China. The method proposed by Berti et al. (2014) was selected as the best alternative to FAO56-PM because it is computationally simple, uses only temperature data, has generally good accuracy in describing the spatiotemporal characteristics of ETo,s in the different sub-regions and in mainland China, and correlates linearly with the FAO56-PM method very well. The parameters of the linear correlations between the ETo values of the two methods were calibrated for each site, with the smallest coefficient of determination being 0.87.
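Temperature-only alternatives of the kind selected here are close relatives of the Hargreaves-Samani form. The abstract does not give the recalibrated coefficients of Berti et al., so the sketch below uses the classic Hargreaves-Samani coefficients as an assumption to show what such a method looks like in code.

    import numpy as np

    def hargreaves_samani_eto(tmin, tmax, ra_mj):
        """Classic Hargreaves-Samani reference ET (mm/day).
        tmin, tmax in deg C; ra_mj is extraterrestrial radiation in MJ m-2 day-1."""
        tmean = (tmin + tmax) / 2.0
        ra_mm = ra_mj * 0.408                 # convert radiation to mm/day evaporation equivalent
        return 0.0023 * ra_mm * (tmean + 17.8) * np.sqrt(tmax - tmin)

    # Example: a mid-latitude summer day (values are illustrative, not station data from the study).
    print(round(hargreaves_samani_eto(tmin=16.0, tmax=30.0, ra_mj=40.0), 2), "mm/day")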
Reich, Christian G; Ryan, Patrick B; Schuemie, Martijn J
2013-10-01
A systematic risk identification system has the potential to test marketed drugs for important Health Outcomes of Interest or HOI. For each HOI, multiple definitions are used in the literature, and some of them are validated for certain databases. However, little is known about the effect of different definitions on the ability of methods to estimate their association with medical products. Alternative definitions of HOI were studied for their effect on the performance of analytical methods in observational outcome studies. A set of alternative definitions for three HOI were defined based on literature review and clinical diagnosis guidelines: acute kidney injury, acute liver injury and acute myocardial infarction. The definitions varied by the choice of diagnostic codes and the inclusion of procedure codes and lab values. They were then used to empirically study an array of analytical methods with various analytical choices in four observational healthcare databases. The methods were executed against predefined drug-HOI pairs to generate an effect estimate and standard error for each pair. These test cases included positive controls (active ingredients with evidence to suspect a positive association with the outcome) and negative controls (active ingredients with no evidence to expect an effect on the outcome). Three different performance metrics where used: (i) Area Under the Receiver Operator Characteristics (ROC) curve (AUC) as a measure of a method's ability to distinguish between positive and negative test cases, (ii) Measure of bias by estimation of distribution of observed effect estimates for the negative test pairs where the true effect can be assumed to be one (no relative risk), and (iii) Minimal Detectable Relative Risk (MDRR) as a measure of whether there is sufficient power to generate effect estimates. In the three outcomes studied, different definitions of outcomes show comparable ability to differentiate true from false control cases (AUC) and a similar bias estimation. However, broader definitions generating larger outcome cohorts allowed more drugs to be studied with sufficient statistical power. Broader definitions are preferred since they allow studying drugs with lower prevalence than the more precise or narrow definitions while showing comparable performance characteristics in differentiation of signal vs. no signal as well as effect size estimation.
A Penalized Robust Method for Identifying Gene-Environment Interactions
Shi, Xingjie; Liu, Jin; Huang, Jian; Zhou, Yong; Xie, Yang; Ma, Shuangge
2015-01-01
In high-throughput studies, an important objective is to identify gene-environment interactions associated with disease outcomes and phenotypes. Many commonly adopted methods assume specific parametric or semiparametric models, which may be subject to model mis-specification. In addition, they usually use significance level as the criterion for selecting important interactions. In this study, we adopt the rank-based estimation, which is much less sensitive to model specification than some of the existing methods and includes several commonly encountered data and models as special cases. Penalization is adopted for the identification of gene-environment interactions. It achieves simultaneous estimation and identification and does not rely on significance level. For computation feasibility, a smoothed rank estimation is further proposed. Simulation shows that under certain scenarios, for example with contaminated or heavy-tailed data, the proposed method can significantly outperform the existing alternatives with more accurate identification. We analyze a lung cancer prognosis study with gene expression measurements under the AFT (accelerated failure time) model. The proposed method identifies interactions different from those using the alternatives. Some of the identified genes have important implications. PMID:24616063
Penn, Elizabeth Maggie
2014-01-01
This article presents a new model for scoring alternatives from “contest” outcomes. The model is a generalization of the method of paired comparison to accommodate comparisons between arbitrarily sized sets of alternatives in which outcomes are any division of a fixed prize. Our approach is also applicable to contests between varying quantities of alternatives. We prove that under a reasonable condition on the comparability of alternatives, there exists a unique collection of scores that produces accurate estimates of the overall performance of each alternative and satisfies a well-known axiom regarding choice probabilities. We apply the method to several problems in which varying choice sets and continuous outcomes may create problems for standard scoring methods. These problems include measuring centrality in network data and the scoring of political candidates via a “feeling thermometer.” In the latter case, we also use the method to uncover and solve a potential difficulty with common methods of rescaling thermometer data to account for issues of interpersonal comparability. PMID:24748759
Kratochvíla, Jiří; Jiřík, Radovan; Bartoš, Michal; Standara, Michal; Starčuk, Zenon; Taxt, Torfinn
2016-03-01
One of the main challenges in quantitative dynamic contrast-enhanced (DCE) MRI is estimation of the arterial input function (AIF). Usually, the signal from a single artery (ignoring contrast dispersion, partial volume effects and flow artifacts) or a population average of such signals (also ignoring variability between patients) is used. Multi-channel blind deconvolution is an alternative approach avoiding most of these problems. The AIF is estimated directly from the measured tracer concentration curves in several tissues. This contribution extends the published methods of multi-channel blind deconvolution by applying a more realistic model of the impulse residue function, the distributed capillary adiabatic tissue homogeneity model (DCATH). In addition, an alternative AIF model is used and several AIF-scaling methods are tested. The proposed method is evaluated on synthetic data with respect to the number of tissue regions and to the signal-to-noise ratio. Evaluation on clinical data (renal cell carcinoma patients before and after the beginning of the treatment) gave consistent results. An initial evaluation on clinical data indicates more reliable and less noise sensitive perfusion parameter estimates. Blind multi-channel deconvolution using the DCATH model might be a method of choice for AIF estimation in a clinical setup. © 2015 Wiley Periodicals, Inc.
Can we improve C IV-based single epoch black hole mass estimations?
NASA Astrophysics Data System (ADS)
Mejía-Restrepo, J. E.; Trakhtenbrot, B.; Lira, P.; Netzer, H.
2018-05-01
In large optical surveys at high redshifts (z > 2), the C IV broad emission line is the most practical alternative to estimate the mass (MBH) of active super-massive black holes (SMBHs). However, mass determinations obtained with this line are known to be highly uncertain. In this work we use the Sloan Digital Sky Survey Data Release 7 and 12 quasar catalogues to statistically test three alternative methods put forward in the literature to improve C IV-based MBH estimations. These methods are constructed from correlations between the ratio of the C IV line-width to the low ionization line-widths (Hα, Hβ and Mg II) and several other properties of rest-frame UV emission lines. Our analysis suggests that these correction methods are of limited applicability, mostly because all of them depend on correlations that are driven by the linewidth of the C IV profile itself and not by an interconnection between the linewidth of the C IV line with the linewidth of the low ionization lines. Our results show that optical C IV-based mass estimates at high redshift cannot be a proper replacement for estimates based on IR spectroscopy of low ionization lines like Hα, Hβ and Mg II.
ERIC Educational Resources Information Center
Klinger, Don A.; Rogers, W. Todd
2003-01-01
The estimation accuracy of procedures based on classical test score theory and item response theory (generalized partial credit model) were compared for examinations consisting of multiple-choice and extended-response items. Analysis of British Columbia Scholarship Examination results found an error rate of about 10 percent for both methods, with…
A call to improve methods for estimating tree biomass for regional and national assessments
Aaron R. Weiskittel; David W. MacFarlane; Philip J. Radtke; David L.R. Affleck; Hailemariam Temesgen; Christopher W. Woodall; James A. Westfall; John W. Coulston
2015-01-01
Tree biomass is typically estimated using statistical models. This review highlights five limitations of most tree biomass models, which include the following: (1) biomass data are costly to collect and alternative sampling methods are used; (2) belowground data and models are generally lacking; (3) models are often developed from small and geographically limited data...
A comparison of alternative methods for estimating the self-thinning boundary line
Lianjun Zhang; Huiquan Bi; Jeffrey H. Gove; Linda S. Heath
2005-01-01
The fundamental validity of the self-thinning "law" has been debated over the last three decades. A long-standing concern centers on how to objectively select data points for fitting the self-thinning line and the most appropriate regression method for estimating the two coefficients. Using data from an even-aged Pinus strobus L. stand as an...
Fang, Yun; Wu, Hulin; Zhu, Li-Xing
2011-07-01
We propose a two-stage estimation method for random coefficient ordinary differential equation (ODE) models. A maximum pseudo-likelihood estimator (MPLE) is derived based on a mixed-effects modeling approach and its asymptotic properties for population parameters are established. The proposed method does not require repeatedly solving ODEs, and is computationally efficient although it does pay a price with the loss of some estimation efficiency. However, the method does offer an alternative approach when the exact likelihood approach fails due to model complexity and high-dimensional parameter space, and it can also serve as a method to obtain the starting estimates for more accurate estimation methods. In addition, the proposed method does not need to specify the initial values of state variables and preserves all the advantages of the mixed-effects modeling approach. The finite sample properties of the proposed estimator are studied via Monte Carlo simulations and the methodology is also illustrated with application to an AIDS clinical data set.
An Alternative Procedure for Estimating Unit Learning Curves,
1985-09-01
the model accurately describes the real-life situation, i.e., when the model is properly applied to the data, it can be a powerful tool for...predicting unit production costs. There are, however, some unique estimation problems inherent in the model. The usual method of generating predicted unit...production costs attempts to extend properties of least squares estimators to nonlinear functions of these estimators. The result is biased estimates of
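The unit learning curve in question is the power function Y(x) = A * x^b, with b = log(slope)/log(2); the estimation problem arises because a least-squares fit in log space, transformed back, gives biased unit-cost predictions. A small sketch contrasting the log-space fit with a direct nonlinear fit on synthetic data (not the report's):

    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(9)
    units = np.arange(1, 101)
    A_true, slope_true = 1000.0, 0.85                    # 85% learning curve
    b_true = np.log(slope_true) / np.log(2)
    cost = A_true * units ** b_true * np.exp(rng.normal(0, 0.05, units.size))

    # Log-space least squares (the usual method): biased when transformed back to unit costs.
    b_log, logA = np.polyfit(np.log(units), np.log(cost), 1)
    A_log = np.exp(logA)

    # Direct nonlinear least squares on the untransformed costs.
    (A_nls, b_nls), _ = curve_fit(lambda x, A, b: A * x ** b, units, cost, p0=[900.0, -0.2])

    print("log-space fit:", A_log, b_log, " nonlinear fit:", A_nls, b_nls)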
Review of Recent Methodological Developments in Group-Randomized Trials: Part 2-Analysis.
Turner, Elizabeth L; Prague, Melanie; Gallis, John A; Li, Fan; Murray, David M
2017-07-01
In 2004, Murray et al. reviewed methodological developments in the design and analysis of group-randomized trials (GRTs). We have updated that review with developments in analysis of the past 13 years, with a companion article to focus on developments in design. We discuss developments in the topics of the earlier review (e.g., methods for parallel-arm GRTs, individually randomized group-treatment trials, and missing data) and in new topics, including methods to account for multiple-level clustering and alternative estimation methods (e.g., augmented generalized estimating equations, targeted maximum likelihood, and quadratic inference functions). In addition, we describe developments in analysis of alternative group designs (including stepped-wedge GRTs, network-randomized trials, and pseudocluster randomized trials), which require clustering to be accounted for in their design and analysis.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-24
... Equation W-7 to allow for reporters to use alternative methods such as engineering estimates based on best... requirement in 40 CFR 98.236 for reporting of "annual throughput as determined by engineering estimate based...
Biologically plausible particulate air pollution mortality concentration-response functions.
Roberts, Steven
2004-01-01
In this article I introduce an alternative method for estimating particulate air pollution mortality concentration-response functions. This method constrains the particulate air pollution mortality concentration-response function to be biologically plausible--that is, a non-decreasing function of the particulate air pollution concentration. Using time-series data from Cook County, Illinois, the proposed method yields more meaningful particulate air pollution mortality concentration-response function estimates with an increase in statistical accuracy. PMID:14998745
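The plausibility constraint, a non-decreasing concentration-response function, can be illustrated with an isotonic fit of relative risk against concentration. This is a generic monotone-constrained sketch, not the time-series model used in the article.

    import numpy as np
    from sklearn.isotonic import IsotonicRegression

    rng = np.random.default_rng(10)
    pm = rng.uniform(5, 60, 500)                              # daily PM10 concentrations (ug/m3)
    rel_risk = 1.0 + 0.002 * pm + rng.normal(0, 0.03, 500)    # noisy, weakly increasing response

    iso = IsotonicRegression(increasing=True, out_of_bounds="clip")
    cr_hat = iso.fit_transform(pm, rel_risk)                  # non-decreasing fitted C-R function
    print("fitted risk at 10 and 50 ug/m3:", iso.predict([10.0])[0], iso.predict([50.0])[0])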
ERIC Educational Resources Information Center
Woods, Carol M.; Thissen, David
2006-01-01
The purpose of this paper is to introduce a new method for fitting item response theory models with the latent population distribution estimated from the data using splines. A spline-based density estimation system provides a flexible alternative to existing procedures that use a normal distribution, or a different functional form, for the…
Herbert Ssegane; Devendra M. Amatya; E.W. Tollner; Zhaohua Dai; Jami E. Nettles
2013-01-01
Commonly used methods to predict streamflow at ungauged watersheds implicitly predict streamflow magnitude and temporal sequence concurrently. An alternative approach that has not been fully explored is the conceptualization of streamflow as a composite of two separable components of magnitude and sequence, where each component is estimated separately and then combined...
Pointwise nonparametric maximum likelihood estimator of stochastically ordered survivor functions
Park, Yongseok; Taylor, Jeremy M. G.; Kalbfleisch, John D.
2012-01-01
In this paper, we consider estimation of survivor functions from groups of observations with right-censored data when the groups are subject to a stochastic ordering constraint. Many methods and algorithms have been proposed to estimate distribution functions under such restrictions, but none have completely satisfactory properties when the observations are censored. We propose a pointwise constrained nonparametric maximum likelihood estimator, which is defined at each time t by the estimates of the survivor functions subject to constraints applied at time t only. We also propose an efficient method to obtain the estimator. The estimator of each constrained survivor function is shown to be nonincreasing in t, and its consistency and asymptotic distribution are established. A simulation study suggests better small and large sample properties than for alternative estimators. An example using prostate cancer data illustrates the method. PMID:23843661
Yasaitis, Laura C; Arcaya, Mariana C; Subramanian, S V
2015-09-01
Creating local population health measures from administrative data would be useful for health policy and public health monitoring purposes. While a wide range of options for estimating such rates exists, from simple spatial smoothers to model-based methods, there are relatively few side-by-side comparisons, especially not with real-world data. In this paper, we compare methods for creating local estimates of acute myocardial infarction rates from Medicare claims data. A Bayesian Markov chain Monte Carlo estimator that incorporated spatial and local random effects performed best, followed by a method-of-moments spatial Empirical Bayes estimator. As the former is more complicated and time-consuming, spatial linear Empirical Bayes methods may represent a good alternative for non-specialist investigators. Copyright © 2015 Elsevier Ltd. All rights reserved.
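The method-of-moments Empirical Bayes idea has a compact non-spatial core: shrink each small-area rate toward the overall rate, with more shrinkage where the population at risk is small. The global moment estimator below is a simplified, non-spatial stand-in and not the exact estimator compared in the paper.

    import numpy as np

    def eb_smooth_rates(events, population):
        """Global method-of-moments Empirical Bayes shrinkage of small-area event rates."""
        raw = events / population
        m = events.sum() / population.sum()                   # overall rate
        # Between-area variance estimate (moment-based); floored at zero.
        s2 = np.average((raw - m) ** 2, weights=population) - m / population.mean()
        s2 = max(s2, 0.0)
        w = s2 / (s2 + m / population)                        # shrinkage weight per area
        return w * raw + (1 - w) * m

    events = np.array([2, 15, 7, 40, 1])
    population = np.array([300, 2500, 900, 6000, 150])
    print(eb_smooth_rates(events, population))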
Alternative Methods to Standby Gain Scheduling Following Air Data System Failure
2009-09-01
in the event of air data system failures. There are two problems with this current method. First, the pilot must take time away from other ...pertinent tasks to manually position the standby-gains via the landing gear handle, air-to-air refueling door switch or some other means. Second, the...the way, the original airspeed estimator was improved and two other alternatives to standby-gain-scheduling were investigated. Knowing what
NASA Astrophysics Data System (ADS)
Xiong, Yan; Reichenbach, Stephen E.
1999-01-01
Understanding of hand-written Chinese characters is at such a primitive stage that models include some assumptions about hand-written Chinese characters that are simply false, so Maximum Likelihood Estimation (MLE) may not be an optimal method for hand-written Chinese character recognition. This concern motivates the research effort to consider alternative criteria. Maximum Mutual Information Estimation (MMIE) is an alternative method for parameter estimation that does not derive its rationale from presumed model correctness, but instead examines the pattern-modeling problem in an automatic recognition system from an information-theoretic point of view. The objective of MMIE is to find a set of parameters such that the resultant model allows the system to derive from the observed data as much information as possible about the class. We consider MMIE for recognition of hand-written Chinese characters using a simplified hidden Markov random field. MMIE provides a performance improvement over MLE in this application.
NASA Astrophysics Data System (ADS)
Anayah, F. M.; Kaluarachchi, J. J.
2014-06-01
Reliable estimation of evapotranspiration (ET) is important for the purpose of water resources planning and management. Complementary methods, including complementary relationship areal evapotranspiration (CRAE), advection aridity (AA) and Granger and Gray (GG), have been used to estimate ET because these methods are simple and practical in estimating regional ET using meteorological data only. However, prior studies have found limitations in these methods, especially in contrasting climates. This study aims to develop a calibration-free universal method using the complementary relationships to compute regional ET in contrasting climatic and physical conditions with meteorological data only. The proposed methodology consists of a systematic sensitivity analysis using the existing complementary methods. This work used 34 global FLUXNET sites where eddy covariance (EC) fluxes of ET are available for validation. A total of 33 alternative model variations from the original complementary methods were proposed. Further analysis using statistical methods and simplified climatic class definitions produced one distinctly improved GG-model-based alternative. The proposed model produced a single-step ET formulation with results equal to or better than those of recent studies using data-intensive, classical methods. Average root mean square error (RMSE), mean absolute bias (BIAS) and R2 (coefficient of determination) across the 34 global sites were 20.57 mm month^-1, 10.55 mm month^-1 and 0.64, respectively. The proposed model represents a step forward toward predicting ET in large river basins with limited data and requiring no calibration.
WTA estimates using the method of paired comparison: tests of robustness
Patricia A. Champ; John B. Loomis
1998-01-01
The method of paired comparison is modified to allow choices between two alternative gains so as to estimate willingness to accept (WTA) without loss aversion. The robustness of WTA values for two public goods is tested with respect to sensitivity of the WTA measure to the context of the bundle of goods used in the paired comparison exercise and to the scope (scale) of...
NASA Astrophysics Data System (ADS)
See, J. J.; Jamaian, S. S.; Salleh, R. M.; Nor, M. E.; Aman, F.
2018-04-01
This research aims to estimate the parameters of the Monod model for growth of the microalga Botryococcus braunii sp. by the least-squares method. The Monod equation is a non-linear equation that can be transformed into linear form and solved by the least-squares linear regression method. Meanwhile, the Gauss-Newton method is an alternative method for solving the non-linear least-squares problem, with the aim of obtaining the parameter values of the Monod model by minimizing the sum of squared errors (SSE). As a result, the parameters of the Monod model for the microalga Botryococcus braunii sp. can be estimated by the least-squares method. However, the parameter estimates obtained by the non-linear least-squares method are more accurate than those from the linear least-squares method, since the SSE of the non-linear least-squares method is less than that of the linear least-squares method.
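The two approaches can be written side by side: the Monod equation mu = mu_max * S / (Ks + S) becomes linear in 1/S after taking reciprocals, 1/mu = (Ks/mu_max) * (1/S) + 1/mu_max, while the non-linear fit works on the original form. A sketch with synthetic data; scipy's curve_fit solver stands in here for a hand-coded Gauss-Newton iteration.

    import numpy as np
    from scipy.optimize import curve_fit

    def monod(S, mu_max, Ks):
        return mu_max * S / (Ks + S)

    rng = np.random.default_rng(11)
    S = np.array([0.1, 0.25, 0.5, 1.0, 2.0, 4.0, 8.0])           # substrate concentration
    mu = monod(S, 0.9, 0.6) * (1 + rng.normal(0, 0.05, S.size))  # noisy specific growth rates

    # Linearized least squares (double-reciprocal form): 1/mu = (Ks/mu_max)*(1/S) + 1/mu_max
    slope, intercept = np.polyfit(1 / S, 1 / mu, 1)
    mu_max_lin, Ks_lin = 1 / intercept, slope / intercept

    # Non-linear least squares on the original Monod form.
    (mu_max_nl, Ks_nl), _ = curve_fit(monod, S, mu, p0=[1.0, 1.0])

    print("linearized:", mu_max_lin, Ks_lin, " nonlinear:", mu_max_nl, Ks_nl)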
Comparing methodologies for the allocation of overhead and capital costs to hospital services.
Tan, Siok Swan; van Ineveld, Bastianus Martinus; Redekop, William Ken; Hakkaart-van Roijen, Leona
2009-06-01
Typically, little consideration is given to the allocation of indirect costs (overheads and capital) to hospital services, compared to the allocation of direct costs. Weighted service allocation is believed to provide the most accurate indirect cost estimation, but the method is time consuming. To determine whether hourly rate, inpatient day, and marginal mark-up allocation are reliable alternatives for weighted service allocation. The cost approaches were compared independently for appendectomy, hip replacement, cataract, and stroke in representative general hospitals in The Netherlands for 2005. Hourly rate allocation and inpatient day allocation produce estimates that are not significantly different from weighted service allocation. Hourly rate allocation may be a strong alternative to weighted service allocation for hospital services with a relatively short inpatient stay. The use of inpatient day allocation would likely most closely reflect the indirect cost estimates obtained by the weighted service method.
Asymptotic Normality of the Maximum Pseudolikelihood Estimator for Fully Visible Boltzmann Machines.
Nguyen, Hien D; Wood, Ian A
2016-04-01
Boltzmann machines (BMs) are a class of binary neural networks for which there have been numerous proposed methods of estimation. Recently, it has been shown that in the fully visible case of the BM, the method of maximum pseudolikelihood estimation (MPLE) results in parameter estimates, which are consistent in the probabilistic sense. In this brief, we investigate the properties of MPLE for the fully visible BMs further, and prove that MPLE also yields an asymptotically normal parameter estimator. These results can be used to construct confidence intervals and to test statistical hypotheses. These constructions provide a closed-form alternative to the current methods that require Monte Carlo simulation or resampling. We support our theoretical results by showing that the estimator behaves as expected in simulation studies.
Methods for analysis of cracks in three-dimensional solids
NASA Technical Reports Server (NTRS)
Raju, I. S.; Newman, J. C., Jr.
1984-01-01
Various analytical and numerical methods used to evaluate the stress intensity factors for cracks in three-dimensional (3-D) solids are reviewed. Classical exact solutions and many of the approximate methods used in 3-D analyses of cracks are reviewed. The exact solutions for embedded elliptic cracks in infinite solids are discussed. The approximate methods reviewed are the finite element methods, the boundary integral equation (BIE) method, the mixed methods (superposition of analytical and finite element method, stress difference method, discretization-error method, alternating method, finite element-alternating method), and the line-spring model. The finite element method with singularity elements is the most widely used method. The BIE method only needs modeling of the surfaces of the solid and so is gaining popularity. The line-spring model appears to be the quickest way to obtain good estimates of the stress intensity factors. The finite element-alternating method appears to yield the most accurate solution at the minimum cost.
An open tool for input function estimation and quantification of dynamic PET FDG brain scans.
Bertrán, Martín; Martínez, Natalia; Carbajal, Guillermo; Fernández, Alicia; Gómez, Álvaro
2016-08-01
Positron emission tomography (PET) analysis of clinical studies is mostly restricted to qualitative evaluation. Quantitative analysis of PET studies is highly desirable to be able to compute an objective measurement of the process of interest in order to evaluate treatment response and/or compare patient data. But implementation of quantitative analysis generally requires the determination of the input function: the arterial blood or plasma activity which indicates how much tracer is available for uptake in the brain. The purpose of our work was to share with the community an open software tool that can assist in the estimation of this input function, and the derivation of a quantitative map from the dynamic PET study. Arterial blood sampling during the PET study is the gold standard method to get the input function, but is uncomfortable and risky for the patient so it is rarely used in routine studies. To overcome the lack of a direct input function, different alternatives have been devised and are available in the literature. These alternatives derive the input function from the PET image itself (image-derived input function) or from data gathered from previous similar studies (population-based input function). In this article, we present ongoing work that includes the development of a software tool that integrates several methods with novel strategies for the segmentation of blood pools and parameter estimation. The tool is available as an extension to the 3D Slicer software. Tests on phantoms were conducted in order to validate the implemented methods. We evaluated the segmentation algorithms over a range of acquisition conditions and vasculature size. Input function estimation algorithms were evaluated against ground truth of the phantoms, as well as on their impact over the final quantification map. End-to-end use of the tool yields quantification maps with [Formula: see text] relative error in the estimated influx versus ground truth on phantoms. The main contribution of this article is the development of an open-source, free to use tool that encapsulates several well-known methods for the estimation of the input function and the quantification of dynamic PET FDG studies. Some alternative strategies are also proposed and implemented in the tool for the segmentation of blood pools and parameter estimation. The tool was tested on phantoms with encouraging results that suggest that even bloodless estimators could provide a viable alternative to blood sampling for quantification using graphical analysis. The open tool is a promising opportunity for collaboration among investigators and further validation on real studies.
Erman, A; Sathya, A; Nam, A; Bielecki, J M; Feld, J J; Thein, H-H; Wong, W W L; Grootendorst, P; Krahn, M D
2018-05-01
Chronic hepatitis C (CHC) is a leading cause of hepatic fibrosis and cirrhosis. The level of fibrosis is traditionally established by histology, and prognosis is estimated using fibrosis progression rates (FPRs; annual probability of progressing across histological stages). However, newer noninvasive alternatives are quickly replacing biopsy. One alternative, transient elastography (TE), quantifies fibrosis by measuring liver stiffness (LSM). Given these developments, the purpose of this study was (i) to estimate prognosis in treatment-naïve CHC patients using TE-based liver stiffness progression rates (LSPR) as an alternative to FPRs and (ii) to compare consistency between LSPRs and FPRs. A systematic literature search was performed using multiple databases (January 1990 to February 2016). LSPRs were calculated using either a direct method (given the difference in serial LSMs and time elapsed) or an indirect method given a single LSM and the estimated duration of infection and pooled using random-effects meta-analyses. For validation purposes, FPRs were also estimated. Heterogeneity was explored by random-effects meta-regression. Twenty-seven studies reporting on 39 groups of patients (N = 5874) were identified with 35 groups allowing for indirect and 8 for direct estimation of LSPR. The majority (~58%) of patients were HIV/HCV-coinfected. The estimated time-to-cirrhosis based on TE vs biopsy was 39 and 38 years, respectively. In univariate meta-regressions, male sex and HIV were positively and age at assessment, negatively associated with LSPRs. Noninvasive prognosis of HCV is consistent with FPRs in predicting time-to-cirrhosis, but more longitudinal studies of liver stiffness are needed to obtain refined estimates. © 2017 John Wiley & Sons Ltd.
A comparative review of estimates of the proportion unchanged genes and the false discovery rate
Broberg, Per
2005-01-01
Background: In the analysis of microarray data one generally produces a vector of p-values that, for each gene, give the probability of obtaining equally strong evidence of change by pure chance. The distribution of these p-values is a mixture of two components corresponding to the changed genes and the unchanged ones. The focus of this article is how to estimate the proportion of unchanged genes and the false discovery rate (FDR), and how to make inferences based on these concepts. Six published methods for estimating the proportion of unchanged genes are reviewed, two alternatives are presented, and all are tested on both simulated and real data. All estimates but one make do without any parametric assumptions concerning the distributions of the p-values. Furthermore, the estimation and use of the FDR and the closely related q-value are illustrated with examples. Five published estimates of the FDR and one new one are presented and tested. Implementations in R code are available. Results: A simulation model based on the distribution of real microarray data, plus two real data sets, was used to assess the methods. The proposed alternative methods for estimating the proportion of unchanged genes fared very well, with low bias and very low variance. Different methods perform well depending upon whether there are few or many regulated genes. Furthermore, the methods for estimating the FDR showed varying performance and were sometimes misleading. The new method had a very low error. Conclusion: The concept of the q-value or false discovery rate is useful in practical research, despite some theoretical and practical shortcomings. However, it seems possible to challenge the performance of the published methods, and there is likely scope for further developing the estimates of the FDR. The new methods provide the scientist with more options for choosing a suitable method for any particular experiment. The article advocates using the conjoint information on false positive and false negative rates, as well as the proportion of unchanged genes, when identifying changed genes. PMID:16086831
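As an illustration of the kind of estimator the article reviews, the sketch below computes a lambda-based estimate of the proportion of unchanged genes (in the spirit of Storey's pi0 estimator) and converts p-values to q-values. It is a generic Python illustration, not one of the specific methods compared in the article.

    import numpy as np

    def estimate_pi0(pvalues, lam=0.5):
        """Estimate the proportion of unchanged genes (pi0).

        P-values of unchanged genes are roughly uniform, so the density of
        p-values above `lam` is dominated by the unchanged component.
        """
        p = np.asarray(pvalues, dtype=float)
        pi0 = np.mean(p > lam) / (1.0 - lam)
        return min(max(pi0, 0.0), 1.0)

    def q_values(pvalues, pi0=None):
        """Convert p-values to q-values (estimated FDR at each rejection threshold)."""
        p = np.asarray(pvalues, dtype=float)
        m = len(p)
        if pi0 is None:
            pi0 = estimate_pi0(p)
        order = np.argsort(p)
        q = np.empty(m)
        running_min = 1.0
        # Walk from the largest p-value down, enforcing monotone q-values.
        for rank, idx in enumerate(order[::-1]):
            i = m - rank  # 1-based rank of this p-value among all m
            running_min = min(running_min, pi0 * p[idx] * m / i)
            q[idx] = running_min
        return q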
Eisenhauer, Philipp; Heckman, James J.; Mosso, Stefano
2015-01-01
We compare the performance of maximum likelihood (ML) and simulated method of moments (SMM) estimation for dynamic discrete choice models. We construct and estimate a simplified dynamic structural model of education that captures some basic features of educational choices in the United States in the 1980s and early 1990s. We use estimates from our model to simulate a synthetic dataset and assess the ability of ML and SMM to recover the model parameters on this sample. We investigate the performance of alternative tuning parameters for SMM. PMID:26494926
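To make the contrast concrete, a bare-bones simulated-method-of-moments criterion looks like the following. The function and variable names are illustrative; the actual exercise involves simulating the structural education-choice model rather than an arbitrary simulate_moments routine.

    import numpy as np
    from scipy.optimize import minimize

    def smm_objective(theta, data_moments, simulate_moments, weight_matrix):
        """SMM criterion: (m_data - m_sim(theta))' W (m_data - m_sim(theta))."""
        g = data_moments - simulate_moments(theta)
        return g @ weight_matrix @ g

    # Hypothetical usage: `simulate_moments` runs the structural model on a
    # fixed set of simulation draws and returns the same moments computed on
    # the observed data; derivative-free optimizers are common because the
    # simulated criterion is typically non-smooth in theta.
    # result = minimize(smm_objective, theta_start,
    #                   args=(m_data, simulate_moments, W), method="Nelder-Mead")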
Methods for Estimating Environmental Effects and Constraints on NextGen: High Density Case Study
NASA Technical Reports Server (NTRS)
Augustine, S.; Ermatinger, C.; Graham, M.; Thompson, T.
2010-01-01
This document provides a summary of the current methods developed by Metron Aviation for estimating environmental effects and constraints on the Next Generation Air Transportation System (NextGen). This body of work incorporates many of the key elements necessary to achieve such an estimate. Each section contains the background and motivation for the technical elements of the work, a description of the methods used, and possible next steps. The current methods described in this document were selected in an attempt to provide a good balance between accuracy and fairly rapid turnaround times, to best advance Joint Planning and Development Office (JPDO) System Modeling and Analysis Division (SMAD) objectives while also supporting the needs of the JPDO Environmental Working Group (EWG). In particular, this document describes methods applied to support the High Density (HD) Case Study performed during the spring of 2008. A reference day (in 2006) is modeled to describe current system capabilities, while future demand is applied to multiple alternatives to analyze system performance. The major variables in the alternatives are operational/procedural capabilities for airport, terminal, and en route airspace, along with projected improvements to airframe, engine, and navigational equipment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Young, Carl; Rahman, Mahmudur; Johnson, Ann
2013-07-01
The U.S. Army Corps of Engineers (USACE) - Philadelphia District is conducting an environmental restoration at the DuPont Chambers Works in Deepwater, New Jersey under the Formerly Utilized Sites Remedial Action Program (FUSRAP). Discrete locations are contaminated with natural uranium, thorium-230 and radium-226. The USACE is proposing a preferred remedial alternative consisting of excavation and offsite disposal to address soil contamination followed by monitored natural attenuation to address residual groundwater contamination. Methods were developed to quantify the error associated with contaminant volume estimates and use mass balance calculations of the uranium plume to estimate the removal efficiency of the proposed alternative. During the remedial investigation, the USACE collected approximately 500 soil samples at various depths. As the first step of contaminant mass estimation, soil analytical data was segmented into several depth intervals. Second, using contouring software, analytical data for each depth interval was contoured to determine lateral extent of contamination. Six different contouring algorithms were used to generate alternative interpretations of the lateral extent of the soil contamination. Finally, geographical information system software was used to produce a three dimensional model in order to present both lateral and vertical extent of the soil contamination and to estimate the volume of impacted soil for each depth interval. The average soil volume from all six contouring methods was used to determine the estimated volume of impacted soil. This method also allowed an estimate of a standard deviation of the waste volume estimate. It was determined that the margin of error for the method was plus or minus 17% of the waste volume, which is within the acceptable construction contingency for cost estimation. USACE collected approximately 190 groundwater samples from 40 monitor wells. It is expected that excavation and disposal of contaminated soil will remove the contaminant source zone and significantly reduce contaminant concentrations in groundwater. To test this assumption, a mass balance evaluation was performed to estimate the amount of dissolved uranium that would remain in the groundwater after completion of soil excavation. As part of this evaluation, average groundwater concentrations for the pre-excavation and post-excavation aquifer plume area were calculated to determine the percentage of plume removed during excavation activities. In addition, the volume of the plume removed during excavation dewatering was estimated. The results of the evaluation show that approximately 98% of the aqueous uranium would be removed during the excavation phase. The USACE expects that residual levels of contamination will remain in groundwater after excavation of soil but at levels well suited for the selection of excavation combined with monitored natural attenuation as a preferred alternative. (authors)
Bernard R. Parresol; Charles E. Thomas
1996-01-01
In the wood utilization industry, both stem profile and biomass are important quantities. The two have traditionally been estimated separately. The introduction of a density-integral method allows for coincident estimation of stem profile and biomass, based on the calculus of mass theory, and provides an alternative to weight-ratio methodology. In the initial...
Robert E. Keane; Laura J. Dickinson
2007-01-01
Fire managers need better estimates of fuel loading so they can more accurately predict the potential fire behavior and effects of alternative fuel and ecosystem restoration treatments. This report presents a new fuel sampling method, called the photoload sampling technique, to quickly and accurately estimate loadings for six common surface fuel components (1 hr, 10 hr...
Beckowski, Meghan Short; Goyal, Abhinav; Goetzel, Ron Z; Rinehart, Christine L; Darling, Kathryn J; Yarborough, Charles M
2012-08-01
To determine the most appropriate methods for estimating the prevalence and incidence of coronary heart disease (CHD), the associated risk factors, and health care costs in a corporate setting. Using medical insurance claims data for the period of 2005-2009 from 18 companies in the Thomson Reuters MarketScan® database, we evaluated three alternative methods. Prevalence of CHD ranged from 2.1% to 4.0% using a method requiring a second confirmatory claim. Annual incidence of CHD ranged from 1.0% to 1.6% using a method requiring 320 days of benefits enrollment in the previous year, and one claim for a diagnosis of CHD. Alternative methods for determining the epidemiologic and cost burden of CHD using insurance claims data were explored. These methods can inform organizations that want to quantify the health and cost burden of various diseases common among an employed population.
Ragagnin, Marilia Nagata; Gorman, Daniel; McCarthy, Ian Donald; Sant'Anna, Bruno Sampaio; de Castro, Cláudio Campi; Turra, Alexander
2018-01-11
Obtaining accurate and reproducible estimates of internal shell volume is a vital requirement for studies into the ecology of a range of shell-occupying organisms, including hermit crabs. Shell internal volume is usually estimated by filling the shell cavity with water or sand; however, there has been no systematic assessment of the reliability of these methods, and no comparison with modern alternatives such as computed tomography (CT). This study undertakes the first assessment of the measurement reproducibility of three contrasting approaches across a spectrum of shell architectures and sizes. While our results suggested a certain level of variability inherent in all methods, we conclude that a single measure using sand/water is likely to be sufficient for the majority of studies. However, care must be taken, as precision may decline with increasing shell size and structural complexity. CT provided less variation between repeat measures, but its volume estimates were consistently lower than those from sand/water, and the approach will need methodological improvements before it can be used as an alternative. CT also indicated that volume may be underestimated using sand/water, owing to air spaces visible in filled shells scanned by CT. Lastly, we encourage authors to clearly describe how volume estimates were obtained.
MMA, A Computer Code for Multi-Model Analysis
Poeter, Eileen P.; Hill, Mary C.
2007-01-01
This report documents the Multi-Model Analysis (MMA) computer code. MMA can be used to evaluate results from alternative models of a single system using the same set of observations for all models. As long as the observations, the observation weighting, and the system being represented are the same, the models can differ in nearly any way imaginable. For example, they may include different processes, different simulation software, different temporal definitions (for example, steady-state and transient models could be considered), and so on. The multiple models need to be calibrated by nonlinear regression. Calibration of the individual models needs to be completed before application of MMA. MMA can be used to rank models and calculate posterior model probabilities. These can be used to (1) determine the relative importance of the characteristics embodied in the alternative models, (2) calculate model-averaged parameter estimates and predictions, and (3) quantify the uncertainty of parameter estimates and predictions in a way that integrates the variations represented by the alternative models. There is a lack of consensus on what model analysis methods are best, so MMA provides four default methods. Two are based on Kullback-Leibler information, and use the AIC (Akaike Information Criterion) or AICc (second-order-bias-corrected AIC) model discrimination criteria. The other two default methods are the BIC (Bayesian Information Criterion) and the KIC (Kashyap Information Criterion) model discrimination criteria. Use of the KIC criterion is equivalent to using the maximum-likelihood Bayesian model averaging (MLBMA) method. AIC, AICc, and BIC can be derived from Frequentist or Bayesian arguments. The default methods based on Kullback-Leibler information have a number of theoretical advantages, including that they tend, more than the other methods do, to favor more complicated models as more data become available, which makes sense in many situations. Many applications of MMA will be well served by the default methods provided. To use the default methods, the only required input for MMA is a list of directories where the files for the alternate models are located. Evaluation and development of model-analysis methods are active areas of research. To facilitate exploration and innovation, MMA allows the user broad discretion to define alternatives to the default procedures. For example, MMA allows the user to (a) rank models based on model criteria defined using a wide range of provided and user-defined statistics in addition to the default AIC, AICc, BIC, and KIC criteria, (b) create their own criteria using model measures available from the code, and (c) define how each model criterion is used to calculate related posterior model probabilities. The default model criteria rate models based on model fit to observations, the number of observations and estimated parameters, and, for KIC, the Fisher information matrix. In addition, MMA allows the analysis to include an evaluation of estimated parameter values. This is accomplished by allowing the user to define unreasonable estimated parameter values or relative estimated parameter values. An example of the latter is that it may be expected that one parameter value will be less than another, as might be the case if two parameters represented the hydraulic conductivity of distinct materials such as fine and coarse sand. Models with parameter values that violate the user-defined conditions are excluded from further consideration by MMA.
Ground-water models are used as examples in this report, but MMA can be used to evaluate any set of models for which the required files have been produced. MMA needs to read files from a separate directory for each alternative model considered. The needed files are produced when using the Sensitivity-Analysis or Parameter-Estimation mode of UCODE_2005, or, possibly, the equivalent capability of another program. MMA is constructed using
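For readers unfamiliar with how information criteria translate into posterior model probabilities and model-averaged results, the minimal sketch below shows the usual exp(-delta/2) weighting. It is a generic illustration, not an excerpt from the MMA code.

    import numpy as np

    def posterior_model_probabilities(criteria):
        """Turn model-discrimination criteria (AIC, AICc, BIC, or KIC values,
        one per model) into posterior model probabilities."""
        c = np.asarray(criteria, dtype=float)
        delta = c - c.min()          # differences from the best (smallest) criterion
        w = np.exp(-0.5 * delta)
        return w / w.sum()

    def model_averaged_prediction(predictions, probabilities):
        """Model-averaged prediction: probability-weighted mean across models."""
        return np.average(predictions, weights=probabilities, axis=0)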
The 'Own Children' fertility estimation procedure: a reappraisal.
Avery, Christopher; St Clair, Travis; Levin, Michael; Hill, Kenneth
2013-07-01
The Full Birth History has become the dominant source of estimates of fertility levels and trends for countries lacking complete birth registration. An alternative, the 'Own Children' method, derives fertility estimates from household age distributions, but is now rarely used, partly because of concerns about its accuracy. We compared the estimates from these two procedures by applying them to 56 recent Demographic and Health Surveys. On average, 'Own Children' estimates of recent total fertility rates are 3 per cent lower than birth-history estimates. Much of this difference stems from selection bias in the collection of birth histories: women with more children are more likely to be interviewed. We conclude that full birth histories overestimate total fertility, and that the 'Own Children' method gives estimates of total fertility that may better reflect overall national fertility. We recommend the routine application of the 'Own Children' method to census and household survey data to estimate fertility levels and trends.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaplanoglu, Erkan; Safak, Koray K.; Varol, H. Selcuk
2009-01-12
An experiment-based method is proposed for parameter estimation of a class of linear multivariable systems. The method was applied to a pressure-level control process. Experimental time-domain input/output data were utilized in a gray-box modeling approach. The form of the system transfer function matrix elements is assumed to be known a priori. Continuous-time system transfer function matrix parameters were estimated in real time by the least-squares method. Simulation results of the experimentally determined system transfer function matrix compare very well with the experimental results. For comparison, and as an alternative to the proposed real-time estimation method, we also implemented an offline identification method using artificial neural networks and obtained fairly good results. The proposed methods can be implemented conveniently on a desktop PC equipped with a data acquisition board for parameter estimation of moderately complex linear multivariable systems.
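A generic recursive least-squares update of the kind that underlies such real-time parameter estimation is sketched below; the transfer-function parameterization, sampling, and data-acquisition details of the pressure-level process are not reproduced, and the forgetting factor is an illustrative extra.

    import numpy as np

    def rls_update(theta, P, phi, y, forgetting=1.0):
        """One recursive least-squares step for a linear-in-parameters model
        y = phi' theta + e, with optional exponential forgetting."""
        phi = phi.reshape(-1, 1)
        denom = forgetting + (phi.T @ P @ phi).item()
        k = (P @ phi) / denom                 # gain vector
        err = y - (phi.T @ theta).item()      # one-step prediction error
        theta = theta + k * err
        P = (P - k @ phi.T @ P) / forgetting  # covariance update
        return theta, P, err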
E.G. McPherson
2007-01-01
Benefit-based tree valuation provides alternative estimates of the fair and reasonable value of trees while illustrating the relative contribution of different benefit types. This study compared estimates of tree value obtained using cost- and benefit-based approaches. The cost-based approach used the Council of Landscape and Tree Appraisers trunk formula method, and...
Emura, Takeshi; Konno, Yoshihiko; Michimae, Hirofumi
2015-07-01
Doubly truncated data consist of samples whose observed values fall between the left- and right-truncation limits. With such samples, the distribution function of interest is estimated using the nonparametric maximum likelihood estimator (NPMLE), which is obtained through a self-consistency algorithm. Owing to the complicated asymptotic distribution of the NPMLE, the bootstrap method has been suggested for statistical inference. This paper proposes a closed-form estimator for the asymptotic covariance function of the NPMLE, which is a computationally attractive alternative to bootstrapping. Furthermore, we develop various statistical inference procedures, such as confidence intervals, goodness-of-fit tests, and confidence bands, to demonstrate the usefulness of the proposed covariance estimator. Simulations are performed to compare the proposed method with both the bootstrap and jackknife methods. The methods are illustrated using a childhood cancer dataset.
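For orientation, the self-consistency iteration for the NPMLE under double truncation can be sketched as follows (an Efron-Petrosian-style fixed-point iteration); the closed-form covariance estimator proposed in the paper is a separate development and is not shown.

    import numpy as np

    def npmle_doubly_truncated(x, u, v, tol=1e-8, max_iter=5000):
        """Point masses f of the NPMLE of F from doubly truncated data.

        x : observed values; u, v : left and right truncation limits, with
        u_i <= x_i <= v_i for every observation.
        """
        x, u, v = (np.asarray(a, dtype=float) for a in (x, u, v))
        n = len(x)
        # J[i, j] = 1 if x_j lies inside the truncation region of observation i.
        J = (u[:, None] <= x[None, :]) & (x[None, :] <= v[:, None])
        f = np.full(n, 1.0 / n)
        for _ in range(max_iter):
            F = J @ f                         # F_i = estimated mass inside region i
            f_new = 1.0 / (J.T @ (1.0 / F))   # self-consistency equation
            f_new /= f_new.sum()
            if np.max(np.abs(f_new - f)) < tol:
                return f_new
            f = f_new
        return f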
Alternative methods of salt disposal at the seven salt sites for a nuclear waste repository
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1987-02-01
This study discusses the various alternative salt management techniques for the disposal of excess mined salt at seven potentially acceptable nuclear waste repository sites: Deaf Smith and Swisher Counties, Texas; Richton and Cypress Creek Domes, Mississippi; Vacherie Dome, Louisiana; and Davis and Lavender Canyons, Utah. Because the repository development involves the underground excavation of corridors and waste emplacement rooms, in either bedded or domed salt formations, excess salt will be mined and must be disposed of offsite. The salt disposal alternatives examined for all the sites include commercial use, ocean disposal, deep well injection, landfill disposal, and underground mine disposal. These alternatives (and other site-specific disposal methods) are reviewed, using estimated amounts of excavated, backfilled, and excess salt. Methods of transporting the excess salt are discussed, along with possible impacts of each disposal method and potential regulatory requirements. A preferred method of disposal is recommended for each potentially acceptable repository site. 14 refs., 5 tabs.
NASA Technical Reports Server (NTRS)
Abbey, Craig K.; Eckstein, Miguel P.
2002-01-01
We consider estimation and statistical hypothesis testing on classification images obtained from the two-alternative forced-choice experimental paradigm. We begin with a probabilistic model of task performance for simple forced-choice detection and discrimination tasks. Particular attention is paid to general linear filter models because these models lead to a direct interpretation of the classification image as an estimate of the filter weights. We then describe an estimation procedure for obtaining classification images from observer data. A number of statistical tests are presented for testing various hypotheses from classification images based on some more compact set of features derived from them. As an example of how the methods we describe can be used, we present a case study investigating detection of a Gaussian bump profile.
ASSESSING AND COMBINING RELIABILITY OF PROTEIN INTERACTION SOURCES
LEACH, SONIA; GABOW, AARON; HUNTER, LAWRENCE; GOLDBERG, DEBRA S.
2008-01-01
Integrating diverse sources of interaction information to create protein networks requires strategies sensitive to differences in accuracy and coverage of each source. Previous integration approaches calculate reliabilities of protein interaction information sources based on congruity to a designated ‘gold standard.’ In this paper, we provide a comparison of the two most popular existing approaches and propose a novel alternative for assessing reliabilities which does not require a gold standard. We identify a new method for combining the resultant reliabilities and compare it against an existing method. Further, we propose an extrinsic approach to evaluation of reliability estimates, considering their influence on the downstream tasks of inferring protein function and learning regulatory networks from expression data. Results using this evaluation method show 1) our method for reliability estimation is an attractive alternative to those requiring a gold standard and 2) the new method for combining reliabilities is less sensitive to noise in reliability assignments than the similar existing technique. PMID:17990508
Web-based surveys as an alternative to traditional mail methods.
Fleming, Christopher M; Bowden, Mark
2009-01-01
Environmental economists have long used surveys to gather information about people's preferences. A recent innovation in survey methodology has been the advent of web-based surveys. While the Internet appears to offer a promising alternative to conventional survey administration modes, concerns exist over potential sampling biases associated with web-based surveys and the effect these may have on valuation estimates. This paper compares results obtained from a travel cost questionnaire of visitors to Fraser Island, Australia, that was conducted using two alternative survey administration modes: conventional mail and web-based. It is found that response rates and the socio-demographic make-up of respondents to the two survey modes are not statistically different. Moreover, both modes yield similar consumer surplus estimates.
Gomis, Melissa Ines; Wang, Zhanyun; Scheringer, Martin; Cousins, Ian T
2015-02-01
Long-chain perfluoroalkyl carboxylic acids (PFCAs) and perfluoroalkane sulfonic acids (PFSAs) are persistent, bioaccumulative, and toxic contaminants that are globally present in the environment, wildlife and humans. Phase-out actions and use restrictions to reduce the environmental release of long-chain PFCAs, PFSAs and their precursors have been taken since 2000. In particular, long-chain poly- and perfluoroalkyl substances (PFASs) are being replaced with shorter-chain homologues or other fluorinated or non-fluorinated alternatives. A key question is: are these alternatives, particularly the structurally similar fluorinated alternatives, less hazardous to humans and the environment than the substances they replace? Several fluorinated alternatives including perfluoroether carboxylic acids (PFECAs) and perfluoroether sulfonic acids (PFESAs) have been recently identified. However, the scarcity of experimental data prevents hazard and risk assessments for these substances. In this study, we use state-of-the-art in silico tools to estimate key properties of these newly identified fluorinated alternatives. [i] COSMOtherm and SPARC are used to estimate physicochemical properties. The US EPA EPISuite software package is used to predict degradation half-lives in air, water and soil. [ii] In combination with estimated chemical properties, a fugacity-based multimedia mass-balance unit-world model - the OECD Overall Persistence (POV) and Long-Range Transport Potential (LRTP) Screening Tool - is used to assess the likely environmental fate of these alternatives. Even though the fluorinated alternatives contain some structural differences, their physicochemical properties are not significantly different from those of their predecessors. Furthermore, most of the alternatives are estimated to be similarly persistent and mobile in the environment as the long-chain PFASs. The models therefore predict that the fluorinated alternatives will become globally distributed in the environment similar to their predecessors. Although such in silico methods are coupled with uncertainties, this preliminary assessment provides enough cause for concern to warrant experimental work to better determine the properties of these fluorinated alternatives. Copyright © 2014 Elsevier B.V. All rights reserved.
Antti T. Kaartinen; Jeremy S. Fried; Paul A. Dunham
2002-01-01
Three Landsat TM-based GIS layers were evaluated as alternatives to conventional, photointerpretation-based stratification of FIA field plots. Estimates for timberland area, timber volume, and volume of down wood were calculated for California's North Coast Survey Unit of 2.5 million hectares. The estimates were compared on the basis of standard errors,...
Use of an OSSE to Evaluate Background Error Covariances Estimated by the 'NMC Method'
NASA Technical Reports Server (NTRS)
Errico, Ronald M.; Prive, Nikki C.; Gu, Wei
2014-01-01
The NMC method has proven utility for prescribing approximate background-error covariances required by variational data assimilation systems. Here, untuned NMC-method estimates are compared with explicitly determined error covariances produced within an OSSE context by exploiting availability of the true simulated states. Such a comparison provides insights into what kind of rescaling is required to render the NMC method estimates usable. It is shown that rescaling of variances and directional correlation lengths depends greatly on both pressure and latitude. In particular, some scaling coefficients appropriate in the Tropics are the reciprocal of those in the Extratropics. Also, the degree of dynamic balance is grossly overestimated by the NMC method. These results agree with previous examinations of the NMC method which used ensembles as an alternative for estimating background-error statistics.
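As a reminder of what the untuned NMC method does, the sketch below forms a background-error covariance proxy from pairs of forecasts of different lead times valid at the same instants; the rescaling and balance adjustments that the article shows to be latitude- and pressure-dependent are deliberately omitted.

    import numpy as np

    def nmc_covariance(forecast_48h, forecast_24h):
        """Crude NMC-method covariance estimate.

        forecast_48h, forecast_24h : arrays of shape (n_times, n_state) with
        forecasts valid at the same times but launched 24 h apart; the sample
        covariance of their differences serves as the background-error proxy.
        """
        d = forecast_48h - forecast_24h
        d = d - d.mean(axis=0)
        return d.T @ d / (d.shape[0] - 1)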
Abrahamson, Joseph P; Zelina, Joseph; Andac, M Gurhan; Vander Wal, Randy L
2016-11-01
The first-order approximation (FOA3) currently employed to estimate black carbon (BC) mass emissions underpredicts BC emissions due to inaccuracies in measuring the low smoke numbers (SNs) produced by modern high-bypass-ratio engines. The recently developed Formation and Oxidation (FOX) method removes the need for, and hence the uncertainty associated with, SNs, instead relying upon engine conditions to predict BC mass. Using the true engine operating conditions from proprietary engine cycle data, an improved FOX (ImFOX) predictive relation is developed. Still, the current methods are not optimized to estimate cruise emissions, nor do they account for the use of alternative jet fuels with reduced aromatic content. Here, improved correlations are developed to predict engine conditions and BC mass emissions at ground level and at cruise altitude. This new ImFOX is paired with a newly developed hydrogen relation to predict emissions from alternative fuels and fuel blends. The ImFOX is designed for rich-quench-lean style combustor technologies employed predominantly in the current aviation fleet.
Estimating discharge measurement uncertainty using the interpolated variance estimator
Cohn, T.; Kiang, J.; Mason, R.
2012-01-01
Methods for quantifying the uncertainty in discharge measurements typically identify various sources of uncertainty and then estimate the uncertainty from each of these sources by applying the results of empirical or laboratory studies. If actual measurement conditions are not consistent with those encountered in the empirical or laboratory studies, these methods may give poor estimates of discharge uncertainty. This paper presents an alternative method for estimating discharge measurement uncertainty that uses statistical techniques and at-site observations. This Interpolated Variance Estimator (IVE) estimates uncertainty based on the data collected during the streamflow measurement and therefore reflects the conditions encountered at the site. The IVE has the additional advantage of capturing all sources of random uncertainty in the velocity and depth measurements. It can be applied to velocity-area discharge measurements that use a velocity meter to measure point velocities at multiple vertical sections in a channel cross section.
Linear models: permutation methods
Cade, B.S.; Everitt, B.S.; Howell, D.C.
2005-01-01
Permutation tests (see Permutation Based Inference) for the linear model have applications in behavioral studies when traditional parametric assumptions about the error term in a linear model are not tenable. Improved validity of Type I error rates can be achieved with properly constructed permutation tests. Perhaps more importantly, increased statistical power, improved robustness to effects of outliers, and detection of alternative distributional differences can be achieved by coupling permutation inference with alternative linear model estimators. For example, it is well known that estimates of the mean in a linear model are extremely sensitive to even a single outlying value of the dependent variable compared to estimates of the median [7, 19]. Traditionally, linear modeling has focused on estimating changes in the center of distributions (means or medians). However, quantile regression allows distributional changes to be estimated in all or any selected part of a distribution of responses, providing a more complete statistical picture that has relevance to many biological questions [6]...
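A minimal permutation test for a regression slope, of the kind alluded to here, can be sketched as follows; permutation schemes for multiple regression (for example, permuting residuals) and for quantile-regression estimators are more involved.

    import numpy as np

    def permutation_test_slope(x, y, n_perm=9999, seed=None):
        """Two-sided permutation test for the slope in a simple linear model."""
        rng = np.random.default_rng(seed)
        x = np.asarray(x, dtype=float)
        y = np.asarray(y, dtype=float)
        xc = x - x.mean()

        def slope(values):
            return xc @ (values - values.mean()) / (xc @ xc)

        observed = slope(y)
        null = np.array([slope(rng.permutation(y)) for _ in range(n_perm)])
        # Count the observed statistic itself among the permutations.
        p_value = (np.sum(np.abs(null) >= abs(observed)) + 1) / (n_perm + 1)
        return observed, p_value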
Bult, Johannes H F; van Putten, Bram; Schifferstein, Hendrik N J; Roozen, Jacques P; Voragen, Alphons G J; Kroeze, Jan H A
2004-10-01
In continuous vigilance tasks, the number of coincident panel responses to stimuli provides an index of stimulus detectability. To determine whether this number is due to chance, panel noise levels have been approximated by the maximum coincidence level obtained in stimulus-free conditions. This study proposes an alternative method by which to assess noise levels, derived from queuing system theory (QST). Instead of critical coincidence levels, QST modeling estimates the duration of coinciding responses in the absence of stimuli. The proposed method has the advantage over previous approaches that it yields more reliable noise estimates and allows for statistical testing. The method was applied in an olfactory detection experiment using 16 panelists in stimulus-present and stimulus-free conditions. We propose that QST may be used as an alternative to signal detection theory for analyzing data from continuous vigilance tasks.
Outcome modelling strategies in epidemiology: traditional methods and basic alternatives
Greenland, Sander; Daniel, Rhian; Pearce, Neil
2016-01-01
Controlling for too many potential confounders can lead to or aggravate problems of data sparsity or multicollinearity, particularly when the number of covariates is large in relation to the study size. As a result, methods to reduce the number of modelled covariates are often deployed. We review several traditional modelling strategies, including stepwise regression and the ‘change-in-estimate’ (CIE) approach to deciding which potential confounders to include in an outcome-regression model for estimating effects of a targeted exposure. We discuss their shortcomings, and then provide some basic alternatives and refinements that do not require special macros or programming. Throughout, we assume the main goal is to derive the most accurate effect estimates obtainable from the data and commercial software. Allowing that most users must stay within standard software packages, this goal can be roughly approximated using basic methods to assess, and thereby minimize, mean squared error (MSE). PMID:27097747
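For concreteness, a bare-bones backward change-in-estimate (CIE) selection loop is sketched below, with ordinary least squares standing in for the outcome regression; the 10% threshold, the OLS fit, and all names are illustrative, and the article's refinements (for example, MSE-based assessment) are not reproduced.

    import numpy as np

    def ols_coefficients(X, y):
        """OLS coefficients with an intercept prepended; column 1 is the exposure."""
        X1 = np.column_stack([np.ones(len(y)), X])
        return np.linalg.lstsq(X1, y, rcond=None)[0]

    def change_in_estimate_selection(exposure, covariates, y, threshold=0.10):
        """Drop, one at a time, the covariate whose removal changes the exposure
        coefficient the least, while that relative change stays below `threshold`.
        Returns the indices of retained covariates."""
        keep = list(range(covariates.shape[1]))
        while keep:
            full_design = np.column_stack([exposure] + [covariates[:, j] for j in keep])
            full_coef = ols_coefficients(full_design, y)[1]
            changes = []
            for j in keep:
                cols = [covariates[:, k] for k in keep if k != j]
                design = np.column_stack([exposure] + cols) if cols else exposure.reshape(-1, 1)
                reduced_coef = ols_coefficients(design, y)[1]
                changes.append(abs(reduced_coef - full_coef) / abs(full_coef))
            j_best = int(np.argmin(changes))
            if changes[j_best] < threshold:
                keep.pop(j_best)
            else:
                break
        return keep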
Xiong, Naixue; Liu, Ryan Wen; Liang, Maohan; Wu, Di; Liu, Zhao; Wu, Huisi
2017-01-18
Single-image blind deblurring for imaging sensors in the Internet of Things (IoT) is a challenging ill-conditioned inverse problem, which requires regularization techniques to stabilize the image restoration process. The purpose is to recover the underlying blur kernel and latent sharp image from only one blurred image. Under many degraded imaging conditions, the blur kernel could be considered not only spatially sparse, but also piecewise smooth with the support of a continuous curve. By taking advantage of the hybrid sparse properties of the blur kernel, a hybrid regularization method is proposed in this paper to robustly and accurately estimate the blur kernel. The effectiveness of the proposed blur kernel estimation method is enhanced by incorporating both the L1-norm of kernel intensity and the squared L2-norm of the intensity derivative. Once the accurate estimation of the blur kernel is obtained, the original blind deblurring can be simplified to the direct deconvolution of blurred images. To guarantee robust non-blind deconvolution, a variational image restoration model is presented based on the L1-norm data-fidelity term and the total generalized variation (TGV) regularizer of second-order. All non-smooth optimization problems related to blur kernel estimation and non-blind deconvolution are effectively handled by using the alternating direction method of multipliers (ADMM)-based numerical methods. Comprehensive experiments on both synthetic and realistic datasets have been implemented to compare the proposed method with several state-of-the-art methods. The experimental comparisons have illustrated the satisfactory imaging performance of the proposed method in terms of quantitative and qualitative evaluations.
Optical remote sensing for forest area estimation
Randolph H. Wynne; Richard G. Oderwald; Gregory A. Reams; John A. Scrivani
2000-01-01
The air photo dot-count method is now widely and successfully used for estimating operational forest area in the USDA Forest Inventory and Analysis (FIA) program. Possible alternatives that would provide for more frequent updates, spectral change detection, and maps of forest area include the AVHRR calibration center technique and various Landsat TM classification...
On Some Confidence Intervals for Estimating the Mean of a Skewed Population
ERIC Educational Resources Information Center
Shi, W.; Kibria, B. M. Golam
2007-01-01
A number of methods are available in the literature to measure confidence intervals. Here, confidence intervals for estimating the population mean of a skewed distribution are considered. This note proposes two alternative confidence intervals, namely, Median t and Mad t, which are simple adjustments to the Student's t confidence interval. In…
ERIC Educational Resources Information Center
Diaz, Juan Jose; Handa, Sudhanshu
2006-01-01
Not all policy questions can be addressed by social experiments. Nonexperimental evaluation methods provide an alternative to experimental designs but their results depend on untestable assumptions. This paper presents evidence on the reliability of propensity score matching (PSM), which estimates treatment effects under the assumption of…
Toward a Value for Guided Rafting on Southern Rivers
J. Michael Bowker; Donald B.K. English; Jason A. Donovan
1996-01-01
This study examines per trip consumer surplus associated with guided whitewater rafting on two southern rivers. First, household recreation demand functions are estimated based on the individual travel cost model using truncated count data regression methods and alternative price specifications. Findings show mean per trip consumer surplus point estimates between $89...
ERIC Educational Resources Information Center
Perry, Thomas
2017-01-01
Value-added (VA) measures are currently the predominant approach used to compare the effectiveness of schools. Recent educational effectiveness research, however, has developed alternative approaches including the regression discontinuity (RD) design, which also allows estimation of absolute school effects. Initial research suggests RD is a viable…
Properties of Endogenous Post-Stratified Estimation using remote sensing data
John Tipton; Jean Opsomer; Gretchen Moisen
2013-01-01
Post-stratification is commonly used to improve the precision of survey estimates. In traditional poststratification methods, the stratification variable must be known at the population level. When suitable covariates are available at the population level, an alternative approach consists of fitting a model on the covariates, making predictions for the population and...
A Java-based web service is being developed within the US EPA’s Chemistry Dashboard to provide real time estimates of toxicity values and physical properties. WebTEST can generate toxicity predictions directly from a simple URL which includes the endpoint, QSAR method, and ...
Development and evaluation of the photoload sampling technique
Robert E. Keane; Laura J. Dickinson
2007-01-01
Wildland fire managers need better estimates of fuel loading so they can accurately predict potential fire behavior and effects of alternative fuel and ecosystem restoration treatments. This report presents the development and evaluation of a new fuel sampling method, called the photoload sampling technique, to quickly and accurately estimate loadings for six common...
Ribeiro, T; Depres, S; Couteau, G; Pauss, A
2003-01-01
An alternative method for the estimation of nitrate and nitrogen forms in vegetables is proposed. Nitrate can be directly estimated by UV-spectrophotometry after an extraction step with water. The other nitrogen compounds are photo-oxidized into nitrate, and then estimated by UV-spectrophotometry. An oxidizing solution of sodium persulfate and an Hg-UV lamp are used. Preliminary assays were performed with vegetables such as lettuce, spinach, artichokes, green peas, broccoli, carrots, and watercress; acceptable correlations between expected and experimental nitrate amounts were obtained, although the detection limit still needs to be lowered. Optimization of the method is under way.
Decoupling Intensity Radiated by the Emitter in Distance Estimation from Camera to IR Emitter
Cano-García, Angel E.; Galilea, José Luis Lázaro; Fernández, Pedro; Infante, Arturo Luis; Pompa-Chacón, Yamilet; Vázquez, Carlos Andrés Luna
2013-01-01
Various models using radiometric approach have been proposed to solve the problem of estimating the distance between a camera and an infrared emitter diode (IRED). They depend directly on the radiant intensity of the emitter, set by the IRED bias current. As is known, this current presents a drift with temperature, which will be transferred to the distance estimation method. This paper proposes an alternative approach to remove temperature drift in the distance estimation method by eliminating the dependence on radiant intensity. The main aim was to use the relative accumulated energy together with other defined models, such as the zeroth-frequency component of the FFT of the IRED image and the standard deviation of pixel gray level intensities in the region of interest containing the IRED image. By using the abovementioned models, an expression free of IRED radiant intensity was obtained. Furthermore, the final model permitted simultaneous estimation of the distance between the IRED and the camera and the IRED orientation angle. The alternative presented in this paper gave a 3% maximum relative error over a range of distances up to 3 m. PMID:23727954
An empirical Bayes approach to analyzing recurring animal surveys
Johnson, D.H.
1989-01-01
Recurring estimates of the size of animal populations are often required by biologists or wildlife managers. Because of cost or other constraints, estimates frequently lack the accuracy desired but cannot readily be improved by additional sampling. This report proposes a statistical method employing empirical Bayes (EB) estimators as alternatives to those customarily used to estimate population size, and evaluates them by a subsampling experiment on waterfowl surveys. EB estimates, especially a simple limited-translation version, were more accurate and provided shorter confidence intervals with greater coverage probabilities than customary estimates.
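A rough sketch of an empirical Bayes shrinkage estimator with a limited-translation safeguard is given below; the variance-component estimate and the one-standard-error cap are illustrative choices, not the report's exact estimator.

    import numpy as np

    def eb_limited_translation(direct, se, max_shift_se=1.0):
        """Shrink direct survey estimates toward their overall mean, but never
        move any estimate by more than `max_shift_se` standard errors."""
        y = np.asarray(direct, dtype=float)
        se = np.asarray(se, dtype=float)
        grand_mean = y.mean()
        # Crude method-of-moments estimate of the between-unit variance.
        tau2 = max(y.var(ddof=1) - np.mean(se**2), 0.0)
        weight = tau2 / (tau2 + se**2)              # weight on the direct estimate
        eb = weight * y + (1.0 - weight) * grand_mean
        shift = np.clip(eb - y, -max_shift_se * se, max_shift_se * se)
        return y + shift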
Doubly robust nonparametric inference on the average treatment effect.
Benkeser, D; Carone, M; Laan, M J Van Der; Gilbert, P B
2017-12-01
Doubly robust estimators are widely used to draw inference about the average effect of a treatment. Such estimators are consistent for the effect of interest if either one of two nuisance parameters is consistently estimated. However, if flexible, data-adaptive estimators of these nuisance parameters are used, double robustness does not readily extend to inference. We present a general theoretical study of the behaviour of doubly robust estimators of an average treatment effect when one of the nuisance parameters is inconsistently estimated. We contrast different methods for constructing such estimators and investigate the extent to which they may be modified to also allow doubly robust inference. We find that while targeted minimum loss-based estimation can be used to solve this problem very naturally, common alternative frameworks appear to be inappropriate for this purpose. We provide a theoretical study and a numerical evaluation of the alternatives considered. Our simulations highlight the need for and usefulness of these approaches in practice, while our theoretical developments have broad implications for the construction of estimators that permit doubly robust inference in other problems.
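For reference, the plug-in augmented inverse-probability-weighted (AIPW) form of a doubly robust average-treatment-effect estimator is sketched below; the paper's targeted minimum loss-based approach modifies the nuisance estimates rather than using this simple plug-in, so the sketch is only a baseline.

    import numpy as np

    def aipw_ate(y, a, propensity, mu1, mu0):
        """Doubly robust (AIPW) estimate of the average treatment effect.

        y          : observed outcomes
        a          : binary treatment indicator (0/1)
        propensity : estimated P(A = 1 | X)
        mu1, mu0   : estimated outcome regressions E[Y | A = 1, X], E[Y | A = 0, X]
        """
        y, a, e, m1, m0 = (np.asarray(v, dtype=float) for v in (y, a, propensity, mu1, mu0))
        psi1 = m1 + a * (y - m1) / e
        psi0 = m0 + (1.0 - a) * (y - m0) / (1.0 - e)
        return float(np.mean(psi1 - psi0))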
Stan T. Lebow; Patricia K. Lebow; Kolby C. Hirth
2017-01-01
Current standardized methods are not well-suited for estimating in-service preservative leaching from treated wood products. This study compared several alternative leaching methods to a commonly used standard method, and to leaching under natural exposure conditions. Small blocks or lumber specimens were pressure treated with a wood preservative containing borax and...
NASA Astrophysics Data System (ADS)
Vachálek, Ján
2011-12-01
The paper compares the abilities of forgetting methods to track time-varying parameters of two different simulated models with different types of excitation. The quantities monitored in the simulations are the integrated Euclidean norm of the deviation of the parameter estimates from their true values and the number of prediction errors falling within a selected band. As supplementary information, we observe the eigenvalues of the covariance matrix. In the paper, we used a modified Regularized Exponential Forgetting with Alternative Covariance Matrix (REFACM) method along with Directional Forgetting (DF) and three standard regularized methods.
Bending fatigue tests on SiC-Al tapes under alternating stress at room temperature
NASA Technical Reports Server (NTRS)
Herzog, J. A.
1981-01-01
The development of a testing method for fatigue tests on SiC-Al tapes containing a small number of SiC filaments under alternating stress is reported. The resulting fatigue strength curves for this composite are discussed. They permit an estimate of its behavior under continuous stress and in combination with various other matrices, especially metal matrices.
Stewart, Anne M.; Callegary, James B.; Smith, Christopher F.; Gupta, Hoshin V.; Leenhouts, James M.; Fritzinger, Robert A.
2012-01-01
The continuous slope-area (CSA) method is an innovative gaging method for indirect computation of complete-event discharge hydrographs that can be applied when direct measurement methods are unsafe, impractical, or impossible to apply. This paper reports on use of the method to produce event-specific discharge hydrographs in a network of sand-bedded ephemeral stream channels in southeast Arizona, USA, for water year 2008. The method provided satisfactory discharge estimates for flows that span channel banks, and for moderate to large flows, with about 10% and 16% uncertainty for total flow volume and peak flow, respectively, as compared to results obtained with an alternate method. Our results also suggest that the CSA method may be useful for estimating runoff of small flows, and during recessions, but with increased uncertainty.
Incorporating ITS into transportation improvement planning : the Seattle Case Study using PRUEVIIN
DOT National Transportation Integrated Search
1998-01-01
This project explored methods to analyze ITS strategies within Major Investment Study (MIS) studies and to apply them in a case study. The case study developed methods to define alternatives, and to estimate impacts and costs at the level required fo...
Alternatives for Measuring the Unexplained Wage Gap.
ERIC Educational Resources Information Center
Toutkoushian, Robert K.; Hoffman, Emily P.
2002-01-01
Reviews several different methods that analysts can use to measure gender- and race-based pay differences for academic employees, and how they are interrelated. Discusses the advantages and disadvantages of each method, and shows how they can give rise to different estimates of pay disparity. (EV)
Generalized Structured Component Analysis
ERIC Educational Resources Information Center
Hwang, Heungsun; Takane, Yoshio
2004-01-01
We propose an alternative method to partial least squares for path analysis with components, called generalized structured component analysis. The proposed method replaces factors by exact linear combinations of observed variables. It employs a well-defined least squares criterion to estimate model parameters. As a result, the proposed method…
Robust Ambiguity Estimation for an Automated Analysis of the Intensive Sessions
NASA Astrophysics Data System (ADS)
Kareinen, Niko; Hobiger, Thomas; Haas, Rüdiger
2016-12-01
Very Long Baseline Interferometry (VLBI) is a unique space-geodetic technique that can directly determine the Earth's phase of rotation, namely UT1. The daily estimates of the difference between UT1 and Coordinated Universal Time (UTC) are computed from one-hour-long VLBI Intensive sessions. These sessions are essential for providing timely UT1 estimates for satellite navigation systems. To produce timely UT1 estimates, efforts have been made to completely automate the analysis of VLBI Intensive sessions. This requires automated processing of X- and S-band group delays. These data often contain an unknown number of integer ambiguities in the observed group delays. In an automated analysis with the c5++ software, the standard approach to resolving the ambiguities is to perform a simplified parameter estimation using a least-squares adjustment (L2-norm minimization). We implement the robust L1-norm as an alternative estimation method in c5++. The implemented method is used to automatically estimate the ambiguities in VLBI Intensive sessions for the Kokee-Wettzell baseline. The results are compared to an analysis setup where the ambiguity estimation is computed using the L2-norm. Additionally, we investigate three alternative weighting strategies for the ambiguity estimation. The results show that in automated analysis the L1-norm resolves ambiguities better than the L2-norm. The use of the L1-norm leads to a significantly higher number of good-quality UT1-UTC estimates with each of the three weighting strategies.
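The L1-norm (least absolute deviations) adjustment can be approximated by iteratively reweighted least squares, as in the generic sketch below; the actual c5++ implementation, the handling of integer ambiguities, and the three weighting strategies are not reproduced here.

    import numpy as np

    def irls_l1(A, b, n_iter=50, eps=1e-6):
        """Approximate the L1-norm solution of A x ~= b by iteratively
        reweighted least squares (weights ~ 1/|residual|)."""
        x = np.linalg.lstsq(A, b, rcond=None)[0]          # L2 starting point
        for _ in range(n_iter):
            r = b - A @ x
            w = 1.0 / np.maximum(np.abs(r), eps)
            x = np.linalg.solve(A.T @ (A * w[:, None]), A.T @ (w * b))
        return x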
Cross-correlation of point series using a new method
NASA Technical Reports Server (NTRS)
Strothers, Richard B.
1994-01-01
Traditional methods of cross-correlation of two time series do not apply to point time series. Here, a new method, devised specifically for point series, utilizes a correlation measure that is based on the rms difference (or, alternatively, the median absolute difference) between nearest neighbors in overlapped segments of the two series. Error estimates for the observed locations of the points, as well as a systematic shift of one series with respect to the other to accommodate a constant, but unknown, lead or lag, are easily incorporated into the analysis using Monte Carlo techniques. A methodological restriction adopted here is that one series be treated as a template series against which the other, called the target series, is cross-correlated. To estimate a significance level for the correlation measure, the adopted alternative (null) hypothesis is that the target series arises from a homogeneous Poisson process. The new method is applied to cross-correlating the times of the greatest geomagnetic storms with the times of maximum in the undecennial solar activity cycle.
NASA Astrophysics Data System (ADS)
Wakeley, Heather L.
Alternative fuels could replace a significant portion of the 140 billion gallons of annual US gasoline use. Considerable attention is being paid to processes and technologies for producing alternative fuels, but an enormous investment in new infrastructure will be needed to have substantial impact on the demand for petroleum. The economics of production, distribution, and use, along with environmental impacts of these fuels, will determine the success or failure of a transition away from US petroleum dependence. This dissertation evaluates infrastructure requirements for ethanol and hydrogen as alternative fuels. It begins with an economic case study for ethanol and hydrogen in Iowa. A large-scale linear optimization model is developed to estimate average transportation distances and costs for nationwide ethanol production and distribution systems. Environmental impacts of transportation in the ethanol life cycle are calculated using the Economic Input-Output Life Cycle Assessment (EIO-LCA) model. An EIO-LCA Hybrid method is developed to evaluate impacts of future fuel production technologies. This method is used to estimate emissions for hydrogen production and distribution pathways. Results from the ethanol analyses indicate that the ethanol transportation cost component is significant and is the most variable. Costs for ethanol sold in the Midwest, near primary production centers, are estimated to be comparable to or lower than gasoline costs. Along with a wide range of transportation costs, environmental impacts for ethanol range over three orders of magnitude, depending on the transport required. As a result, intensive ethanol use should be encouraged near ethanol production areas. Fossil fuels are likely to remain the primary feedstock sources for hydrogen production in the near- and mid-term. Costs and environmental impacts of hydrogen produced from natural gas and transported by pipeline are comparable to gasoline. However, capital costs are prohibitive and a significant increase in natural gas demand will likely raise both prices and import quantities. There is an added challenge of developing hydrogen fuel cell vehicles at costs comparable to conventional vehicles. Two models developed in this thesis have proven useful for evaluating alternative fuels. The linear programming models provide representative estimates of distribution distances for regional fuel use, and thus can be used to estimate costs and environmental impacts. The EIO-LCA Hybrid method is useful for estimating emissions from hydrogen production. This model includes upstream impacts in the LCA, and has the benefit of a lower time and data requirements than a process-based LCA.
NASA Astrophysics Data System (ADS)
Lin, Y.; Bajcsy, P.; Valocchi, A. J.; Kim, C.; Wang, J.
2007-12-01
Natural systems are complex, thus extensive data are needed for their characterization. However, data acquisition is expensive; consequently we develop models using sparse, uncertain information. When all uncertainties in the system are considered, the number of alternative conceptual models is large. Traditionally, the development of a conceptual model has relied on subjective professional judgment. Good judgment is based on experience in coordinating and understanding auxiliary information which is correlated to the model but difficult to be quantified into the mathematical model. For example, groundwater recharge and discharge (R&D) processes are known to relate to multiple information sources such as soil type, river and lake location, irrigation patterns and land use. Although hydrologists have been trying to understand and model the interaction between each of these information sources and R&D processes, it is extremely difficult to quantify their correlations using a universal approach due to the complexity of the processes, the spatiotemporal distribution and uncertainty. There is currently no single method capable of estimating R&D rates and patterns for all practical applications. Chamberlin (1890) recommended use of "multiple working hypotheses" (alternative conceptual models) for rapid advancement in understanding of applied and theoretical problems. Therefore, cross analyzing R&D rates and patterns from various estimation methods and related field information will likely be superior to using only a single estimation method. We have developed the Pattern Recognition Utility (PRU), to help GIS users recognize spatial patterns from noisy 2D image. This GIS plug-in utility has been applied to help hydrogeologists establish alternative R&D conceptual models in a more efficient way than conventional methods. The PRU uses numerical methods and image processing algorithms to estimate and visualize shallow R&D patterns and rates. It can provide a fast initial estimate prior to planning labor intensive and time consuming field R&D measurements. Furthermore, the Spatial Pattern 2 Learn (SP2L) was developed to cross analyze results from the PRU with ancillary field information, such as land coverage, soil type, topographic maps and previous estimates. The learning process of SP2L cross examines each initially recognized R&D pattern with the ancillary spatial dataset, and then calculates a quantifiable reliability index for each R&D map using a supervised machine learning technique called decision tree. This JAVA based software package is capable of generating alternative R&D maps if the user decides to apply certain conditions recognized by the learning process. The reliability indices from SP2L will improve the traditionally subjective approach to initiating conceptual models by providing objectively quantifiable conceptual bases for further probabilistic and uncertainty analyses. Both the PRU and SP2L have been designed to be user-friendly and universal utilities for pattern recognition and learning to improve model predictions from sparse measurements by computer-assisted integration of spatially dense geospatial image data and machine learning of model dependencies.
Comparison of estimators of standard deviation for hydrologic time series
Tasker, Gary D.; Gilroy, Edward J.
1982-01-01
Unbiasing factors as a function of serial correlation ρ and sample size n for the sample standard deviation of a lag-one autoregressive model were generated by random number simulation. Monte Carlo experiments were used to compare the performance of several alternative methods for estimating the standard deviation σ of a lag-one autoregressive model in terms of bias, root mean square error, probability of underestimation, and expected opportunity design loss. Three methods provided estimates of σ which were much less biased but had greater mean square errors than the usual estimate of σ: s = [∑(xᵢ − x̄)²/(n − 1)]^(1/2). The three methods may be briefly characterized as (1) a method using a maximum likelihood estimate of the unbiasing factor, (2) a method using an empirical Bayes estimate of the unbiasing factor, and (3) a robust nonparametric estimate of σ suggested by Quenouille. Because s tends to underestimate σ, its use as an estimate of a model parameter results in a tendency to underdesign. If underdesign losses are considered more serious than overdesign losses, then the choice of one of the less biased methods may be wise.
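The simulation idea behind such unbiasing factors can be sketched as follows: generate many lag-one autoregressive series with unit marginal variance and record how much s underestimates σ on average. The simulation count and parameter names are illustrative.

    import numpy as np

    def ar1_sd_unbiasing_factor(rho, n, n_sim=20000, seed=None):
        """Monte Carlo estimate of E[s]/sigma for an AR(1) series of length n.

        Dividing the sample standard deviation s by this factor gives a less
        biased estimate of sigma (assumes |rho| < 1).
        """
        rng = np.random.default_rng(seed)
        innovation_sd = np.sqrt(1.0 - rho**2)   # keeps the marginal variance at 1
        x = np.empty((n_sim, n))
        x[:, 0] = rng.normal(0.0, 1.0, n_sim)
        for t in range(1, n):
            x[:, t] = rho * x[:, t - 1] + rng.normal(0.0, innovation_sd, n_sim)
        s = x.std(axis=1, ddof=1)
        return float(s.mean())                  # equals E[s]/sigma since sigma = 1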
Carlos A. Gonzalez-Benecke; Eric J. Jokela; Wendell P. Cropper; Rosvel Bracho; Daniel J. Leduc
2014-01-01
The forest simulation model, 3-PG, has been widely applied as a useful tool for predicting growth of forest species in many countries. The model has the capability to estimate the effects of management, climate and site characteristics on many stand attributes using easily available data. Currently, there is an increasing interest in estimating biomass and assessing...
NASA Astrophysics Data System (ADS)
Landeras, Gorka; Bekoe, Emmanuel; Ampofo, Joseph; Logah, Frederick; Diop, Mbaye; Cisse, Madiama; Shiri, Jalal
2018-05-01
Accurate estimation of reference evapotranspiration (ET0) is essential for the computation of crop water requirements, irrigation scheduling, and water resources management. In this context, having a battery of locally calibrated alternative ET0 estimation methods is of great interest for any irrigation advisory service. The development of irrigation advisory services will be a major breakthrough for West African agriculture. For many West African countries, the high number of meteorological inputs required by the Penman-Monteith equation has been identified as a constraint. The present paper investigates, for the first time in Ghana, the estimation ability of artificial intelligence-based models (Artificial Neural Networks (ANNs) and Gene Expression Programming (GEPs)) and ancillary/external approaches for modeling reference evapotranspiration (ET0) using limited weather data. According to the results of this study, GEPs have emerged as a very interesting alternative for ET0 estimation at all the Ghanaian locations evaluated in this study under different scenarios of meteorological data availability. The adoption of ancillary/external approaches has also been successful, particularly at the southern locations. The interesting results obtained in this study using GEPs and some ancillary approaches could serve as a reference for future studies on ET0 estimation in West Africa.
Modeling Of In-Vehicle Human Exposure to Ambient Fine Particulate Matter
Liu, Xiaozhen; Frey, H. Christopher
2012-01-01
A method for estimating in-vehicle PM2.5 exposure as part of a scenario-based population simulation model is developed and assessed. In existing models, such as the Stochastic Exposure and Dose Simulation model for Particulate Matter (SHEDS-PM), in-vehicle exposure is estimated using linear regression based on area-wide ambient PM2.5 concentration. An alternative modeling approach is explored based on estimation of near-road PM2.5 concentration and an in-vehicle mass balance. Near-road PM2.5 concentration is estimated using a dispersion model and fixed site monitor (FSM) data. In-vehicle concentration is estimated based on air exchange rate and filter efficiency. In-vehicle concentration varies with road type, traffic flow, wind speed, stability class, and ventilation. Average in-vehicle exposure is estimated to contribute 10 to 20 percent of average daily exposure. The contribution of in-vehicle exposure to total daily exposure can be higher for some individuals. Recommendations are made for updating exposure models and implementation of the alternative approach. PMID:23101000
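A minimal steady-state mass-balance sketch in the spirit of the in-vehicle approach described above is given below; the air-exchange, filter-efficiency, and deposition values are placeholders, not the study's parameters.

```python
# Steady-state mass-balance sketch for in-vehicle PM2.5 (illustrative values).
def in_vehicle_pm25(c_near_road, air_exchange_per_hr, filter_efficiency,
                    deposition_per_hr=0.0):
    """Steady-state in-cabin concentration (ug/m3) from near-road PM2.5.

    C_in = C_out * a * (1 - eta) / (a + k), where a is the air exchange
    rate (1/h), eta the filter efficiency, and k the deposition rate (1/h).
    """
    a, eta, k = air_exchange_per_hr, filter_efficiency, deposition_per_hr
    return c_near_road * a * (1.0 - eta) / (a + k)

# Example: 35 ug/m3 near-road concentration with outside-air ventilation.
print(in_vehicle_pm25(35.0, air_exchange_per_hr=5.0, filter_efficiency=0.4,
                      deposition_per_hr=0.5))
```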
A robust bayesian estimate of the concordance correlation coefficient.
Feng, Dai; Baumgartner, Richard; Svetnik, Vladimir
2015-01-01
A need for assessment of agreement arises in many situations including statistical biomarker qualification or assay or method validation. Concordance correlation coefficient (CCC) is one of the most popular scaled indices reported in evaluation of agreement. Robust methods for CCC estimation currently present an important statistical challenge. Here, we propose a novel Bayesian method of robust estimation of CCC based on multivariate Student's t-distribution and compare it with its alternatives. Furthermore, we extend the method to practically relevant settings, enabling incorporation of confounding covariates and replications. The superiority of the new approach is demonstrated using simulation as well as real datasets from biomarker application in electroencephalography (EEG). This biomarker is relevant in neuroscience for development of treatments for insomnia.
Estimation of suspended-sediment rating curves and mean suspended-sediment loads
Crawford, Charles G.
1991-01-01
A simulation study was done to evaluate: (1) the accuracy and precision of parameter estimates for the bias-corrected, transformed-linear and non-linear models obtained by the method of least squares; (2) the accuracy of mean suspended-sediment loads calculated by the flow-duration, rating-curve method using model parameters obtained by the alternative methods. Parameter estimates obtained by least squares for the bias-corrected, transformed-linear model were considerably more precise than those obtained for the non-linear or weighted non-linear model. The accuracy of parameter estimates obtained for the bias-corrected, transformed-linear and weighted non-linear model was similar and was much greater than the accuracy obtained by non-linear least squares. The improved parameter estimates obtained by the bias-corrected, transformed-linear or weighted non-linear model yield estimates of mean suspended-sediment load calculated by the flow-duration, rating-curve method that are more accurate and precise than those obtained for the non-linear model.
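A minimal sketch of fitting a log-transformed rating curve and applying a bias correction when back-transforming is shown below; the exp(s²/2) correction used here is one common choice and is not necessarily the exact correction evaluated in the study, and the data are synthetic.

```python
# Sketch: bias-corrected, transformed-linear sediment rating curve
# C = a * Q**b fitted as ln(C) = ln(a) + b*ln(Q) (illustrative data).
import numpy as np

q = np.array([5., 12., 30., 55., 90., 140., 220., 400.])        # discharge
c = np.array([20., 55., 160., 300., 500., 900., 1500., 3200.])  # sediment conc.

X = np.column_stack([np.ones_like(q), np.log(q)])
beta, *_ = np.linalg.lstsq(X, np.log(c), rcond=None)
resid = np.log(c) - X @ beta
s2 = resid.var(ddof=2)

def predict_conc(q_new):
    # Back-transform with a log-normal bias correction exp(s2 / 2).
    return np.exp(beta[0] + beta[1] * np.log(q_new)) * np.exp(s2 / 2.0)

print(predict_conc(np.array([10., 100., 300.])))
```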
New methods of testing nonlinear hypothesis using iterative NLLS estimator
NASA Astrophysics Data System (ADS)
Mahaboob, B.; Venkateswarlu, B.; Mokeshrayalu, G.; Balasiddamuni, P.
2017-11-01
This research paper discusses the method of testing nonlinear hypotheses using the iterative Nonlinear Least Squares (NLLS) estimator. Takeshi Amemiya [1] explained this method. However, in the present research paper, a modified Wald test statistic due to Engle, Robert [6] is proposed to test the nonlinear hypothesis using the iterative NLLS estimator. An alternative method for testing nonlinear hypotheses using the iterative NLLS estimator based on nonlinear studentized residuals has been proposed. In this research article an innovative method of testing nonlinear hypotheses using the iterative restricted NLLS estimator is derived. Pesaran and Deaton [10] explained the methods of testing nonlinear hypotheses. This paper uses asymptotic properties of the nonlinear least squares estimator proposed by Jenrich [8]. The main purpose of this paper is to provide very innovative methods of testing nonlinear hypotheses using the iterative NLLS estimator, the iterative NLLS estimator based on nonlinear studentized residuals, and the iterative restricted NLLS estimator. Eakambaram et al. [12] discussed least absolute deviation estimation versus the nonlinear regression model with heteroscedastic errors and also studied the problem of heteroscedasticity with reference to nonlinear regression models with suitable illustration. William Grene [13] examined the interaction effect in nonlinear models discussed by Ai and Norton [14] and suggested ways to examine the effects that do not involve statistical testing. Peter [15] provided guidelines for identifying composite hypotheses and addressing the probability of false rejection for multiple hypotheses.
Abdominal fat volume estimation by stereology on CT: a comparison with manual planimetry.
Manios, G E; Mazonakis, M; Voulgaris, C; Karantanas, A; Damilakis, J
2016-03-01
To deploy and evaluate a stereological point-counting technique on abdominal CT for the estimation of visceral (VAF) and subcutaneous abdominal fat (SAF) volumes. Stereological volume estimations based on point counting and systematic sampling were performed on images from 14 consecutive patients who had undergone abdominal CT. For the optimization of the method, five sampling intensities in combination with 100 and 200 points were tested. The optimum stereological measurements were compared with VAF and SAF volumes derived by the standard technique of manual planimetry on the same scans. Optimization analysis showed that the selection of 200 points along with the sampling intensity 1/8 provided efficient volume estimations in less than 4 min for VAF and SAF together. The optimized stereology showed strong correlation with planimetry (VAF: r = 0.98; SAF: r = 0.98). No statistical differences were found between the two methods (VAF: P = 0.81; SAF: P = 0.83). The 95% limits of agreement were also acceptable (VAF: -16.5%, 16.1%; SAF: -10.8%, 10.7%) and the repeatability of stereology was good (VAF: CV = 4.5%, SAF: CV = 3.2%). Stereology may be successfully applied to CT images for the efficient estimation of abdominal fat volume and may constitute a good alternative to the conventional planimetric technique. Abdominal obesity is associated with increased risk of disease and mortality. Stereology may quantify visceral and subcutaneous abdominal fat accurately and consistently. The application of stereology to estimating abdominal volume fat reduces processing time. Stereology is an efficient alternative method for estimating abdominal fat volume.
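A minimal sketch of the stereological (point-counting) volume estimate, volume = slice spacing × area per grid point × total points counted, is given below; the slice spacing, grid spacing, and counts are illustrative, not patient data from the study.

```python
# Point-counting (Cavalieri-style) volume sketch with illustrative numbers.
def stereology_volume(points_per_slice, slice_spacing_mm, grid_spacing_mm):
    area_per_point_mm2 = grid_spacing_mm ** 2
    total_points = sum(points_per_slice)
    volume_mm3 = slice_spacing_mm * area_per_point_mm2 * total_points
    return volume_mm3 / 1000.0  # convert mm^3 to cm^3 (mL)

# Hypothetical counts of grid points falling on visceral fat in 8 CT slices.
counts = [14, 22, 31, 35, 33, 27, 18, 9]
vol = stereology_volume(counts, slice_spacing_mm=40, grid_spacing_mm=10)
print(f"VAF volume is about {vol:.1f} mL")
```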
Load estimator (LOADEST): a FORTRAN program for estimating constituent loads in streams and rivers
Runkel, Robert L.; Crawford, Charles G.; Cohn, Timothy A.
2004-01-01
LOAD ESTimator (LOADEST) is a FORTRAN program for estimating constituent loads in streams and rivers. Given a time series of streamflow, additional data variables, and constituent concentration, LOADEST assists the user in developing a regression model for the estimation of constituent load (calibration). Explanatory variables within the regression model include various functions of streamflow, decimal time, and additional user-specified data variables. The formulated regression model then is used to estimate loads over a user-specified time interval (estimation). Mean load estimates, standard errors, and 95 percent confidence intervals are developed on a monthly and/or seasonal basis. The calibration and estimation procedures within LOADEST are based on three statistical estimation methods. The first two methods, Adjusted Maximum Likelihood Estimation (AMLE) and Maximum Likelihood Estimation (MLE), are appropriate when the calibration model errors (residuals) are normally distributed. Of the two, AMLE is the method of choice when the calibration data set (time series of streamflow, additional data variables, and concentration) contains censored data. The third method, Least Absolute Deviation (LAD), is an alternative to maximum likelihood estimation when the residuals are not normally distributed. LOADEST output includes diagnostic tests and warnings to assist the user in determining the appropriate estimation method and in interpreting the estimated loads. This report describes the development and application of LOADEST. Sections of the report describe estimation theory, input/output specifications, sample applications, and installation instructions.
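The regression idea behind this kind of load estimation (load regressed on functions of streamflow and decimal time, then back-transformed) can be sketched as below; this is an ordinary least-squares illustration with synthetic data, not the AMLE, MLE, or LAD estimators LOADEST actually implements.

```python
# Sketch of a rating-curve style load regression in the spirit of LOADEST:
# ln(L) = b0 + b1*ln(Q) + b2*sin(2*pi*T) + b3*cos(2*pi*T)  (synthetic data).
import numpy as np

rng = np.random.default_rng(1)
n = 200
t = np.linspace(2004.0, 2006.0, n)                  # decimal time
q = np.exp(rng.normal(3.0, 0.8, n))                 # streamflow
ln_load = (0.5 + 1.2 * np.log(q) + 0.3 * np.sin(2 * np.pi * t)
           + rng.normal(0, 0.3, n))

X = np.column_stack([np.ones(n), np.log(q),
                     np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
beta, *_ = np.linalg.lstsq(X, ln_load, rcond=None)
resid = ln_load - X @ beta
s2 = resid.var(ddof=X.shape[1])

# Estimate loads with a simple log-normal bias correction.
load_hat = np.exp(X @ beta + s2 / 2.0)
print(f"mean estimated load: {load_hat.mean():.1f}")
```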
Estimating canopy cover from standard forest inventory measurements in western Oregon
Anne McIntosh; Andrew Gray; Steven. Garman
2012-01-01
Reliable measures of canopy cover are important in the management of public and private forests. However, direct sampling of canopy cover is both labor- and time-intensive. More efficient methods for estimating percent canopy cover could be empirically derived relationships between more readily measured stand attributes and canopy cover or, alternatively, the use of...
Estimation of the relative influence of climate change, compared to other human activities, on dynamics of Pacific salmon (Oncorhynchus spp.) populations can help management agencies take appropriate management actions. We used empirically based simulation modelling of 48 sockeye...
Perpendicular distance sampling: an alternative method for sampling downed coarse woody debris
Michael S. Williams; Jeffrey H. Gove
2003-01-01
Coarse woody debris (CWD) plays an important role in many forest ecosystem processes. In recent years, a number of new methods have been proposed to sample CWD. These methods select individual logs into the sample using some form of unequal probability sampling. One concern with most of these methods is the difficulty in estimating the volume of each log. A new method...
Su, Xiaogang; Peña, Annette T; Liu, Lei; Levine, Richard A
2018-04-29
Assessing heterogeneous treatment effects is a growing interest in advancing precision medicine. Individualized treatment effects (ITEs) play a critical role in such an endeavor. Concerning experimental data collected from randomized trials, we put forward a method, termed random forests of interaction trees (RFIT), for estimating ITE on the basis of interaction trees. To this end, we propose a smooth sigmoid surrogate method, as an alternative to greedy search, to speed up tree construction. The RFIT outperforms the "separate regression" approach in estimating ITE. Furthermore, standard errors for the estimated ITE via RFIT are obtained with the infinitesimal jackknife method. We assess and illustrate the use of RFIT via both simulation and the analysis of data from an acupuncture headache trial. Copyright © 2018 John Wiley & Sons, Ltd.
Incorporation of MRI-AIF Information For Improved Kinetic Modelling of Dynamic PET Data
NASA Astrophysics Data System (ADS)
Sari, Hasan; Erlandsson, Kjell; Thielemans, Kris; Atkinson, David; Ourselin, Sebastien; Arridge, Simon; Hutton, Brian F.
2015-06-01
In the analysis of dynamic PET data, compartmental kinetic analysis methods require an accurate knowledge of the arterial input function (AIF). Although arterial blood sampling is the gold standard of the methods used to measure the AIF, it is usually not preferred as it is an invasive method. An alternative method is the simultaneous estimation method (SIME), where physiological parameters and the AIF are estimated together, using information from different anatomical regions. Due to the large number of parameters to estimate in its optimisation, SIME is a computationally complex method and may sometimes fail to give accurate estimates. In this work, we try to improve SIME by utilising an input function derived from a simultaneously obtained DSC-MRI scan. Under the assumption that the true value of one of the six parameters of the PET-AIF model can be derived from an MRI-AIF, the method is tested using simulated data. The results indicate that SIME can yield more robust results when the MRI information is included, with a significant reduction in absolute bias of Ki estimates.
Shrinkage regression-based methods for microarray missing value imputation.
Wang, Hsiuying; Chiu, Chia-Chun; Wu, Yi-Ching; Wu, Wei-Sheng
2013-01-01
Missing values commonly occur in microarray data, which usually contain more than 5% missing values with up to 90% of genes affected. Inaccurate missing value estimation reduces the power of downstream microarray data analyses. Many types of methods have been developed to estimate missing values. Among them, the regression-based methods are very popular and have been shown to perform better than the other types of methods on many testing microarray datasets. To further improve the performance of the regression-based methods, we propose shrinkage regression-based methods. Our methods take advantage of the correlation structure in the microarray data and select similar genes for the target gene by Pearson correlation coefficients. In addition, our methods incorporate the least squares principle, utilize a shrinkage estimation approach to adjust the coefficients of the regression model, and then use the new coefficients to estimate missing values. Simulation results show that the proposed methods provide more accurate missing value estimation in six testing microarray datasets than the existing regression-based methods do. Imputation of missing values is a very important aspect of microarray data analyses because most of the downstream analyses require a complete dataset. Therefore, exploring accurate and efficient methods for estimating missing values has become an essential issue. Since our proposed shrinkage regression-based methods can provide accurate missing value estimation, they are competitive alternatives to the existing regression-based methods.
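The general regression-imputation idea (select similar genes by Pearson correlation, fit a regression, shrink the coefficients, predict the missing value) can be sketched as below; the ridge-style shrinkage rule and the synthetic matrix are assumptions for illustration, not the authors' estimator.

```python
# Sketch: regression-based imputation of one missing value with a
# ridge-style shrinkage of the least-squares coefficients (illustrative).
import numpy as np

def impute_missing(expr, target_gene, missing_sample, k=5, alpha=1.0):
    """expr: genes x samples matrix; expr[target_gene, missing_sample] is missing."""
    obs = np.ones(expr.shape[1], dtype=bool)
    obs[missing_sample] = False

    # Select the k genes most correlated (Pearson) with the target gene.
    target = expr[target_gene, obs]
    others = np.delete(np.arange(expr.shape[0]), target_gene)
    corr = np.array([np.corrcoef(expr[g, obs], target)[0, 1] for g in others])
    similar = others[np.argsort(-np.abs(corr))[:k]]

    # Shrinkage (ridge) regression of the target gene on the similar genes.
    X = expr[similar][:, obs].T
    X = np.column_stack([np.ones(X.shape[0]), X])
    beta = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ target)

    x_new = np.concatenate([[1.0], expr[similar, missing_sample]])
    return float(x_new @ beta)

rng = np.random.default_rng(3)
expr = rng.normal(size=(50, 20))
print(impute_missing(expr, target_gene=0, missing_sample=7))
```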
Measuring use value from recreation participation: comment
Donald B.K. English; J. Michael Bowker
1994-01-01
In a recent article in this Journal, Whitehead (1992) presents a method for estimating annual economic surplus for recreation trips to a natural resource site based on whether an individual participates in recreation at that site. Whitehead proposes his method as an alternative to the traditional two-stage travel cost approach. We contend that Whitehead's method...
Beyond Hammers and Nails: Mitigating and Verifying Greenhouse Gas Emissions
NASA Astrophysics Data System (ADS)
Gurney, Kevin Robert
2013-05-01
One of the biggest challenges to future international agreements on climate change is an independent, science-driven method of verifying reductions in greenhouse gas emissions (GHG) [Niederberger and Kimble, 2011]. The scientific community has thus far emphasized atmospheric measurements to assess changes in emissions. An alternative is direct measurement or estimation of fluxes at the source. Given the many challenges facing the approach that uses "top-down" atmospheric measurements and recent advances in "bottom-up" estimation methods, I challenge the current doctrine, which has the atmospheric measurement approach "validating" bottom-up, "good-faith" emissions estimation [Balter, 2012] or which holds that the use of bottom-up estimation is like "dieting without weighing oneself" [Nisbet and Weiss, 2010].
Tissue thickness calculation in ocular optical coherence tomography
Alonso-Caneiro, David; Read, Scott A.; Vincent, Stephen J.; Collins, Michael J.; Wojtkowski, Maciej
2016-01-01
Thickness measurements derived from optical coherence tomography (OCT) images of the eye are a fundamental clinical and research metric, since they provide valuable information regarding the eye’s anatomical and physiological characteristics, and can assist in the diagnosis and monitoring of numerous ocular conditions. Despite the importance of these measurements, limited attention has been given to the methods used to estimate thickness in OCT images of the eye. Most current studies employing OCT use an axial thickness metric, but there is evidence that axial thickness measures may be biased by tilt and curvature of the image. In this paper, standard axial thickness calculations are compared with a variety of alternative metrics for estimating tissue thickness. These methods were tested on a data set of wide-field chorio-retinal OCT scans (field of view (FOV) 60° x 25°) to examine their performance across a wide region of interest and to demonstrate the potential effect of curvature of the posterior segment of the eye on the thickness estimates. Similarly, the effect of image tilt was systematically examined with the same range of proposed metrics. The results demonstrate that image tilt and curvature of the posterior segment can affect axial tissue thickness calculations, while alternative metrics, which are not biased by these effects, should be considered. This study demonstrates the need to consider alternative methods to calculate tissue thickness in order to avoid measurement error due to image tilt and curvature. PMID:26977367
Cost-effectiveness of the stream-gaging program in Missouri
Waite, L.A.
1987-01-01
This report documents the results of an evaluation of the cost effectiveness of the 1986 stream-gaging program in Missouri. Alternative methods of developing streamflow information and cost-effective resource allocation were used to evaluate the Missouri program. Alternative methods were considered statewide, but the cost-effective resource allocation study was restricted to the area covered by the Rolla field headquarters. The average standard error of estimate for records of instantaneous discharge was 17 percent; assuming the 1986 budget and operating schedule, it was shown that this overall degree of accuracy could be improved to 16 percent by altering the 1986 schedule of station visitations. A minimum budget of $203,870, with a corresponding average standard error of estimate of 17 percent, is required to operate the 1986 program for the Rolla field headquarters; a budget of less than this would not permit proper service and maintenance of the stations or adequate definition of stage-discharge relations. The maximum budget analyzed was $418,870, which resulted in an average standard error of estimate of 14 percent. Improved instrumentation can have a positive effect on streamflow uncertainties by decreasing lost records. An earlier study of data uses found that data uses were sufficient to justify continued operation of all stations. One of the stations investigated, Current River at Doniphan (07068000), was suitable for the application of alternative methods for simulating discharge records. However, the station was continued because of data use requirements. (Author's abstract)
Evaluation of Alternative Difference-in-Differences Methods
ERIC Educational Resources Information Center
Yu, Bing
2013-01-01
Difference-in-differences (DID) strategies are particularly useful for evaluating policy effects in natural experiments in which, for example, a policy affects some schools and students but not others. However, the standard DID method may produce biased estimation of the policy effect if the confounding effect of concurrent events varies by…
40 CFR 60.463 - Performance test and compliance provisions.
Code of Federal Regulations, 2011 CFR
2011-07-01
... operator shall use the following procedures for determining monthly volume-weighted average emissions of... Method 24 or an equivalent or alternative method. The owner or operator shall determine the volume of... facilities, the owner or operator shall estimate the volume of coating used at each affected facility by...
Dutch population specific sex estimation formulae using the proximal femur.
Colman, K L; Janssen, M C L; Stull, K E; van Rijn, R R; Oostra, R J; de Boer, H H; van der Merwe, A E
2018-05-01
Sex estimation techniques are frequently applied in forensic anthropological analyses of unidentified human skeletal remains. While morphological sex estimation methods are able to endure population differences, the classification accuracy of metric sex estimation methods is population-specific. No metric sex estimation method currently exists for the Dutch population. The purpose of this study is to create Dutch population specific sex estimation formulae by means of osteometric analyses of the proximal femur. Since the Netherlands lacks a representative contemporary skeletal reference population, 2D plane reconstructions, derived from clinical computed tomography (CT) data, were used as an alternative source for a representative reference sample. The first part of this study assesses the intra- and inter-observer error, or reliability, of twelve measurements of the proximal femur. The technical error of measurement (TEM) and relative TEM (%TEM) were calculated using 26 dry adult femora. In addition, the agreement, or accuracy, between the dry bone and CT-based measurements was determined by percent agreement. Only reliable and accurate measurements were retained for the logistic regression sex estimation formulae; a training set (n=86) was used to create the models while an independent testing set (n=28) was used to validate the models. Due to high levels of multicollinearity, only single variable models were created. Cross-validated classification accuracies ranged from 86% to 92%. The high cross-validated classification accuracies indicate that the developed formulae can contribute to the biological profile and specifically to sex estimation of unidentified human skeletal remains in the Netherlands. Furthermore, the results indicate that clinical CT data can be a valuable alternative source of data when representative skeletal collections are unavailable. Copyright © 2017 Elsevier B.V. All rights reserved.
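A minimal sketch of a single-variable logistic sex estimation model with cross-validated accuracy is shown below; the measurement name and the synthetic data are placeholders, not the Dutch reference sample or the published formulae.

```python
# Sketch: single-variable logistic regression for sex estimation from one
# proximal femur measurement (synthetic data, not the Dutch reference sample).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Hypothetical femoral head diameters (mm): females ~N(42, 2), males ~N(47, 2).
fhd = np.concatenate([rng.normal(42, 2, 60), rng.normal(47, 2, 54)])
sex = np.concatenate([np.zeros(60), np.ones(54)])   # 0 = female, 1 = male

model = LogisticRegression()
acc = cross_val_score(model, fhd.reshape(-1, 1), sex, cv=5).mean()
model.fit(fhd.reshape(-1, 1), sex)
print(f"cross-validated accuracy: {acc:.2f}")
print(f"P(male | FHD = 45 mm) = {model.predict_proba([[45.0]])[0, 1]:.2f}")
```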
Hone, J.; Pech, R.; Yip, P.
1992-01-01
Infectious diseases establish in a population of wildlife hosts when the number of secondary infections is greater than or equal to one. To estimate whether establishment will occur requires extensive experience or a mathematical model of disease dynamics and estimates of the parameters of the disease model. The latter approach is explored here. Methods for estimating key model parameters, the transmission coefficient (beta) and the basic reproductive rate (R0), are described using classical swine fever (hog cholera) in wild pigs as an example. The tentative results indicate that an acute infection of classical swine fever will establish in a small population of wild pigs. Data required for estimation of disease transmission rates are reviewed, and sources of bias and alternative methods are discussed. A comprehensive evaluation of the biases and efficiencies of the methods is needed. PMID:1582476
On the analysis of very small samples of Gaussian repeated measurements: an alternative approach.
Westgate, Philip M; Burchett, Woodrow W
2017-03-15
The analysis of very small samples of Gaussian repeated measurements can be challenging. First, due to a very small number of independent subjects contributing outcomes over time, statistical power can be quite small. Second, nuisance covariance parameters must be appropriately accounted for in the analysis in order to maintain the nominal test size. However, available statistical strategies that ensure valid statistical inference may lack power, whereas more powerful methods may have the potential for inflated test sizes. Therefore, we explore an alternative approach to the analysis of very small samples of Gaussian repeated measurements, with the goal of maintaining valid inference while also improving statistical power relative to other valid methods. This approach uses generalized estimating equations with a bias-corrected empirical covariance matrix that accounts for all small-sample aspects of nuisance correlation parameter estimation in order to maintain valid inference. Furthermore, the approach utilizes correlation selection strategies with the goal of choosing the working structure that will result in the greatest power. In our study, we show that when accurate modeling of the nuisance correlation structure impacts the efficiency of regression parameter estimation, this method can improve power relative to existing methods that yield valid inference. Copyright © 2017 John Wiley & Sons, Ltd. Copyright © 2017 John Wiley & Sons, Ltd.
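A minimal sketch of fitting Gaussian generalized estimating equations with an exchangeable working structure and a bias-reduced (small-sample corrected) covariance estimate is given below, using statsmodels; the data are synthetic, and whether this particular correction and working-structure choice matches the paper's correlation-selection strategy is not implied.

```python
# Sketch: Gaussian GEE for a very small sample of repeated measurements,
# requesting a bias-reduced empirical covariance (illustrative data).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_subj, n_time = 8, 4
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subj), n_time),
    "time": np.tile(np.arange(n_time), n_subj),
    "group": np.repeat(rng.integers(0, 2, n_subj), n_time),
})
subj_effect = np.repeat(rng.normal(0, 0.7, n_subj), n_time)
df["y"] = 1.0 + 0.5 * df["group"] + subj_effect + rng.normal(0, 1, len(df))

model = sm.GEE.from_formula("y ~ group + time", groups="subject", data=df,
                            cov_struct=sm.cov_struct.Exchangeable(),
                            family=sm.families.Gaussian())
result = model.fit(cov_type="bias_reduced")  # small-sample corrected SEs
print(result.summary())
```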
A Bayes linear Bayes method for estimation of correlated event rates.
Quigley, John; Wilson, Kevin J; Walls, Lesley; Bedford, Tim
2013-12-01
Typically, full Bayesian estimation of correlated event rates can be computationally challenging since estimators are intractable. When estimation of event rates represents one activity within a larger modeling process, there is an incentive to develop more efficient inference than provided by a full Bayesian model. We develop a new subjective inference method for correlated event rates based on a Bayes linear Bayes model under the assumption that events are generated from a homogeneous Poisson process. To reduce the elicitation burden we introduce homogenization factors to the model and, as an alternative to a subjective prior, an empirical method using the method of moments is developed. Inference under the new method is compared against estimates obtained under a full Bayesian model, which takes a multivariate gamma prior, where the predictive and posterior distributions are derived in terms of well-known functions. The mathematical properties of both models are presented. A simulation study shows that the Bayes linear Bayes inference method and the full Bayesian model provide equally reliable estimates. An illustrative example, motivated by a problem of estimating correlated event rates across different users in a simple supply chain, shows how ignoring the correlation leads to biased estimation of event rates. © 2013 Society for Risk Analysis.
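A minimal sketch of a method-of-moments gamma prior for Poisson event rates, in the spirit of the empirical option mentioned above, is shown below; it ignores the Bayes linear Bayes structure, the correlation between rates, and the homogenization factors, and the counts are illustrative.

```python
# Sketch: method-of-moments gamma prior for Poisson rates, then posterior
# mean rates under gamma-Poisson conjugacy (illustrative counts/exposures).
import numpy as np

counts = np.array([3, 0, 5, 2, 7, 1])                # observed events per unit
exposure = np.array([2.0, 1.5, 3.0, 2.0, 4.0, 1.0])  # observation time

raw_rates = counts / exposure
m, v = raw_rates.mean(), raw_rates.var(ddof=1)

# Match gamma(shape=a, rate=b) moments: mean = a/b, variance = a/b**2.
b = m / v
a = m * b

# Posterior mean rate for each unit: (a + count) / (b + exposure).
posterior_mean = (a + counts) / (b + exposure)
print(f"prior: shape a = {a:.2f}, rate b = {b:.2f}")
print("posterior mean rates:", np.round(posterior_mean, 2))
```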
Critical point relascope sampling for unbiased volume estimation of downed coarse woody debris
Jeffrey H. Gove; Michael S. Williams; Mark J. Ducey; Mark J. Ducey
2005-01-01
Critical point relascope sampling is developed and shown to be design-unbiased for the estimation of log volume when used with point relascope sampling for downed coarse woody debris. The method is closely related to critical height sampling for standing trees when trees are first sampled with a wedge prism. Three alternative protocols for determining the critical...
Dan Loeffler; David E. Calkin; Robin P. Silverstein
2006-01-01
Utilizing timber harvest residues (biomass) for renewable energy production provides an alternative disposal method to onsite burning that may improve the economic viability of hazardous fuels treatments. Due to the relatively low value of biomass, accurate estimates of biomass volumes and costs of collection and delivery are essential if investment in renewable energy...
An expert system for estimating production rates and costs for hardwood group-selection harvests
Chris B. LeDoux; B. Gopalakrishnan; R. S. Pabba
2003-01-01
As forest managers shift their focus from stands to entire ecosystems alternative harvesting methods such as group selection are being used increasingly. Results of several field time and motion studies and simulation runs were incorporated into an expert system for estimating production rates and costs associated with harvests of group-selection units of various size...
ERIC Educational Resources Information Center
Magis, David; Raiche, Gilles
2010-01-01
In this article the authors focus on the issue of the nonuniqueness of the maximum likelihood (ML) estimator of proficiency level in item response theory (with special attention to logistic models). The usual maximum a posteriori (MAP) method offers a good alternative within that framework; however, this article highlights some drawbacks of its…
Alternative (non-animal) methods for cosmetics testing: current status and future prospects-2010.
Adler, Sarah; Basketter, David; Creton, Stuart; Pelkonen, Olavi; van Benthem, Jan; Zuang, Valérie; Andersen, Klaus Ejner; Angers-Loustau, Alexandre; Aptula, Aynur; Bal-Price, Anna; Benfenati, Emilio; Bernauer, Ulrike; Bessems, Jos; Bois, Frederic Y; Boobis, Alan; Brandon, Esther; Bremer, Susanne; Broschard, Thomas; Casati, Silvia; Coecke, Sandra; Corvi, Raffaella; Cronin, Mark; Daston, George; Dekant, Wolfgang; Felter, Susan; Grignard, Elise; Gundert-Remy, Ursula; Heinonen, Tuula; Kimber, Ian; Kleinjans, Jos; Komulainen, Hannu; Kreiling, Reinhard; Kreysa, Joachim; Leite, Sofia Batista; Loizou, George; Maxwell, Gavin; Mazzatorta, Paolo; Munn, Sharon; Pfuhler, Stefan; Phrakonkham, Pascal; Piersma, Aldert; Poth, Albrecht; Prieto, Pilar; Repetto, Guillermo; Rogiers, Vera; Schoeters, Greet; Schwarz, Michael; Serafimova, Rositsa; Tähti, Hanna; Testai, Emanuela; van Delft, Joost; van Loveren, Henk; Vinken, Mathieu; Worth, Andrew; Zaldivar, José-Manuel
2011-05-01
The 7th amendment to the EU Cosmetics Directive prohibits the placing of animal-tested cosmetics on the market in Europe after 2013. In that context, the European Commission invited stakeholder bodies (industry, non-governmental organisations, EU Member States, and the Commission's Scientific Committee on Consumer Safety) to identify scientific experts in five toxicological areas, i.e. toxicokinetics, repeated dose toxicity, carcinogenicity, skin sensitisation, and reproductive toxicity, for which the Directive foresees that the 2013 deadline could be further extended should alternative, validated methods not be available in time. The selected experts were asked to analyse the status and prospects of alternative methods and to provide a scientifically sound estimate of the time necessary to achieve full replacement of animal testing. In summary, the experts confirmed that it will take at least another 7-9 years for the replacement of the current in vivo animal tests used for the safety assessment of cosmetic ingredients for skin sensitisation. However, the experts were also of the opinion that alternative methods may be able to give hazard information, i.e. to differentiate between sensitisers and non-sensitisers, ahead of 2017. This would, however, not provide the complete picture of what is a safe exposure because the relative potency of a sensitiser would not be known. For toxicokinetics, the timeframe was 5-7 years to develop the models still lacking to predict lung absorption and renal/biliary excretion, and even longer to integrate the methods to fully replace the animal toxicokinetic models. For the systemic toxicological endpoints of repeated dose toxicity, carcinogenicity and reproductive toxicity, the time horizon for full replacement could not be estimated.
Zhang, Zhiyong; Yuan, Ke-Hai
2016-06-01
Cronbach's coefficient alpha is a widely used reliability measure in social, behavioral, and education sciences. It is reported in nearly every study that involves measuring a construct through multiple items. With non-tau-equivalent items, McDonald's omega has been used as a popular alternative to alpha in the literature. Traditional estimation methods for alpha and omega often implicitly assume that data are complete and normally distributed. This study proposes robust procedures to estimate both alpha and omega as well as corresponding standard errors and confidence intervals from samples that may contain potential outlying observations and missing values. The influence of outlying observations and missing data on the estimates of alpha and omega is investigated through two simulation studies. Results show that the newly developed robust method yields substantially improved alpha and omega estimates as well as better coverage rates of confidence intervals than the conventional nonrobust method. An R package coefficientalpha is developed and demonstrated to obtain robust estimates of alpha and omega.
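A minimal, non-robust sketch of the conventional (complete-data) alpha and omega estimates that the robust procedure improves on is shown below; omega here is computed from a one-factor model fitted with scikit-learn's FactorAnalysis, which is an assumption for illustration rather than the paper's estimation approach.

```python
# Sketch: conventional Cronbach's alpha and a one-factor McDonald's omega
# on complete synthetic item data (the paper's robust versions handle
# outliers and missing values, which this sketch does not).
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n, k = 300, 6
common = rng.normal(size=(n, 1))
items = 0.7 * common + 0.5 * rng.normal(size=(n, k))   # synthetic item scores

# Cronbach's alpha = k/(k-1) * (1 - sum(item variances) / var(total score)).
item_var = items.var(axis=0, ddof=1)
total_var = items.sum(axis=1).var(ddof=1)
alpha = k / (k - 1) * (1.0 - item_var.sum() / total_var)

# McDonald's omega from a one-factor model:
# (sum of loadings)^2 / ((sum of loadings)^2 + sum of unique variances).
fa = FactorAnalysis(n_components=1).fit(items)
loadings = fa.components_.ravel()
omega = loadings.sum() ** 2 / (loadings.sum() ** 2 + fa.noise_variance_.sum())

print(f"alpha = {alpha:.3f}, omega = {omega:.3f}")
```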
Are rapid population estimates accurate? A field trial of two different assessment methods.
Grais, Rebecca F; Coulombier, Denis; Ampuero, Julia; Lucas, Marcelino E S; Barretto, Avertino T; Jacquier, Guy; Diaz, Francisco; Balandine, Serge; Mahoudeau, Claude; Brown, Vincent
2006-09-01
Emergencies resulting in large-scale displacement often lead to populations resettling in areas where basic health services and sanitation are unavailable. To plan relief-related activities quickly, rapid population size estimates are needed. The currently recommended Quadrat method estimates total population by extrapolating the average population size living in square blocks of known area to the total site surface. An alternative approach, the T-Square, provides a population estimate based on analysis of the spatial distribution of housing units taken throughout a site. We field tested both methods and validated the results against a census in Esturro Bairro, Beira, Mozambique. Compared to the census (population: 9,479), the T-Square yielded a better population estimate (9,523) than the Quadrat method (7,681; 95% confidence interval: 6,160-9,201), but was more difficult for field survey teams to implement. Although applicable only to similar sites, several general conclusions can be drawn for emergency planning.
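A minimal sketch of the Quadrat-style extrapolation described above (mean population per sampled block scaled to the whole site, with a crude normal-approximation confidence interval) is given below; the counts and areas are illustrative, not the Beira field data.

```python
# Quadrat-method sketch: extrapolate mean population per sampled block to the
# full site area, with an approximate 95% CI (illustrative numbers).
import numpy as np

block_counts = np.array([41, 55, 38, 62, 47, 51, 44, 58])  # people per block
block_area_m2 = 625.0          # each sampled block is 25 m x 25 m
site_area_m2 = 120000.0        # total settlement area

n_blocks_total = site_area_m2 / block_area_m2
estimate = block_counts.mean() * n_blocks_total
se = block_counts.std(ddof=1) / np.sqrt(len(block_counts)) * n_blocks_total

print(f"population estimate: {estimate:.0f}")
print(f"approx. 95% CI: {estimate - 1.96 * se:.0f} to {estimate + 1.96 * se:.0f}")
```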
NASA Astrophysics Data System (ADS)
Kosmowski, Frédéric; Stevenson, James; Campbell, Jeff; Ambel, Alemayehu; Haile Tsegay, Asmelash
2017-10-01
Maintaining permanent coverage of the soil using crop residues is an important and commonly recommended practice in conservation agriculture. Measuring this practice is an essential step in improving knowledge about the adoption and impact of conservation agriculture. Different data collection methods can be implemented to capture the field level crop residue coverage for a given plot, each with its own implication on survey budget, implementation speed and respondent and interviewer burden. In this paper, six alternative methods of crop residue coverage measurement are tested among the same sample of rural households in Ethiopia. The relative accuracy of these methods is compared against a benchmark, the line-transect method. The alternative methods compared against the benchmark include: (i) interviewee (respondent) estimation; (ii) enumerator estimation visiting the field; (iii) interviewee with visual-aid without visiting the field; (iv) enumerator with visual-aid visiting the field; (v) field picture collected with a drone and analyzed with image-processing methods and (vi) satellite picture of the field analyzed with remote sensing methods. Results of the methodological experiment show that survey-based methods tend to underestimate field residue cover. When quantitative data on cover are needed, the best estimates are provided by visual-aid protocols. For categorical analysis (i.e., >30% cover or not), visual-aid protocols and remote sensing methods perform equally well. Among survey-based methods, the strongest correlates of measurement errors are total farm size, field size, distance, and slope. Results deliver a ranking of measurement options that can inform survey practitioners and researchers.
Comparison of methods used to estimate numbers of walruses on sea ice
Udevitz, Mark S.; Gilbert, James R.; Fedoseev, Gennadii A.
2001-01-01
The US and former USSR conducted joint surveys of Pacific walruses on sea ice and at land haul-outs in 1975, 1980, 1985, and 1990. One of the difficulties in interpreting results of these surveys has been that, except for the 1990 survey, the Americans and Soviets used different methods for estimating population size from their respective portions of the sea ice data. We used data exchanged between Soviet and American scientists to compare and evaluate the two estimation procedures and to derive a set of alternative estimates from the 1975, 1980, and 1985 surveys based on a single consistent procedure. Estimation method had only a small effect on total population estimates because most walruses were found at land haul-outs. However, the Soviet method is subject to bias that depends on the distribution of the population on the sea ice and this has important implications for interpreting the ice portions of previously reported surveys for walruses and other pinniped species. We recommend that the American method be used in future surveys. Future research on survey methods for walruses should focus on other potential sources of bias and variation.
Surveying immigrants without sampling frames - evaluating the success of alternative field methods.
Reichel, David; Morales, Laura
2017-01-01
This paper evaluates the sampling methods of an international survey, the Immigrant Citizens Survey, which aimed at surveying immigrants from outside the European Union (EU) in 15 cities in seven EU countries. In five countries, no sample frame was available for the target population. Consequently, alternative ways to obtain a representative sample had to be found. In three countries 'location sampling' was employed, while in two countries traditional methods were used with adaptations to reach the target population. The paper assesses the main methodological challenges of carrying out a survey among a group of immigrants for whom no sampling frame exists. The samples of the survey in these five countries are compared to results of official statistics in order to assess the accuracy of the samples obtained through the different sampling methods. It can be shown that alternative sampling methods can provide meaningful results in terms of core demographic characteristics although some estimates differ to some extent from the census results.
Estimating neural response functions from fMRI
Kumar, Sukhbinder; Penny, William
2014-01-01
This paper proposes a methodology for estimating Neural Response Functions (NRFs) from fMRI data. These NRFs describe non-linear relationships between experimental stimuli and neuronal population responses. The method is based on a two-stage model comprising an NRF and a Hemodynamic Response Function (HRF) that are simultaneously fitted to fMRI data using a Bayesian optimization algorithm. This algorithm also produces a model evidence score, providing a formal model comparison method for evaluating alternative NRFs. The HRF is characterized using previously established “Balloon” and BOLD signal models. We illustrate the method with two example applications based on fMRI studies of the auditory system. In the first, we estimate the time constants of repetition suppression and facilitation, and in the second we estimate the parameters of population receptive fields in a tonotopic mapping study. PMID:24847246
Deceiving proteins! A case of lymphoma and high creatinine.
Metraiah, El Hakem Abdelkarim; Regan, Helen; Louw, Johanna; Kidder, Dana
2017-01-23
Estimation of kidney function by measuring serum creatinine is one of the commonest laboratory tests conducted in clinical practice. Enzymatic methods are often used to measure serum creatinine. Clinicians should be aware of the limitations of these methods, such as test interference with paraproteins. We present a case of falsely elevated serum creatinine in a patient referred for renal biopsy. The combination of fluctuating creatinine and normal blood urea level was unusual. Serum protein electrophoresis revealed the presence of an IgM paraprotein. Further investigations confirmed an underlying diagnosis of lymphoplasmacytoid lymphoma. This case highlights how IgM paraprotein can interfere with creatinine estimation by enzymatic assay and the utility of alternative methods of estimating serum creatinine. © 2017 BMJ Publishing Group Ltd.
Karami, Manoochehr; Khazaei, Salman; Poorolajal, Jalal; Soltanian, Alireza; Sajadipoor, Mansour
2017-08-01
There is no reliable estimate of the population size of female sex workers (FSWs). This study aimed to estimate the size of the FSW population in the south of Tehran, Iran in 2016 using a direct capture-recapture method. In the capture phase, the hangouts of FSWs were mapped as their meeting places. FSWs who agreed to participate in the study were tagged with a T-shirt. The recapture phase was implemented at the same places, tagging FSWs with a blue bracelet. The total estimated size of the FSW population was 690 (95% CI 633, 747). About 89.43% of FSWs experienced sexual intercourse prior to age 20. The prevalence of human immunodeficiency virus infection among FSWs was 4.60%. The estimated population size of FSWs was much larger than expected. This issue must be the focus of special attention for planning prevention strategies. However, alternative methods are needed to estimate the number of FSWs reliably.
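A minimal sketch of a two-sample capture-recapture estimate (the Chapman-corrected Lincoln-Petersen estimator with a normal-approximation confidence interval) is shown below; the counts are placeholders, and the exact estimator used in the study may differ.

```python
# Two-sample capture-recapture sketch (Chapman estimator, illustrative counts).
import math

def chapman_estimate(n1, n2, m):
    """n1 tagged at capture, n2 seen at recapture, m tagged among the n2."""
    n_hat = (n1 + 1) * (n2 + 1) / (m + 1) - 1
    var = ((n1 + 1) * (n2 + 1) * (n1 - m) * (n2 - m)) / ((m + 1) ** 2 * (m + 2))
    half = 1.96 * math.sqrt(var)
    return n_hat, (n_hat - half, n_hat + half)

n_hat, ci = chapman_estimate(n1=220, n2=240, m=76)
print(f"estimated population size: {n_hat:.0f}, 95% CI: {ci[0]:.0f}-{ci[1]:.0f}")
```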
Pailian, Hrag; Halberda, Justin
2015-04-01
We investigated the psychometric properties of the one-shot change detection task for estimating visual working memory (VWM) storage capacity, and also introduced and tested an alternative flicker change detection task for estimating these limits. In three experiments, we found that the one-shot whole-display task returns estimates of VWM storage capacity (K) that are unreliable across set sizes, suggesting that the whole-display task is measuring different things at different set sizes. In two additional experiments, we found that the one-shot single-probe variant shows improvements in the reliability and consistency of K estimates. In another additional experiment, we found that a one-shot whole-display-with-click task (requiring target localization) also showed improvements in reliability and consistency. The latter results suggest that the one-shot task can return reliable and consistent estimates of VWM storage capacity (K), and they highlight the possibility that the requirement to localize the changed target is what engenders this enhancement. Through a final series of four experiments, we introduced and tested an alternative flicker change detection method that also requires the observer to localize the changing target and that generates, from response times, an estimate of VWM storage capacity (K). We found that estimates of K from the flicker task correlated with estimates from the traditional one-shot task and also had high reliability and consistency. We highlight the flicker method's ability to estimate executive functions as well as VWM storage capacity, and discuss the potential for measuring multiple abilities with the one-shot and flicker tasks.
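For reference, the capacity formulas commonly used with one-shot change detection are sketched below (Cowan's K for single-probe displays and Pashler's K for whole-display designs); whether these exact formulas match the paper's estimation procedures is an assumption.

```python
# Capacity estimates from one-shot change detection (illustrative rates).
def cowan_k(hit_rate, false_alarm_rate, set_size):
    """Single-probe convention: K = N * (H - FA)."""
    return set_size * (hit_rate - false_alarm_rate)

def pashler_k(hit_rate, false_alarm_rate, set_size):
    """Whole-display convention: K = N * (H - FA) / (1 - FA)."""
    return set_size * (hit_rate - false_alarm_rate) / (1.0 - false_alarm_rate)

print(cowan_k(hit_rate=0.85, false_alarm_rate=0.15, set_size=6))    # about 4.2
print(pashler_k(hit_rate=0.85, false_alarm_rate=0.15, set_size=6))  # about 4.9
```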
THINEX - an expert system for estimating forest harvesting productivity and cost
C. B. LeDoux; B. Gopalakrishnan; R. S. Pabba
1998-01-01
As the emphasis of forest stand management shifts towards implementing ecosystem management, managers are examining alternative methods to harvesting stands in order to accomplish multiple objectives by using techniques such as shelterwood harvests, thinnings, and group selection methods, thus leaving more residual trees to improve the visual quality of the harvested...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-01
... by LGL Ltd., Environmental Research Associates (LGL), on behalf of NSF and L-DEO. The NMFS Biological... must set forth the permissible methods of taking, other means of effecting the least practicable... scientific information and estimation methodology. The alternative method of conducting site-specific...
A Probability Based Framework for Testing the Missing Data Mechanism
ERIC Educational Resources Information Center
Lin, Johnny Cheng-Han
2013-01-01
Many methods exist for imputing missing data but fewer methods have been proposed to test the missing data mechanism. Little (1988) introduced a multivariate chi-square test for the missing completely at random data mechanism (MCAR) that compares observed means for each pattern with expectation-maximization (EM) estimated means. As an alternative,…
ERIC Educational Resources Information Center
Hagedorn, Linda Serra
1998-01-01
A study explored two distinct methods of calculating a precise measure of gender-based wage differentials among college faculty. The first estimation considered wage differences using a formula based on human capital; the second included compensation for past discriminatory practices. Both measures were used to predict three specific aspects of…
Persons Camp Using Interpolation Method
NASA Astrophysics Data System (ADS)
Tawfiq, Luma Naji Mohammed; Najm Abood, Israa
2018-05-01
The aim of this paper is to estimate the rate of soil contamination by using a suitable interpolation method as an alternative, accurate tool to evaluate the concentration of heavy metals in soil, and then comparing the results with standard reference values to determine the rate of contamination in the soil. In particular, interpolation methods are extensively applied in models of different phenomena in which experimental data must be used in computer studies that require expressions of those data. In this paper, the extended divided difference method in two dimensions is used to solve the suggested problem. Then, the modified method is applied to estimate the rate of soil contamination at a displaced-persons camp in Diyala Governorate, Iraq.
Estimating scattered and absorbed radiation in plant canopies by remote sensing
NASA Technical Reports Server (NTRS)
Daughtry, G. S. T.; Ranson, K. J.
1987-01-01
Several research avenues are summarized. The relationships of canopy characteristics to multispectral reflectance factors of vegetation are reviewed. Several alternative approaches for incorporating spectrally derived information into plant models are discussed, using corn as the main example. A method is described and evaluated whereby a leaf area index is estimated from measurements of radiation transmitted through plant canopies, using soybeans as an example. Albedo of a big bluestem grass canopy is estimated from 60 directional reflectance factor measurements. Effects of estimating albedo with substantially smaller subsets of data are evaluated.
ITALICS: an algorithm for normalization and DNA copy number calling for Affymetrix SNP arrays.
Rigaill, Guillem; Hupé, Philippe; Almeida, Anna; La Rosa, Philippe; Meyniel, Jean-Philippe; Decraene, Charles; Barillot, Emmanuel
2008-03-15
Affymetrix SNP arrays can be used to determine the DNA copy number measurement of 11 000 to 500 000 SNPs along the genome. Their high density facilitates the precise localization of genomic alterations and makes them a powerful tool for studies of cancers and copy number polymorphism. Like other microarray technologies, it is influenced by non-relevant sources of variation, requiring correction. Moreover, the amplitude of variation induced by non-relevant effects is similar to or greater than the biologically relevant effect (i.e. true copy number), making it difficult to estimate non-relevant effects accurately without including the biologically relevant effect. We addressed this problem by developing ITALICS, a normalization method that estimates both biological and non-relevant effects in an alternate, iterative manner, accurately eliminating irrelevant effects. We compared our normalization method with other existing and available methods, and found that ITALICS outperformed these methods for several in-house datasets and one public dataset. These results were validated biologically by quantitative PCR. The R package ITALICS (ITerative and Alternative normaLIzation and Copy number calling for affymetrix Snp arrays) has been submitted to Bioconductor.
Asquith, William H.; Thompson, David B.
2008-01-01
The U.S. Geological Survey, in cooperation with the Texas Department of Transportation and in partnership with Texas Tech University, investigated a refinement of the regional regression method and developed alternative equations for estimation of peak-streamflow frequency for undeveloped watersheds in Texas. A common model for estimation of peak-streamflow frequency is based on the regional regression method. The current (2008) regional regression equations for 11 regions of Texas are based on log10 transformations of all regression variables (drainage area, main-channel slope, and watershed shape). Exclusive use of log10-transformation does not fully linearize the relations between the variables. As a result, some systematic bias remains in the current equations. The bias results in overestimation of peak streamflow for both the smallest and largest watersheds. The bias increases with increasing recurrence interval. The primary source of the bias is the discernible curvilinear relation in log10 space between peak streamflow and drainage area. Bias is demonstrated by selected residual plots with superimposed LOWESS trend lines. To address the bias, a statistical framework based on minimization of the PRESS statistic through power transformation of drainage area is described and implemented, and the resulting regression equations are reported. Compared to the log10-exclusive equations, the equations derived from PRESS minimization have PRESS statistics and residual standard errors smaller than those of the log10-exclusive equations. Selected residual plots for the PRESS-minimized equations are presented to demonstrate that systematic bias in regional regression equations for peak-streamflow frequency estimation in Texas can be reduced. Because the overall error is similar to the error associated with previous equations and because the bias is reduced, the PRESS-minimized equations reported here provide alternative equations for peak-streamflow frequency estimation.
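The PRESS statistic minimized in that framework is the sum of squared leave-one-out prediction errors, which for a linear regression can be computed directly from the ordinary residuals and the hat-matrix leverages; the sketch below uses synthetic watershed variables, not the Texas data or the published equations.

```python
# PRESS statistic sketch: sum of squared leave-one-out prediction errors,
# computed from ordinary residuals and hat-matrix leverages (synthetic data).
import numpy as np

rng = np.random.default_rng(0)
n = 60
drainage_area = np.exp(rng.normal(4.0, 1.5, n))
slope = np.exp(rng.normal(-3.0, 0.5, n))
y = (2.0 + 0.7 * np.log10(drainage_area) + 0.3 * np.log10(slope)
     + rng.normal(0, 0.2, n))

def press(X, y):
    H = X @ np.linalg.inv(X.T @ X) @ X.T        # hat matrix
    resid = y - H @ y
    loo_resid = resid / (1.0 - np.diag(H))      # leave-one-out residuals
    return float(np.sum(loo_resid ** 2))

X = np.column_stack([np.ones(n), np.log10(drainage_area), np.log10(slope)])
print(f"PRESS = {press(X, y):.3f}")
```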
NASA Astrophysics Data System (ADS)
Cai, Jianhua
2017-05-01
The time-frequency analysis method represents a signal as a function of time and frequency, and it is considered a powerful tool for handling arbitrary non-stationary time series by using instantaneous frequency and instantaneous amplitude. It also provides a possible alternative for the analysis of the non-stationary magnetotelluric (MT) signal. Based on the Hilbert-Huang transform (HHT), a time-frequency analysis method is proposed to obtain stable estimates of the magnetotelluric response function. In contrast to conventional methods, the response function estimation is performed in the time-frequency domain using instantaneous spectra rather than in the frequency domain, which allows for imaging the response parameter content as a function of time and frequency. The theory of the method is presented, and the mathematical model and calculation procedure used to estimate the response function from the HHT time-frequency spectrum are discussed. To evaluate the results, response function estimates are compared with estimates from a standard MT data processing method based on the Fourier transform. All results show that apparent resistivities and phases calculated from the HHT time-frequency method are generally more stable and reliable than those determined from the simple Fourier analysis. The proposed method overcomes the drawbacks of the traditional Fourier methods, and the resulting parameter estimates minimise the bias caused by the non-stationary characteristics of the MT data.
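The instantaneous amplitude and frequency that underpin such an analysis can be obtained from the Hilbert transform of a signal (or, in a full HHT, of each intrinsic mode function after empirical mode decomposition); the sketch below shows only that step on a synthetic signal and does not reproduce the EMD or the MT response-function estimation itself.

```python
# Sketch: instantaneous amplitude and frequency via the Hilbert transform
# (a full HHT would first decompose the signal into IMFs with EMD).
import numpy as np
from scipy.signal import hilbert

fs = 200.0                              # sampling frequency (Hz)
t = np.arange(0, 10, 1.0 / fs)
signal = (1.0 + 0.3 * np.sin(2 * np.pi * 0.2 * t)) * np.sin(2 * np.pi * 5.0 * t)

analytic = hilbert(signal)
inst_amplitude = np.abs(analytic)
inst_phase = np.unwrap(np.angle(analytic))
inst_frequency = np.diff(inst_phase) / (2.0 * np.pi) * fs   # Hz

print(f"mean instantaneous frequency: {inst_frequency.mean():.2f} Hz")
```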
An Alternative Default Soil Organic Carbon Method for National GHG Inventory Reporting to the UNFCCC
NASA Astrophysics Data System (ADS)
Ogle, S. M.; Gurung, R.; Klepfer, A.; Spencer, S.; Breidt, J.
2016-12-01
Estimating soil organic C stocks is challenging because of the large amount of data needed to evaluate the impact of land use and management on this terrestrial C pool. Moreover, some of the required data are rarely collected by governments through survey programs, and are not typically available in remote sensing products. Examples include data on organic amendments, cover crops, crop rotation sequences, vegetated fallows, and fertilization practices. Due to these difficulties, only about 20% of countries report soil organic C stock changes in their national communications to the UNFCCC. Yet, C sequestration in soils represents one of the least expensive options for reducing greenhouse gas emissions, and has the largest potential for mitigation in the agricultural sector. In order to facilitate reporting, we developed an alternative approach to the current default method provided by the Intergovernmental Panel on Climate Change (IPCC) for estimating soil organic C stock changes in mineral soils. The alternative method estimates the steady-state C stocks for a three-pool model given annual crop yields or net primary production as the main input, along with monthly average temperature, total precipitation and soil texture data. Yield data are commonly available in a national agricultural census, and global datasets exist with adequate data for weather and soil texture if national datasets are not available. Tillage and irrigation data are also needed to address the impact of these practices on decomposition rates. The change in steady-state stocks is assumed to occur over a few decades. A Bayesian analysis framework has been developed to derive probability distribution functions for the parameters, and the method is being applied in a global analysis of soil organic carbon stock changes.
Quan, Hui; Zhang, Ji
2003-09-15
Analyses of study variables are frequently based on log transformations. To calculate the power for detecting the between-treatment difference in the log scale, we need an estimate of the standard deviation of the log-transformed variable. However, in many situations a literature search only provides the arithmetic means and the corresponding standard deviations. Without individual log-transformed data to directly calculate the sample standard deviation, we need alternative methods to estimate it. This paper presents methods for estimating and constructing confidence intervals for the standard deviation of a log-transformed variable given the mean and standard deviation of the untransformed variable. It also presents methods for estimating the standard deviation of change from baseline in the log scale given the means and standard deviations of the untransformed baseline value, on-treatment value and change from baseline. Simulations and examples are provided to assess the performance of these estimates. Copyright 2003 John Wiley & Sons, Ltd.
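One widely used closed-form route to this quantity, consistent with the setting the paper addresses, assumes the untransformed variable is lognormally distributed; under that assumption the SD of log(X) follows directly from the arithmetic mean and SD of X. The sketch below is illustrative only; the function name and sample numbers are assumptions, not taken from the paper.

```python
import math

def sd_of_log(mean_x, sd_x):
    """SD of log(X) given the arithmetic mean and SD of X, assuming X is lognormal."""
    cv2 = (sd_x / mean_x) ** 2          # squared coefficient of variation
    return math.sqrt(math.log(1.0 + cv2))

# Example: a literature report gives mean 50 and SD 20 on the raw scale
print(round(sd_of_log(50.0, 20.0), 3))  # ~0.385 on the log scale
```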
Wrinkle Ridge Detachment Depth and Undetected Shortening at Solis Planum, Mars
NASA Astrophysics Data System (ADS)
Colton, S. L.; Smart, K. J.; Ferrill, D. A.
2006-03-01
Martian wrinkle ridges have estimated detachment depths of 0.25 to 60 km. Our alternative method for determining detachment depth reveals differences and has implications for the predominant scale of deformation at Solis Planum.
Estimating conditional proportion curves by regression residuals.
Han, Bing; Lim, Nelson
2010-06-15
Researchers often derive a categorical outcome from an observed continuous measurement y. For example, human obesity status can be defined by the body mass index. They proceed to estimate the conditional proportion curve p(x) = P(y
Methods for Multiloop Identification of Visual and Neuromuscular Pilot Responses.
Olivari, Mario; Nieuwenhuizen, Frank M; Venrooij, Joost; Bülthoff, Heinrich H; Pollini, Lorenzo
2015-12-01
In this paper, identification methods are proposed to estimate the neuromuscular and visual responses of a multiloop pilot model. A conventional and widely used technique for simultaneous identification of the neuromuscular and visual systems makes use of cross-spectral density estimates. This paper shows that this technique requires a specific noninterference hypothesis, often implicitly assumed, that may be difficult to meet during actual experimental designs. A mathematical justification of the necessity of the noninterference hypothesis is given. Furthermore, two methods are proposed that do not have the same limitations. The first method is based on autoregressive models with exogenous inputs, whereas the second one combines cross-spectral estimators with interpolation in the frequency domain. The two identification methods are validated by offline simulations and contrasted with the classic method. The results reveal that the classic method fails when the noninterference hypothesis is not fulfilled; on the contrary, the two proposed techniques give reliable estimates. Finally, the three identification methods are applied to experimental data from a closed-loop control task with pilots. The two proposed techniques give comparable estimates, different from those obtained by the classic method. The differences match those found with the simulations. Thus, the two identification methods provide a good alternative to the classic method and make it possible to simultaneously estimate a human operator's neuromuscular and visual responses in cases where the classic method fails.
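For orientation, the sketch below illustrates the classic single-loop cross-spectral identification step that the paper contrasts against; it is not the proposed ARX or interpolation method and omits the multiloop structure. The signals, sampling rate, and toy dynamics are assumptions.

```python
import numpy as np
from scipy import signal

fs = 100.0                              # assumed sampling rate [Hz]
t = np.arange(0, 120, 1 / fs)
u = np.random.randn(t.size)             # placeholder input (e.g., visual error signal)
y = signal.lfilter([0.05], [1, -0.95], u) + 0.1 * np.random.randn(t.size)  # toy "pilot" output

# Classic H1 estimator: H(f) = S_uy(f) / S_uu(f)
f, S_uy = signal.csd(u, y, fs=fs, nperseg=1024)
_, S_uu = signal.welch(u, fs=fs, nperseg=1024)
H = S_uy / S_uu                         # complex frequency-response estimate
gain, phase = np.abs(H), np.angle(H, deg=True)
```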
Paule‐Mandel estimators for network meta‐analysis with random inconsistency effects
Veroniki, Areti Angeliki; Law, Martin; Tricco, Andrea C.; Baker, Rose
2017-01-01
Network meta‐analysis is used to simultaneously compare multiple treatments in a single analysis. However, network meta‐analyses may exhibit inconsistency, where direct and different forms of indirect evidence are not in agreement with each other, even after allowing for between‐study heterogeneity. Models for network meta‐analysis with random inconsistency effects have the dual aim of allowing for inconsistencies and estimating average treatment effects across the whole network. To date, two classical estimation methods for fitting this type of model have been developed: a method of moments that extends DerSimonian and Laird's univariate method and maximum likelihood estimation. However, the Paule and Mandel estimator is another recommended classical estimation method for univariate meta‐analysis. In this paper, we extend the Paule and Mandel method so that it can be used to fit models for network meta‐analysis with random inconsistency effects. We apply all three estimation methods to a variety of examples that have been used previously and we also examine a challenging new dataset that is highly heterogeneous. We perform a simulation study based on this new example. We find that the proposed Paule and Mandel method performs satisfactorily and generally better than the previously proposed method of moments because it provides more accurate inferences. Furthermore, the Paule and Mandel method possesses some advantages over likelihood‐based methods because it is semiparametric and requires no convergence diagnostics. Although restricted maximum likelihood estimation remains the gold standard, the proposed methodology is a fully viable alternative to this and other estimation methods. PMID:28585257
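A minimal sketch of the univariate Paule and Mandel estimator that the paper extends to the network setting; the between-study variance is the value at which the weighted heterogeneity statistic equals its expectation, k - 1. The bisection bracket and tolerances are assumptions, and the network extension with inconsistency effects is not shown.

```python
import numpy as np

def paule_mandel_tau2(y, v, tol=1e-8, max_iter=200):
    """Univariate Paule-Mandel estimate of between-study variance tau^2.

    y : study effect estimates; v : within-study variances.
    Solves Q(tau2) = sum w_i (y_i - mu_hat)^2 = k - 1 with w_i = 1 / (v_i + tau2).
    """
    y, v = np.asarray(y, float), np.asarray(v, float)
    k = len(y)

    def q(tau2):
        w = 1.0 / (v + tau2)
        mu = np.sum(w * y) / np.sum(w)
        return np.sum(w * (y - mu) ** 2)

    if q(0.0) <= k - 1:                       # no positive root: truncate at zero
        return 0.0
    lo, hi = 0.0, 10.0 * np.var(y)            # bracket assumed wide enough (Q is decreasing)
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        if q(mid) > k - 1:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)
```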
NASA Astrophysics Data System (ADS)
Tiecher, Tales; Caner, Laurent; Gomes Minella, Jean Paolo; Henrique Ciotti, Lucas; Antônio Bender, Marcos; dos Santos Rheinheimer, Danilo
2014-05-01
Conventional fingerprinting methods based on geochemical composition still require a time-consuming and critical preliminary sample preparation. Thus, fingerprinting characteristics that can be measured in a rapid and cheap way requiring minimal sample preparation, such as spectroscopy methods, should be used. The present study aimed to evaluate the sediment source contributions in a rural catchment by using a conventional method based on geochemical composition and an alternative method based on near-infrared spectroscopy. This study was carried out in a rural catchment with an area of 1.19 km2 located in southern Brazil. The sediment sources evaluated were crop fields (n=20), unpaved roads (n=10) and stream channels (n=10). Thirty suspended sediment samples were collected from eight significant storm runoff events between 2009 and 2011. Source and sediment samples were dried at 50 °C and sieved at 63 µm. The total concentrations of Ag, As, B, Ba, Be, Ca, Cd, Co, Cr, Cu, Fe, K, La, Li, Mg, Mn, Mo, Na, Ni, P, Pb, Sb, Se, Sr, Ti, Tl, V and Zn were estimated by ICP-OES after microwave-assisted digestion with concentrated HNO3 and HCl. Total organic carbon (TOC) was estimated by wet oxidation with K2Cr2O7 and H2SO4. The near-infrared spectra were scanned over the range 4000 to 10000 cm-1 at a resolution of 2 cm-1, with 100 co-added scans per spectrum. The steps used in the conventional method were: i) tracer selection based on the Kruskal-Wallis test, ii) selection of the best set of tracers using discriminant analysis, and finally iii) the use of a mixed linear model to calculate the sediment source contributions. The steps used in the alternative method were: i) principal component analysis to reduce the number of variables, ii) discriminant analysis to determine the tracer potential of the near-infrared spectroscopy, and finally iii) the use of partial least squares regression based on 48 mixtures of the sediment sources in various weight proportions to calculate the sediment source contributions. Both the conventional and alternative methods were able to discriminate 100% of the sediment sources. The conventional fingerprinting method provided sediment source contributions of 33±19% from crop fields, 25±13% from unpaved roads and 42±19% from stream channels. The contributions obtained by the alternative fingerprinting method using near-infrared spectroscopy were 71±22% from crop fields, 21±12% from unpaved roads and 14±19% from stream channels. No correlation was observed between the source contributions assessed by the two methods. Notwithstanding, the average contribution of the unpaved roads was similar for both methods. The largest difference between the two methods, in the average contributions of crop fields and stream channels, was due to the similar organic matter content of these two sediment sources, which hampers their discrimination from the near-infrared spectra, in which many of the bands are highly correlated with TOC levels. Efforts should be made to combine the geochemical composition and near-infrared spectroscopy information into a single estimate of the sediment source contributions.
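A hedged sketch of the final un-mixing step common to both approaches: solving for non-negative source proportions that sum to one and best reproduce a sediment sample's tracer signature. The tracer values, the relative-error objective, and the solver choice are assumptions, not the study's actual mixing model.

```python
import numpy as np
from scipy.optimize import minimize

sources = ["crop fields", "unpaved roads", "stream channels"]
A = np.array([[12.0,  4.0,  7.0],       # rows: tracers (hypothetical values)
              [ 2.1,  0.8,  2.4],       # columns: mean signature of each source
              [95.0, 60.0, 80.0]])
b = np.array([9.5, 1.9, 84.0])           # tracer concentrations in a sediment sample

def objective(p):                        # sum of squared relative deviations
    pred = A @ p
    return np.sum(((b - pred) / b) ** 2)

res = minimize(objective, x0=np.full(3, 1 / 3), method="SLSQP",
               bounds=[(0, 1)] * 3,
               constraints=[{"type": "eq", "fun": lambda p: p.sum() - 1}])
print(dict(zip(sources, np.round(res.x, 2))))   # estimated source proportions
```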
Chang, Ellen T; Lau, Edmund C; Van Landingham, Cynthia; Crump, Kenny S; McClellan, Roger O; Moolgavkar, Suresh H
2018-06-01
The Diesel Exhaust in Miners Study (DEMS) (United States, 1947-1997) reported positive associations between diesel engine exhaust exposure, estimated as respirable elemental carbon (REC), and lung cancer mortality. This reanalysis of the DEMS cohort used an alternative estimate of REC exposure incorporating historical data on diesel equipment, engine horsepower, ventilation rates, and declines in particulate matter emissions per horsepower. Associations with cumulative REC and average REC intensity using the alternative REC estimate and other exposure estimates were generally attenuated compared with original DEMS REC estimates. Most findings were statistically nonsignificant; control for radon exposure substantially weakened associations with the original and alternative REC estimates. No association with original or alternative REC estimates was detected among miners who worked exclusively underground. Positive associations were detected among limestone workers, whereas no association with REC or radon was found among workers in the other 7 mines. The differences in results based on alternative exposure estimates, control for radon, and stratification by worker location or mine type highlight areas of uncertainty in the DEMS data.
Kocur, Dušan; Švecová, Mária; Rovňáková, Jana
2013-01-01
In the case of through-the-wall localization of moving targets by ultra wideband (UWB) radars, there are applications in which handheld sensors equipped only with one transmitting and two receiving antennas are applied. Sometimes, the radar using such a small antenna array is not able to localize the target with the required accuracy. With a view to improving through-the-wall target localization, cooperative positioning based on a fusion of data retrieved from two independent radar systems can be used. In this paper, a novel method of cooperative localization, referred to as joining intersections of the ellipses, is introduced. This method is based on a geometrical interpretation of target localization where the target position is estimated using a properly created cluster of the ellipse intersections representing potential positions of the target. The performance of the proposed method is compared with the direct calculation method and two alternative methods of cooperative localization using data obtained by measurements with M-sequence UWB radars. The direct calculation method is applied for target localization by the particular radar systems. As alternative methods of cooperative localization, the arithmetic average of the target coordinates estimated by two single independent UWB radars and the Taylor series method are considered. PMID:24021968
Alternative evaluation metrics for risk adjustment methods.
Park, Sungchul; Basu, Anirban
2018-06-01
Risk adjustment is instituted to counter risk selection by accurately equating payments with expected expenditures. Traditional risk-adjustment methods are designed to estimate accurate payments at the group level. However, this generates residual risks at the individual level, especially for high-expenditure individuals, thereby inducing health plans to avoid those with high residual risks. To identify an optimal risk-adjustment method, we perform a comprehensive comparison of prediction accuracies at the group level, at the tail distributions, and at the individual level across 19 estimators: 9 parametric regression, 7 machine learning, and 3 distributional estimators. Using the 2013-2014 MarketScan database, we find that no one estimator performs best in all prediction accuracies. Generally, machine learning and distribution-based estimators achieve higher group-level prediction accuracy than parametric regression estimators. However, parametric regression estimators show higher tail distribution prediction accuracy and individual-level prediction accuracy, especially at the tails of the distribution. This suggests that there is a trade-off in selecting an appropriate risk-adjustment method between estimating accurate payments at the group level and lower residual risks at the individual level. Our results indicate that an optimal method cannot be determined solely on the basis of statistical metrics but rather needs to account for simulating plans' risk selective behaviors. Copyright © 2018 John Wiley & Sons, Ltd.
Legal recognition of same-sex couples and family formation.
Trandafir, Mircea
2015-02-01
It has long been debated how legalizing same-sex marriage would affect (different-sex) family formation. In this article, I use data on OECD member countries for the period 1980-2009 to examine the effects of the legal recognition of same-sex couples (through marriage or an alternative institution) on different-sex marriage, divorce, and extramarital births. Estimates from difference-in-differences models indicate that the introduction of same-sex marriage or of alternative institutions has no negative effects on family formation. These findings are robust to a multitude of specification checks, including the construction of counterfactuals using the synthetic control method. In addition, the country-by-country case studies provide evidence of homogeneity of the estimated effects.
Safikhani, Zhaleh; Sadeghi, Mehdi; Pezeshk, Hamid; Eslahchi, Changiz
2013-01-01
Recent advances in the sequencing technologies have provided a handful of RNA-seq datasets for transcriptome analysis. However, reconstruction of full-length isoforms and estimation of the expression level of transcripts with a low cost are challenging tasks. We propose a novel de novo method named SSP that incorporates interval integer linear programming to resolve alternatively spliced isoforms and reconstruct the whole transcriptome from short reads. Experimental results show that SSP is fast and precise in determining different alternatively spliced isoforms along with the estimation of reconstructed transcript abundances. The SSP software package is available at http://www.bioinf.cs.ipm.ir/software/ssp. © 2013.
Grummer, Jared A; Bryson, Robert W; Reeder, Tod W
2014-03-01
Current molecular methods of species delimitation are limited by the types of species delimitation models and scenarios that can be tested. Bayes factors allow for more flexibility in testing non-nested species delimitation models and hypotheses of individual assignment to alternative lineages. Here, we examined the efficacy of Bayes factors in delimiting species through simulations and empirical data from the Sceloporus scalaris species group. Marginal-likelihood scores of competing species delimitation models, from which Bayes factor values were compared, were estimated with four different methods: harmonic mean estimation (HME), smoothed harmonic mean estimation (sHME), path-sampling/thermodynamic integration (PS), and stepping-stone (SS) analysis. We also performed model selection using a posterior simulation-based analog of the Akaike information criterion through Markov chain Monte Carlo analysis (AICM). Bayes factor species delimitation results from the empirical data were then compared with results from the reversible-jump MCMC (rjMCMC) coalescent-based species delimitation method Bayesian Phylogenetics and Phylogeography (BP&P). Simulation results show that HME and sHME perform poorly compared with PS and SS marginal-likelihood estimators when identifying the true species delimitation model. Furthermore, Bayes factor delimitation (BFD) of species showed improved performance when species limits are tested by reassigning individuals between species, as opposed to either lumping or splitting lineages. In the empirical data, BFD through PS and SS analyses, as well as the rjMCMC method, each provide support for the recognition of all scalaris group taxa as independent evolutionary lineages. Bayes factor species delimitation and BP&P also support the recognition of three previously undescribed lineages. In both simulated and empirical data sets, harmonic and smoothed harmonic mean marginal-likelihood estimators provided much higher marginal-likelihood estimates than PS and SS estimators. The AICM displayed poor repeatability in both simulated and empirical data sets, and produced inconsistent model rankings across replicate runs with the empirical data. Our results suggest that species delimitation through the use of Bayes factors with marginal-likelihood estimates via PS or SS analyses provide a useful and complementary alternative to existing species delimitation methods.
Wave-Based Algorithms and Bounds for Target Support Estimation
2015-05-15
vector electromagnetic formalism in [5]. This theory leads to three main variants of the optical theorem detector, in particular, three alternative...further expands the applicability for transient pulse change detection of arbitrary nonlinear-media and time-varying targets [9]. This report... electromagnetic methods a new methodology to estimate the minimum convex source region and the (possibly nonconvex) support of a scattering target from knowledge of
Philip Radtke; David Walker; Jereme Frank; Aaron Weiskittel; Clara DeYoung; David MacFarlane; Grant Domke; Christopher Woodall; John Coulston; James Westfall
2017-01-01
Accurate estimation of forest biomass and carbon stocks at regional to national scales is a key requirement in determining terrestrial carbon sources and sinks on United States (US) forest lands. To that end, comprehensive assessment and testing of alternative volume and biomass models were conducted for individual tree models employed in the component ratio method (...
Statistics of rain-rate estimates for a single attenuating radar
NASA Technical Reports Server (NTRS)
Meneghini, R.
1976-01-01
The effects of fluctuations in return power and the rain-rate/reflectivity relationship are included in the estimates, as well as errors introduced in the attempt to recover the unattenuated return power. In addition to the Hitschfeld-Bordan correction, two alternative techniques are considered. The performance of the radar is shown to be dependent on the method by which attenuation correction is made.
Verification of Sulfate Attack Penetration Rates for Saltstone Disposal Unit Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Flach, G. P.
Recent Special Analysis modeling of Saltstone Disposal Units considers sulfate attack on concrete and utilizes degradation rates estimated from Cementitious Barriers Partnership software simulations. This study provides an independent verification of those simulation results using an alternative analysis method and an independent characterization data source. The sulfate penetration depths estimated herein are similar to the best-estimate values in SRNL-STI-2013-00118 Rev. 2 and well below the nominal values subsequently used to define Saltstone Special Analysis base cases.
A Study of Alternative Quantile Estimation Methods in Newsboy-Type Problems
1980-03-01
decision maker selects to have on hand. The newsboy cost equation may be formulated as a two-piece continuous linear function in the following manner. C(S...number of observations, some approximations may be possible. Three points which are near each other can be assumed to be linear and some estimator using...respectively. Define the value r as: r = [nq + 0.5], where [X] denotes the largest integer of X. Let us consider an estimate of X as the linear
Field assessment of alternative bed-load transport estimators
Gaeuman, G.; Jacobson, R.B.
2007-01-01
Measurement of near-bed sediment velocities with acoustic Doppler current profilers (ADCPs) is an emerging approach for quantifying bed-load sediment fluxes in rivers. Previous investigations of the technique have relied on conventional physical bed-load sampling to provide reference transport information with which to validate the ADCP measurements. However, physical samples are subject to substantial errors, especially under field conditions in which surrogate methods are most needed. Comparisons of ADCP bed velocity measurements with bed-load transport rates estimated from bed-form migration rates in the lower Missouri River show a strong correlation between the two surrogate measures over a wide range of mild to moderately intense sediment transporting conditions. The correlation between the ADCP measurements and physical bed-load samples is comparatively poor, suggesting that physical bed-load sampling is ineffective for ground-truthing alternative techniques in large sand-bed rivers. Bed velocities measured in this study became more variable with increasing bed-form wavelength at higher shear stresses. Under these conditions, bed-form dimensions greatly exceed the region of the bed ensonified by the ADCP, and the magnitude of the acoustic measurements depends on instrument location with respect to bed-form crests and troughs. Alternative algorithms for estimating bed-load transport from paired longitudinal profiles of bed topography were evaluated. An algorithm based on the routing of local erosion and deposition volumes that eliminates the need to identify individual bed forms was found to give results similar to those of more conventional dune-tracking methods. This method is particularly useful in cases where complex bed-form morphology makes delineation of individual bed forms difficult. © 2007 ASCE.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Butler, J.J. Jr.; Hyder, Z.
The Nguyen and Pinder method is one of four techniques commonly used for analysis of response data from slug tests. Limited field research has raised questions about the reliability of the parameter estimates obtained with this method. A theoretical evaluation of this technique reveals that errors were made in the derivation of the analytical solution upon which the technique is based. Simulation and field examples show that the errors result in parameter estimates that can differ from actual values by orders of magnitude. These findings indicate that the Nguyen and Pinder method should no longer be a tool in the repertoire of the field hydrogeologist. If data from a slug test performed in a partially penetrating well in a confined aquifer need to be analyzed, recent work has shown that the Hvorslev method is the best alternative among the commonly used techniques.
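For reference, a hedged sketch of one common form of the Hvorslev analysis recommended above, assuming an intake length to radius ratio greater than about 8; all well dimensions and the time-lag value below are hypothetical.

```python
import math

r_c = 0.025    # casing radius [m] (assumed)
R   = 0.050    # effective radius of well screen plus gravel pack [m] (assumed)
L_e = 1.50     # effective intake (screen) length [m] (assumed)
t37 = 45.0     # basic time lag: time for normalized head to fall to 0.37 [s] (assumed)

# Hvorslev solution for L_e / R > 8: K = r_c^2 ln(L_e / R) / (2 L_e T0)
K = (r_c ** 2) * math.log(L_e / R) / (2.0 * L_e * t37)
print(f"hydraulic conductivity K = {K:.2e} m/s")
```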
Framework for the Texas Highway Cost Allocation Study
DOT National Transportation Integrated Search
2001-01-01
In fiscal year 1998, Texas spent $2.8 billion on the state-maintained road network, which includes the Interstate highways. This project estimates the contribution to these costs of different vehicle classes. Alternative methods of breaking down ('al...
Optical flow estimation on image sequences with differently exposed frames
NASA Astrophysics Data System (ADS)
Bengtsson, Tomas; McKelvey, Tomas; Lindström, Konstantin
2015-09-01
Optical flow (OF) methods are used to estimate dense motion information between consecutive frames in image sequences. In addition to the specific OF estimation method itself, the quality of the input image sequence is of crucial importance to the quality of the resulting flow estimates. For instance, lack of texture in image frames caused by saturation of the camera sensor during exposure can significantly deteriorate the performance. An approach to avoid this negative effect is to use different camera settings when capturing the individual frames. We provide a framework for OF estimation on such sequences that contain differently exposed frames. Information from multiple frames is combined into a total cost functional such that the lack of an active data term for saturated image areas is avoided. Experimental results demonstrate that using alternate camera settings to capture the full dynamic range of an underlying scene can clearly improve the quality of flow estimates. When saturation of image data is significant, the proposed methods show superior performance in terms of lower endpoint errors of the flow vectors compared to a set of baseline methods. Furthermore, we provide some qualitative examples of how and when our method should be used.
Effect of caffeine ingestion on anaerobic capacity quantified by different methods
Arcoverde, Lucyana; Silveira, Rodrigo; Tomazini, Fabiano; Sansonio, André; Bertuzzi, Romulo; Andrade-Souza, Victor Amorim
2017-01-01
We investigated whether caffeine ingestion before submaximal exercise bouts would affect supramaximal oxygen demand and maximal accumulated oxygen deficit (MAOD), and if caffeine-induced improvement on the anaerobic capacity (AC) could be detected by different methods. Nine men took part in several submaximal and supramaximal exercise bouts one hour after ingesting caffeine (5 mg·kg-1) or placebo. The AC was estimated by MAOD, alternative MAOD, critical power, and gross efficiency methods. Caffeine had no effect on exercise endurance during the supramaximal bout (caffeine: 131.3 ± 21.9 and placebo: 130.8 ± 20.8 s, P = 0.80). Caffeine ingestion before submaximal trials did not affect supramaximal oxygen demand and MAOD compared to placebo (7.88 ± 1.56 L and 65.80 ± 16.06 kJ vs. 7.89 ± 1.30 L and 62.85 ± 13.67 kJ, P = 0.99). Additionally, MAOD was similar between caffeine and placebo when supramaximal oxygen demand was estimated without caffeine effects during submaximal bouts (67.02 ± 16.36 and 62.85 ± 13.67 kJ, P = 0.41) or when estimated by alternative MAOD (56.61 ± 8.49 and 56.87 ± 9.76 kJ, P = 0.91). The AC estimated by gross efficiency was also similar between caffeine and placebo (21.80 ± 3.09 and 20.94 ± 2.67 kJ, P = 0.15), but was lower in caffeine when estimated by critical power method (16.2 ± 2.6 vs. 19.3 ± 3.5 kJ, P = 0.03). In conclusion, caffeine ingestion before submaximal bouts did not affect supramaximal oxygen demand and consequently MAOD. Otherwise, caffeine seems to have no clear positive effect on AC. PMID:28617848
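A hedged sketch of the conventional MAOD arithmetic described above: extrapolate the submaximal VO2-power relationship to the supramaximal workload, multiply by the time to exhaustion to obtain the total oxygen demand, and subtract the accumulated oxygen uptake. All numbers and the 20.9 kJ per litre O2 energy equivalent are assumptions for illustration, not the study's data.

```python
import numpy as np

power_sub = np.array([100, 130, 160, 190, 220])        # submaximal workloads [W] (assumed)
vo2_sub   = np.array([1.6, 2.0, 2.4, 2.8, 3.2])        # steady-state VO2 [L/min] (assumed)

slope, intercept = np.polyfit(power_sub, vo2_sub, 1)   # linear VO2-power relation

power_supra = 330.0                                    # supramaximal workload [W] (assumed)
t_exh_min   = 131.3 / 60.0                             # time to exhaustion [min] (assumed)
vo2_demand  = slope * power_supra + intercept          # extrapolated demand [L/min]
o2_demand   = vo2_demand * t_exh_min                   # total O2 demand [L]
o2_uptake   = 6.9                                      # accumulated VO2 measured [L] (assumed)

maod_litres = o2_demand - o2_uptake                    # accumulated O2 deficit
maod_kj = maod_litres * 20.9                           # assumed energy equivalent [kJ per L O2]
print(f"MAOD = {maod_litres:.2f} L O2 = {maod_kj:.1f} kJ")
```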
Improved remote gaze estimation using corneal reflection-adaptive geometric transforms
NASA Astrophysics Data System (ADS)
Ma, Chunfei; Baek, Seung-Jin; Choi, Kang-A.; Ko, Sung-Jea
2014-05-01
Recently, the remote gaze estimation (RGE) technique has been widely applied to consumer devices as a more natural interface. In general, the conventional RGE method estimates a user's point of gaze using a geometric transform, which represents the relationship between several infrared (IR) light sources and their corresponding corneal reflections (CRs) in the eye image. Among various methods, the homography normalization (HN) method achieves state-of-the-art performance. However, the geometric transform of the HN method requiring four CRs is infeasible for the case when fewer than four CRs are available. To solve this problem, this paper proposes a new RGE method based on three alternative geometric transforms, which are adaptive to the number of CRs. Unlike the HN method, the proposed method not only can operate with two or three CRs, but can also provide superior accuracy. To further enhance the performance, an effective error correction method is also proposed. By combining the introduced transforms with the error-correction method, the proposed method not only provides high accuracy and robustness for gaze estimation, but also allows for a more flexible system setup with a different number of IR light sources. Experimental results demonstrate the effectiveness of the proposed method.
NASA Astrophysics Data System (ADS)
Zeng, X.
2015-12-01
A large number of model executions are required to obtain alternative conceptual models' predictions and their posterior probabilities in Bayesian model averaging (BMA). The posterior model probability is estimated through a model's marginal likelihood and prior probability. The heavy computational burden hinders the implementation of BMA prediction, especially for elaborate marginal likelihood estimators. To overcome the computational burden of BMA, an adaptive sparse grid (SG) stochastic collocation method is used to build surrogates for alternative conceptual models through a numerical experiment with a synthetic groundwater model. BMA predictions depend on model posterior weights (or marginal likelihoods), and this study also evaluated four marginal likelihood estimators, including the arithmetic mean estimator (AME), harmonic mean estimator (HME), stabilized harmonic mean estimator (SHME), and thermodynamic integration estimator (TIE). The results demonstrate that TIE is accurate in estimating conceptual models' marginal likelihoods. The BMA-TIE prediction has better predictive performance than the other BMA predictions. TIE is highly stable for estimating a conceptual model's marginal likelihood: the marginal likelihoods repeatedly estimated by TIE show significantly less variability than those estimated by the other estimators. In addition, the SG surrogates are efficient in facilitating BMA predictions, especially for BMA-TIE. The number of model executions needed for building the surrogates is 4.13%, 6.89%, 3.44%, and 0.43% of the model executions required by BMA-AME, BMA-HME, BMA-SHME, and BMA-TIE, respectively.
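A minimal sketch of two of the simpler marginal-likelihood estimators mentioned above (AME from prior samples, HME from posterior samples) and of turning per-model marginal likelihoods into BMA weights; the stabilized harmonic mean and thermodynamic integration estimators are not shown, and the function names are assumptions.

```python
import numpy as np
from scipy.special import logsumexp

def log_ml_ame(loglik_prior_samples):
    """Arithmetic mean estimator: average likelihood over samples drawn from the prior."""
    n = len(loglik_prior_samples)
    return logsumexp(loglik_prior_samples) - np.log(n)

def log_ml_hme(loglik_posterior_samples):
    """Harmonic mean estimator: harmonic mean of likelihoods over posterior samples."""
    x = np.asarray(loglik_posterior_samples)
    return np.log(len(x)) - logsumexp(-x)

def bma_weights(log_mls, prior_probs):
    """Posterior model weights from per-model log marginal likelihoods and prior probabilities."""
    log_w = np.asarray(log_mls) + np.log(np.asarray(prior_probs))
    return np.exp(log_w - logsumexp(log_w))
```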
Estimating pharmacy level prescription drug acquisition costs for third-party reimbursement.
Kreling, D H; Kirk, K W
1986-07-01
Accurate payment for the acquisition costs of drug products dispensed is an important consideration in a third-party prescription drug program. Two alternative methods of estimating these costs among pharmacies were derived and compared. First, pharmacists were surveyed to determine the purchase discounts offered to them by wholesalers. The 73 responding pharmacists reported a modal discount of 10.00% and a mean discount of 11.35%. Second, cost-plus percents derived from gross profit margins of wholesalers were calculated and applied to wholesaler product costs to estimate pharmacy-level acquisition costs. Cost-plus percents derived from National Median and Southwestern Region wholesaler figures were 9.27% and 10.10%, respectively. A comparison showed that the two methods of estimating acquisition costs would result in similar acquisition cost estimates. Adopting a cost-plus estimating approach is recommended because it avoids potential pricing manipulations by wholesalers and manufacturers that would negate improvements in drug product reimbursement accuracy.
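The cost-plus arithmetic is simple; the sketch below shows how a cost-plus percent would be applied to a wholesaler product cost. The margin-to-markup conversion is included only as an assumption about how such percents might be derived from gross profit margins, not as the paper's stated procedure.

```python
def markup_from_gross_margin(margin):
    """Convert a gross margin (share of selling price) into a markup on cost (assumed derivation)."""
    return margin / (1.0 - margin)

def estimated_acquisition_cost(wholesaler_product_cost, cost_plus_pct):
    """Apply a cost-plus percent to the wholesaler's product cost."""
    return wholesaler_product_cost * (1.0 + cost_plus_pct)

# $10 wholesaler cost at the 9.27% National Median cost-plus figure -> about $10.93
print(estimated_acquisition_cost(10.00, 0.0927))
```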
Health insurance and use of alternative medicine in Mexico
van Gameren, Edwin
2014-01-01
Objectives I analyze the effect of coverage by health insurance on the use of alternative medicine such as folk healers and homeopaths, in particular if it complements or substitutes conventional services. Methods Panel data from the Mexican Health and Aging Study (MHAS) is used to estimate bivariate probit models in order to explain the use of alternative medicine while allowing the determinant of interest, access to health insurance, to be an endogenous factor. Results The findings indicate that households with insurance coverage less often use alternative medicine, and that the effect is much stronger among poor than among rich households. Conclusions Poor households substitute away from traditional medicine towards conventional medicine. PMID:20546965
Combined Estimation of Hydrogeologic Conceptual Model and Parameter Uncertainty
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meyer, Philip D.; Ye, Ming; Neuman, Shlomo P.
2004-03-01
The objective of the research described in this report is the development and application of a methodology for comprehensively assessing the hydrogeologic uncertainties involved in dose assessment, including uncertainties associated with conceptual models, parameters, and scenarios. This report describes and applies a statistical method to quantitatively estimate the combined uncertainty in model predictions arising from conceptual model and parameter uncertainties. The method relies on model averaging to combine the predictions of a set of alternative models. Implementation is driven by the available data. When there is minimal site-specific data the method can be carried out with prior parameter estimates based on generic data and subjective prior model probabilities. For sites with observations of system behavior (and optionally data characterizing model parameters), the method uses model calibration to update the prior parameter estimates and model probabilities based on the correspondence between model predictions and site observations. The set of model alternatives can contain both simplified and complex models, with the requirement that all models be based on the same set of data. The method was applied to the geostatistical modeling of air permeability at a fractured rock site. Seven alternative variogram models of log air permeability were considered to represent data from single-hole pneumatic injection tests in six boreholes at the site. Unbiased maximum likelihood estimates of variogram and drift parameters were obtained for each model. Standard information criteria provided an ambiguous ranking of the models, which would not justify selecting one of them and discarding all others as is commonly done in practice. Instead, some of the models were eliminated based on their negligibly small updated probabilities and the rest were used to project the measured log permeabilities by kriging onto a rock volume containing the six boreholes. These four projections, and associated kriging variances, were averaged using the posterior model probabilities as weights. Finally, cross-validation was conducted by eliminating from consideration all data from one borehole at a time, repeating the above process, and comparing the predictive capability of the model-averaged result with that of each individual model. Using two quantitative measures of comparison, the model-averaged result was superior to any individual geostatistical model of log permeability considered.
Rosenberry, D.O.; Sturrock, A.M.; Winter, T.C.
1993-01-01
Best estimates of evaporation at Williams Lake, north central Minnesota, were determined by the energy budget method using optimum sensors and optimum placement of sensors. These best estimates are compared with estimates derived from using substitute data to determine the effect of using less accurate sensors, simpler methods, or remotely measured data. Calculations were made for approximately biweekly periods during five open water seasons. For most of the data substitutions that affected the Bowen ratio, new values of evaporation differed little from the best estimates. The three data substitution methods that caused the largest deviations from the best evaporation estimates were (1) using changes in the daily average surface water temperature as an indicator of the lake heat storage term, (2) using shortwave radiation, air temperature, and atmospheric vapor pressure data from a site 110 km away, and (3) using an analog surface water temperature probe. Recalculations based on these data substitutions resulted in differences from the best estimates of as much as 89%, 21%, and 10%, respectively. The data substitution method that provided evaporation values that most closely matched the best estimates was measurement of the lake heat storage term at one location in the lake, rather than at 16 locations. Evaporation values resulting from this substitution method usually were within 2% of the best estimates.
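A hedged, single-period sketch of the Bowen-ratio energy-budget calculation underlying such estimates; a full lake energy budget also includes terms (for example, advected energy) omitted here, and all numerical values and constants below are assumptions.

```python
gamma = 0.066          # psychrometric constant [kPa/degC] (assumed)
lam   = 2.45           # latent heat of vaporization [MJ/kg] (assumed)

Rn     = 14.0          # net radiation [MJ m-2 d-1] (assumed)
dS     = 3.0           # change in lake heat storage [MJ m-2 d-1] (assumed)
Ts, Ta = 22.0, 18.0    # surface-water and air temperature [degC] (assumed)
es, ea = 2.64, 1.60    # vapor pressure at the surface and in the air [kPa] (assumed)

bowen = gamma * (Ts - Ta) / (es - ea)          # Bowen ratio
E_mm  = (Rn - dS) / (lam * (1.0 + bowen))      # evaporation [mm/day] (1 kg m-2 = 1 mm of water)
print(f"Bowen ratio = {bowen:.2f}, evaporation = {E_mm:.1f} mm/day")
```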
Using a Linear Regression Method to Detect Outliers in IRT Common Item Equating
ERIC Educational Resources Information Center
He, Yong; Cui, Zhongmin; Fang, Yu; Chen, Hanwei
2013-01-01
Common test items play an important role in equating alternate test forms under the common item nonequivalent groups design. When the item response theory (IRT) method is applied in equating, inconsistent item parameter estimates among common items can lead to large bias in equated scores. It is prudent to evaluate inconsistency in parameter…
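A hedged sketch of a regression-residual screen in the spirit of the title: regress common-item difficulty estimates from the new form on those from the old form and inspect standardized residuals. The difficulty values are hypothetical, and any flagging threshold (for example, |z| > 2) is a practical judgment call rather than part of the method as stated here.

```python
import numpy as np

# Hypothetical common-item difficulty estimates on the old and new forms
b_old = np.array([-1.2, -0.6, -0.1, 0.3, 0.8, 1.4, 2.0, -0.9])
b_new = np.array([-1.1, -0.5, -0.2, 0.4, 0.9, 1.5, 0.9, -0.8])

slope, intercept = np.polyfit(b_old, b_new, 1)   # regress new-form on old-form estimates
resid = b_new - (slope * b_old + intercept)
z = resid / resid.std(ddof=1)                    # standardized residuals

for i, zi in enumerate(z):
    print(f"item {i}: standardized residual = {zi:+.2f}")
# Items with unusually large |z| are candidates for removal from the common-item set.
```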
Alternative method to validate the seasonal land cover regions of the conterminous United States
Zhiliang Zhu; Donald O. Ohlen; Raymond L. Czaplewski; Robert E. Burgan
1996-01-01
An accuracy assessment method involving double sampling and the multivariate composite estimator has been used to validate the prototype seasonal land cover characteristics database of the conterminous United States. The database consists of 159 land cover classes, classified using time series of 1990 1-km satellite data and augmented with ancillary data including...
Boundary point corrections for variable radius plots - simulation results
Margaret Penner; Sam Otukol
2000-01-01
The boundary plot problem is encountered when a forest inventory plot includes two or more forest conditions. Depending on the correction method used, the resulting estimates can be biased. The various correction alternatives are reviewed. No correction, area correction, half sweep, and toss-back methods are evaluated using simulation on an actual data set. Based on...
Assessing the Reliability of Curriculum-Based Measurement: An Application of Latent Growth Modeling
ERIC Educational Resources Information Center
Yeo, Seungsoo; Kim, Dong-Il; Branum-Martin, Lee; Wayman, Miya Miura; Espin, Christine A.
2012-01-01
The purpose of this study was to demonstrate the use of Latent Growth Modeling (LGM) as a method for estimating reliability of Curriculum-Based Measurement (CBM) progress-monitoring data. The LGM approach permits the error associated with each measure to differ at each time point, thus providing an alternative method for examining the…
Nunes, Sheila Elke Araujo; Minamisava, Ruth; Vieira, Maria Aparecida da Silva; Itria, Alexander; Pessoa, Vicente Porfirio; de Andrade, Ana Lúcia Sampaio Sgambatti; Toscano, Cristiana Maria
2017-01-01
ABSTRACT Objective To determine and compare hospitalization costs of bacterial community-acquired pneumonia cases via different costing methods under the Brazilian Public Unified Health System perspective. Methods Cost-of-illness study based on primary data collected from a sample of 59 children aged between 28 days and 35 months and hospitalized due to bacterial pneumonia. Direct medical and non-medical costs were considered and three costing methods employed: micro-costing based on medical record review, micro-costing based on therapeutic guidelines and gross-costing based on the Brazilian Public Unified Health System reimbursement rates. Cost estimates obtained via the different methods were compared using the Friedman test. Results Cost estimates of inpatient cases of severe pneumonia amounted to R$ 780,70/$Int. 858.7 (medical record review), R$ 641,90/$Int. 706.90 (therapeutic guidelines) and R$ 594,80/$Int. 654.28 (Brazilian Public Unified Health System reimbursement rates). Costs estimated via micro-costing (medical record review or therapeutic guidelines) did not differ significantly (p=0.405), while estimates based on reimbursement rates were significantly lower compared to estimates based on therapeutic guidelines (p<0.001) or record review (p=0.006). Conclusion Brazilian Public Unified Health System costs estimated via different costing methods differ significantly, with gross-costing yielding lower cost estimates. Given that costs estimated by different micro-costing methods are similar, and that costing methods based on therapeutic guidelines are easier to apply and less expensive, this method may be a valuable alternative for estimation of hospitalization costs of bacterial community-acquired pneumonia in children. PMID:28767921
Gross, S; Janssen, S W J; de Vries, B; Terao, E; Daas, A; Buchheit, K-H
2010-07-01
An international collaborative study to validate 2 alternative in vitro methods for the potency testing of human tetanus immunoglobulin products was organised by the European Directorate for the Quality of Medicines & HealthCare (EDQM). The study, run in the framework of the Biological Standardisation Programme (BSP) under the aegis of the European Commission and the Council of Europe, involved 21 official medicines control and industry laboratories from 15 countries. Both methods, an enzyme-linked immunoassay (EIA) and a toxoid inhibition assay (TIA), showed good reproducibility, repeatability and precision. EIA and TIA discriminated between low, medium and high potency samples. Potency estimates correlated well and both values were in close agreement with those obtained by in vivo methods. Moreover, these alternative methods made it possible to resolve discrepant results between laboratories that were due to product potency loss and reporting errors. The study demonstrated that EIA and TIA are suitable quality control methods for tetanus immunoglobulin, which can be standardised in a control laboratory using a quality assurance system. Consequently, the Group of Experts on Human Blood and Blood Products of the European Pharmacopoeia revised the monograph on human tetanus immunoglobulins to include both methods as compendial alternatives to the in vivo mouse challenge assay. 2010 The International Association for Biologicals. Published by Elsevier Ltd. All rights reserved.
Adkison, Milo D.; Peterman, R.M.
1996-01-01
Bayesian methods have been proposed to estimate optimal escapement goals, using both knowledge about physical determinants of salmon productivity and stock-recruitment data. The Bayesian approach has several advantages over many traditional methods for estimating stock productivity: it allows integration of information from diverse sources and provides a framework for decision-making that takes into account uncertainty reflected in the data. However, results can be critically dependent on details of implementation of this approach. For instance, unintended and unwarranted confidence about stock-recruitment relationships can arise if the range of relationships examined is too narrow, if too few discrete alternatives are considered, or if data are contradictory. This unfounded confidence can result in a suboptimal choice of a spawning escapement goal.
QUANTIFYING ALTERNATIVE SPLICING FROM PAIRED-END RNA-SEQUENCING DATA.
Rossell, David; Stephan-Otto Attolini, Camille; Kroiss, Manuel; Stöcker, Almond
2014-03-01
RNA-sequencing has revolutionized biomedical research and, in particular, our ability to study gene alternative splicing. The problem has important implications for human health, as alternative splicing may be involved in malfunctions at the cellular level and multiple diseases. However, the high-dimensional nature of the data and the existence of experimental biases pose serious data analysis challenges. We find that the standard data summaries used to study alternative splicing are severely limited, as they ignore a substantial amount of valuable information. Current data analysis methods are based on such summaries and are hence sub-optimal. Further, they have limited flexibility in accounting for technical biases. We propose novel data summaries and a Bayesian modeling framework that overcome these limitations and determine biases in a non-parametric, highly flexible manner. These summaries adapt naturally to the rapid improvements in sequencing technology. We provide efficient point estimates and uncertainty assessments. The approach allows alternative splicing patterns to be studied for individual samples and can also be the basis for downstream analyses. We found a several-fold improvement in estimation mean squared error compared with popular approaches in simulations, and substantially higher consistency between replicates in experimental data. Our findings indicate the need for adjusting the routine summarization and analysis of alternative splicing RNA-seq studies. We provide a software implementation in the R package casper.
Exponential series approaches for nonparametric graphical models
NASA Astrophysics Data System (ADS)
Janofsky, Eric
Markov Random Fields (MRFs) or undirected graphical models are parsimonious representations of joint probability distributions. This thesis studies high-dimensional, continuous-valued pairwise Markov Random Fields. We are particularly interested in approximating pairwise densities whose logarithm belongs to a Sobolev space. For this problem we propose the method of exponential series which approximates the log density by a finite-dimensional exponential family with the number of sufficient statistics increasing with the sample size. We consider two approaches to estimating these models. The first is regularized maximum likelihood. This involves optimizing the sum of the log-likelihood of the data and a sparsity-inducing regularizer. We then propose a variational approximation to the likelihood based on tree-reweighted, nonparametric message passing. This approximation allows for upper bounds on risk estimates, leverages parallelization and is scalable to densities on hundreds of nodes. We show how the regularized variational MLE may be estimated using a proximal gradient algorithm. We then consider estimation using regularized score matching. This approach uses an alternative scoring rule to the log-likelihood, which obviates the need to compute the normalizing constant of the distribution. For general continuous-valued exponential families, we provide parameter and edge consistency results. As a special case we detail a new approach to sparse precision matrix estimation which has statistical performance competitive with the graphical lasso and computational performance competitive with the state-of-the-art glasso algorithm. We then describe results for model selection in the nonparametric pairwise model using exponential series. The regularized score matching problem is shown to be a convex program; we provide scalable algorithms based on consensus alternating direction method of multipliers (ADMM) and coordinate-wise descent. We use simulations to compare our method to others in the literature as well as the aforementioned TRW estimator.
Seiger, Rene; Ganger, Sebastian; Kranz, Georg S; Hahn, Andreas; Lanzenberger, Rupert
2018-05-15
Automated cortical thickness (CT) measurements are often used to assess gray matter changes in the healthy and diseased human brain. The FreeSurfer software is frequently applied for this type of analysis. The computational anatomy toolbox (CAT12) for SPM, which offers a fast and easy-to-use alternative approach, was recently made available. In this study, we compared region of interest (ROI)-wise CT estimations of the surface-based FreeSurfer 6 (FS6) software and the volume-based CAT12 toolbox for SPM using 44 elderly healthy female control subjects (HC). In addition, these 44 HCs from the cross-sectional analysis and 34 age- and sex-matched patients with Alzheimer's disease (AD) were used to assess the potential of detecting group differences for each method. Finally, a test-retest analysis was conducted using 19 HC subjects. All data were taken from the OASIS database and MRI scans were recorded at 1.5 Tesla. A strong correlation was observed between both methods in terms of ROI mean CT estimates (R 2 = .83). However, CAT12 delivered significantly higher CT estimations in 32 of the 34 ROIs, indicating a systematic difference between both approaches. Furthermore, both methods were able to reliably detect atrophic brain areas in AD subjects, with the highest decreases in temporal areas. Finally, FS6 as well as CAT12 showed excellent test-retest variability scores. Although CT estimations were systematically higher for CAT12, this study provides evidence that this new toolbox delivers accurate and robust CT estimates and can be considered a fast and reliable alternative to FreeSurfer. © 2018 The Authors. Journal of Neuroimaging published by Wiley Periodicals, Inc. on behalf of American Society of Neuroimaging.
Jackson, Dan; White, Ian R; Riley, Richard D
2013-01-01
Multivariate meta-analysis is becoming more commonly used. Methods for fitting the multivariate random effects model include maximum likelihood, restricted maximum likelihood, Bayesian estimation and multivariate generalisations of the standard univariate method of moments. Here, we provide a new multivariate method of moments for estimating the between-study covariance matrix with the properties that (1) it allows for either complete or incomplete outcomes and (2) it allows for covariates through meta-regression. Further, for complete data, it is invariant to linear transformations. Our method reduces to the usual univariate method of moments, proposed by DerSimonian and Laird, in a single dimension. We illustrate our method and compare it with some of the alternatives using a simulation study and a real example. PMID:23401213
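For context, a minimal sketch of the univariate DerSimonian and Laird moment estimator to which the proposed multivariate method reduces in one dimension; the multivariate, meta-regression, and incomplete-outcome extensions are not shown.

```python
import numpy as np

def dersimonian_laird_tau2(y, v):
    """Univariate DerSimonian-Laird moment estimate of between-study variance tau^2.

    y : study effect estimates; v : within-study variances.
    """
    y, v = np.asarray(y, float), np.asarray(v, float)
    w = 1.0 / v
    ybar = np.sum(w * y) / np.sum(w)            # fixed-effect weighted mean
    Q = np.sum(w * (y - ybar) ** 2)             # Cochran's Q
    k = len(y)
    denom = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    return max(0.0, (Q - (k - 1)) / denom)      # truncated at zero
```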
Assessment of the potential future market in Sweden for hydrogen as an energy carrier
NASA Astrophysics Data System (ADS)
Carleson, G.
Future hydrogen markets for the period 1980-2025 are projected, the probable range of hydrogen production costs for various manufacturing methods is estimated, and expected market shares in competition with alternative energy carriers are evaluated. A general scenario for economic and industrial development in Sweden for the given period was evaluated, showing the average increase in gross national product to be 1.6% per year. Three different energy scenarios were then developed: alternatives were based on nuclear energy, renewable indigenous energy sources, and the present energy situation with free access to imported natural or synthetic fuels. An analysis was made within each scenario of the competitiveness of hydrogen on both the demand and supply sides of the following sectors: chemical industry, steel industry, peak power production, residential and commercial heating, and transportation. Costs were calculated for the production, storage and transmission of hydrogen according to technically feasible methods and were compared to those of alternative energy carriers. Health, environmental and societal implications were also considered. The market penetration of hydrogen in each sector was estimated, and the required investment capital was shown to be less than 4% of the national gross investment sum.
NASA Technical Reports Server (NTRS)
Chelton, Dudley B.; Schlax, Michael G.
1991-01-01
The sampling error of an arbitrary linear estimate of a time-averaged quantity constructed from a time series of irregularly spaced observations at a fixed location is quantified through a formalism. The method is applied to satellite observations of chlorophyll from the coastal zone color scanner. The two specific linear estimates under consideration are the composite average formed from the simple average of all observations within the averaging period and the optimal estimate formed by minimizing the mean squared error of the temporal average based on all the observations in the time series. The resulting suboptimal estimates are shown to be more accurate than composite averages. Suboptimal estimates are also found to be nearly as accurate as optimal estimates using the correct signal and measurement error variances and correlation functions for realistic ranges of these parameters, which makes it a viable practical alternative to the composite average method generally employed at present.
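A hedged sketch of the comparison described above for a single averaging period: the composite average weights all observations equally, while the minimum mean-squared-error linear estimate solves a small linear system built from an assumed signal autocorrelation and measurement-error variance. All numerical values and the exponential correlation form are assumptions, not the paper's chlorophyll statistics.

```python
import numpy as np

T = 30.0                                          # averaging period [days] (assumed)
t_obs = np.array([2.0, 5.5, 13.0, 21.0, 28.5])    # irregular observation times (assumed)
sig2_s, sig2_e, tau0 = 1.0, 0.25, 8.0             # signal var, error var, decorrelation scale (assumed)

rho = lambda dt: np.exp(-np.abs(dt) / tau0)       # assumed signal autocorrelation

# Covariance among observations (signal plus uncorrelated measurement error)
C = sig2_s * rho(t_obs[:, None] - t_obs[None, :]) + sig2_e * np.eye(len(t_obs))

# Covariance between each observation and the time-averaged signal, integrated numerically
tt = np.linspace(0.0, T, 2001)
c = sig2_s * np.trapz(rho(t_obs[:, None] - tt[None, :]), tt, axis=1) / T

w_opt = np.linalg.solve(C, c)                     # optimal (minimum-MSE) weights
w_comp = np.full(len(t_obs), 1.0 / len(t_obs))    # composite-average weights

# Expected mean squared error of a linear estimate w'y of the true time average
var_avg = sig2_s * np.trapz(np.trapz(rho(tt[:, None] - tt[None, :]), tt, axis=1), tt) / T**2
mse = lambda w: var_avg - 2 * w @ c + w @ C @ w
print("MSE composite:", mse(w_comp), " MSE optimal:", mse(w_opt))
```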
Improving Non-Destructive Concrete Strength Tests Using Support Vector Machines
Shih, Yi-Fan; Wang, Yu-Ren; Lin, Kuo-Liang; Chen, Chin-Wen
2015-01-01
Non-destructive testing (NDT) methods are important alternatives when destructive tests are not feasible to examine in situ concrete properties without damaging the structure. The rebound hammer test and the ultrasonic pulse velocity test are two popular NDT methods for examining the properties of concrete. The rebound of the hammer depends on the hardness of the test specimen, and the ultrasonic pulse travelling speed is related to the density, uniformity, and homogeneity of the specimen. Both methods have been adopted to estimate the concrete compressive strength. Statistical analysis has been implemented to establish the relationship between hammer rebound values/ultrasonic pulse velocities and concrete compressive strength. However, the estimated results can be unreliable. As a result, this research proposes an artificial intelligence model using support vector machines (SVMs) for the estimation. Data from 95 cylinder concrete samples are collected to develop and validate the model. The results show that combined NDT methods (also known as the SonReb method) yield better estimations than single NDT methods. The results also show that the SVM model is more accurate than the statistical regression model. PMID:28793627
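A minimal sketch of a SonReb-style SVM regression of compressive strength on rebound number and ultrasonic pulse velocity, in the spirit of the model described above; the data values and hyperparameters are assumptions, not the study's 95-cylinder dataset.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Hypothetical training data: [rebound number, ultrasonic pulse velocity in km/s]
X = np.array([[30, 3.8], [34, 4.0], [38, 4.2], [42, 4.4], [46, 4.6], [50, 4.7]], float)
y = np.array([22.0, 27.0, 32.0, 38.0, 44.0, 50.0])     # measured compressive strengths [MPa]

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100.0, epsilon=1.0))
model.fit(X, y)
print(model.predict([[40, 4.3]]))                      # strength estimate for a new specimen
```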
Zhang, Zhiyong; Yuan, Ke-Hai
2015-01-01
Cronbach’s coefficient alpha is a widely used reliability measure in social, behavioral, and education sciences. It is reported in nearly every study that involves measuring a construct through multiple items. With non-tau-equivalent items, McDonald’s omega has been used as a popular alternative to alpha in the literature. Traditional estimation methods for alpha and omega often implicitly assume that data are complete and normally distributed. This study proposes robust procedures to estimate both alpha and omega as well as corresponding standard errors and confidence intervals from samples that may contain potential outlying observations and missing values. The influence of outlying observations and missing data on the estimates of alpha and omega is investigated through two simulation studies. Results show that the newly developed robust method yields substantially improved alpha and omega estimates as well as better coverage rates of confidence intervals than the conventional nonrobust method. An R package coefficientalpha is developed and demonstrated to obtain robust estimates of alpha and omega. PMID:29795870
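For reference, the conventional (non-robust, complete-data) Cronbach's alpha that the proposed robust procedures improve upon can be computed directly from an item-score matrix; McDonald's omega additionally requires factor loadings from a one-factor model and is not shown here. The toy data below are assumptions.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha from an (n_persons x k_items) score matrix with complete data."""
    items = np.asarray(items, float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)            # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)        # variance of the total score
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Toy example: 5 respondents, 3 items (hypothetical Likert-type scores)
scores = np.array([[3, 4, 3], [2, 2, 3], [4, 5, 4], [1, 2, 2], [5, 4, 5]])
print(round(cronbach_alpha(scores), 3))
```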
A less field-intensive robust design for estimating demographic parameters with Mark-resight data
McClintock, B.T.; White, Gary C.
2009-01-01
The robust design has become popular among animal ecologists as a means for estimating population abundance and related demographic parameters with mark-recapture data. However, two drawbacks of traditional mark-recapture are financial cost and repeated disturbance to animals. Mark-resight methodology may in many circumstances be a less expensive and less invasive alternative to mark-recapture, but the models developed to date for these data have overwhelmingly concentrated only on the estimation of abundance. Here we introduce a mark-resight model analogous to that used in mark-recapture for the simultaneous estimation of abundance, apparent survival, and transition probabilities between observable and unobservable states. The model may be implemented using standard statistical computing software, but it has also been incorporated into the freeware package Program MARK. We illustrate the use of our model with mainland New Zealand Robin (Petroica australis) data collected to ascertain whether this methodology may be a reliable alternative for monitoring endangered populations of a closely related species inhabiting the Chatham Islands. We found this method to be a viable alternative to traditional mark-recapture when cost or disturbance to species is of particular concern in long-term population monitoring programs. © 2009 by the Ecological Society of America.
Robust estimation procedure in panel data model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shariff, Nurul Sima Mohamad; Hamzah, Nor Aishah
2014-06-19
Panel data modeling has received great attention in econometric research recently. This is due to the availability of data sources and the interest in studying cross sections of individuals observed over time. However, problems may arise in modeling the panel in the presence of cross-sectional dependence and outliers. Even though a few methods take the presence of cross-sectional dependence in the panel into consideration, these methods may provide inconsistent parameter estimates and inferences when outliers occur in the panel. As such, an alternative method that is robust to outliers and cross-sectional dependence is introduced in this paper. The properties and construction of the confidence interval for the parameter estimates are also considered in this paper. The robustness of the procedure is investigated and comparisons are made to the existing method via simulation studies. Our results show that the robust approach is able to produce accurate and reliable parameter estimates under the conditions considered.
Nonparametric estimation and testing of fixed effects panel data models
Henderson, Daniel J.; Carroll, Raymond J.; Li, Qi
2009-01-01
In this paper we consider the problem of estimating nonparametric panel data models with fixed effects. We introduce an iterative nonparametric kernel estimator. We also extend the estimation method to the case of a semiparametric partially linear fixed effects model. To determine whether a parametric, semiparametric or nonparametric model is appropriate, we propose test statistics to test between the three alternatives in practice. We further propose a test statistic for testing the null hypothesis of random effects against fixed effects in a nonparametric panel data regression model. Simulations are used to examine the finite sample performance of the proposed estimators and the test statistics. PMID:19444335
NASA Astrophysics Data System (ADS)
Asfahani, Jamal
2017-08-01
An alternative approach using nuclear neutron-porosity and electrical resistivity well logging with long (64 inch) and short (16 inch) normal techniques is proposed to estimate the porosity and the hydraulic conductivity (K) of the basaltic aquifers in Southern Syria. This method is applied to the available logs of the Kodana well in Southern Syria. The K value obtained by applying this technique is reasonable and comparable with the hydraulic conductivity value of 3.09 m/day obtained by the pumping test carried out at the Kodana well. The proposed alternative well logging methodology appears promising and could be applied in basaltic environments for the estimation of the hydraulic conductivity parameter. However, more detailed research is still required before the proposed technique can be considered well established in basaltic environments.
NASA Astrophysics Data System (ADS)
Khader, A. I.; Rosenberg, D. E.; McKee, M.
2013-05-01
Groundwater contaminated with nitrate poses a serious health risk to infants when this contaminated water is used for culinary purposes. To avoid this health risk, people need to know whether their culinary water is contaminated or not. Therefore, there is a need to design an effective groundwater monitoring network, acquire information on groundwater conditions, and use acquired information to inform management options. These actions require time, money, and effort. This paper presents a method to estimate the value of information (VOI) provided by a groundwater quality monitoring network located in an aquifer whose water poses a spatially heterogeneous and uncertain health risk. A decision tree model describes the structure of the decision alternatives facing the decision-maker and the expected outcomes from these alternatives. The alternatives include (i) ignore the health risk of nitrate-contaminated water, (ii) switch to alternative water sources such as bottled water, or (iii) implement a previously designed groundwater quality monitoring network that takes into account uncertainties in aquifer properties, contaminant transport processes, and climate (Khader, 2012). The VOI is estimated as the difference between the expected costs of implementing the monitoring network and the lowest-cost uninformed alternative. We illustrate the method for the Eocene Aquifer, West Bank, Palestine, where methemoglobinemia (blue baby syndrome) is the main health problem associated with the principal contaminant nitrate. The expected cost of each alternative is estimated as the weighted sum of the costs and probabilities (likelihoods) associated with the uncertain outcomes resulting from the alternative. Uncertain outcomes include actual nitrate concentrations in the aquifer, concentrations reported by the monitoring system, whether people abide by manager recommendations to use/not use aquifer water, and whether people get sick from drinking contaminated water. Outcome costs include healthcare for methemoglobinemia, purchase of bottled water, and installation and maintenance of the groundwater monitoring system. At current methemoglobinemia and bottled water costs of $150/person and $0.6/baby/day, the decision tree results show that the expected cost of establishing the proposed groundwater quality monitoring network exceeds the expected costs of the uninformed alternatives and there is no value to the information the monitoring system provides. However, the monitoring system will be preferred to ignoring the health risk or using alternative sources if the methemoglobinemia cost rises to $300/person or the bottled water cost increases to $2.3/baby/day. Similarly, the monitoring system has value if the system can more accurately report actual aquifer concentrations and the public more fully abides by manager recommendations to use/not use the aquifer. The system also has value if it will serve a larger population or if its installation costs can be reduced, for example using a smaller number of monitoring wells. The VOI analysis shows how monitoring system design, accuracy, installation and operating costs, public awareness of health risks, costs of alternatives, and demographics together affect the value of implementing a system to monitor groundwater quality.
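The expected-cost comparison driving the VOI calculation can be illustrated with a toy decision tree. In the sketch below, only the $150/person illness cost and the $0.6/baby/day bottled water cost echo the abstract; every probability, the served population, and the monitoring-system cost are hypothetical placeholders, and the real decision tree contains many more branches (reported versus actual concentrations, compliance, and illness).

```python
# Toy expected-cost comparison of the three alternatives; all probabilities and
# the monitoring-system cost are assumptions, not values from the study.
p_contaminated = 0.3        # hypothetical chance the aquifer exceeds the nitrate limit
p_sick_if_used = 0.2        # hypothetical chance of methemoglobinemia given contamination
babies, days = 1000, 365    # hypothetical served population and time horizon

cost_illness_per_case = 150.0             # $/person, quoted in the abstract
cost_bottled_total = 0.6 * babies * days  # $/baby/day, quoted in the abstract

# (i) ignore the risk and keep using aquifer water
cost_ignore = p_contaminated * p_sick_if_used * babies * cost_illness_per_case
# (ii) switch everyone to bottled water
cost_switch = cost_bottled_total
# (iii) monitor, then switch to bottled water only if contamination is reported
cost_system = 120000.0                    # hypothetical installation + operation
cost_monitor = cost_system + p_contaminated * cost_bottled_total

best_uninformed = min(cost_ignore, cost_switch)
print(f"ignore: {cost_ignore:,.0f}  switch: {cost_switch:,.0f}  monitor: {cost_monitor:,.0f}")
print("monitoring has value" if cost_monitor < best_uninformed else "no value to the information")
```

With these placeholder numbers the cheapest action is to ignore the risk, which mirrors the abstract's qualitative finding at current costs; raising the illness or bottled-water costs shifts the balance toward monitoring.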
NASA Astrophysics Data System (ADS)
Khader, A.; Rosenberg, D.; McKee, M.
2012-12-01
Nitrate pollution poses a health risk for infants whose freshwater drinking source is groundwater. This risk creates a need to design an effective groundwater monitoring network, acquire information on groundwater conditions, and use acquired information to inform management. These actions require time, money, and effort. This paper presents a method to estimate the value of information (VOI) provided by a groundwater quality monitoring network located in an aquifer whose water poses a spatially heterogeneous and uncertain health risk. A decision tree model describes the structure of the decision alternatives facing the decision maker and the expected outcomes from these alternatives. The alternatives include: (i) ignore the health risk of nitrate contaminated water, (ii) switch to alternative water sources such as bottled water, or (iii) implement a previously designed groundwater quality monitoring network that takes into account uncertainties in aquifer properties, pollution transport processes, and climate (Khader and McKee, 2012). The VOI is estimated as the difference between the expected costs of implementing the monitoring network and the lowest-cost uninformed alternative. We illustrate the method for the Eocene Aquifer, West Bank, Palestine where methemoglobinemia is the main health problem associated with the principal pollutant nitrate. The expected cost of each alternative is estimated as the weighted sum of the costs and probabilities (likelihoods) associated with the uncertain outcomes resulting from the alternative. Uncertain outcomes include actual nitrate concentrations in the aquifer, concentrations reported by the monitoring system, whether people abide by manager recommendations to use/not-use aquifer water, and whether people get sick from drinking contaminated water. Outcome costs include healthcare for methemoglobinemia, purchase of bottled water, and installation and maintenance of the groundwater monitoring system. At current methemoglobinemia and bottled water costs of 150 $/person and 0.6 $/baby/day, the decision tree results show that the expected cost of establishing the proposed groundwater quality monitoring network exceeds the expected costs of the uninformed alternatives and there is no value to the information the monitoring system provides. However, the monitoring system will be preferred to ignoring the health risk or using alternative sources if the methemoglobinemia cost rises to 300 $/person or the bottled water cost increases to 2.3 $/baby/day. Similarly, the monitoring system has value if the system can more accurately report actual aquifer concentrations and the public more fully abides by managers' recommendations to use/not use the aquifer. The system also has value if it will serve a larger population or if its installation costs can be reduced, for example using a smaller number of monitoring wells. The VOI analysis shows how monitoring system design, accuracy, installation and operating costs, public awareness of health risks, costs of alternatives, and demographics together affect the value of implementing a system to monitor groundwater quality.
Makeyev, Oleksandr; Besio, Walter G
2016-08-01
Noninvasive concentric ring electrodes are a promising alternative to conventional disc electrodes. The superiority of tripolar concentric ring electrodes over disc electrodes, in particular in the accuracy of Laplacian estimation, has been demonstrated in a range of applications. In our recent work we have shown that the accuracy of Laplacian estimation can be improved with multipolar concentric ring electrodes using a general approach to estimation of the Laplacian for an (n + 1)-polar electrode with n rings using the (4n + 1)-point method for n ≥ 2. This paper takes the next step toward further improving the Laplacian estimate by proposing novel variable inter-ring distances concentric ring electrodes. Derived using a modified (4n + 1)-point method, linearly increasing and decreasing inter-ring distances tripolar (n = 2) and quadripolar (n = 3) electrode configurations are compared to their constant inter-ring distances counterparts using finite element method modeling. The obtained results suggest that increasing inter-ring distances electrode configurations may decrease the estimation error, resulting in more accurate Laplacian estimates compared to the respective constant inter-ring distances configurations. For the currently used tripolar electrode configuration the estimation error may be decreased more than two-fold, while for the quadripolar configuration a more than six-fold decrease is expected.
A Multicriteria Decision Making Approach for Estimating the Number of Clusters in a Data Set
Peng, Yi; Zhang, Yong; Kou, Gang; Shi, Yong
2012-01-01
Determining the number of clusters in a data set is an essential yet difficult step in cluster analysis. Since this task involves more than one criterion, it can be modeled as a multiple criteria decision making (MCDM) problem. This paper proposes an MCDM-based approach to estimate the number of clusters for a given data set. In this approach, MCDM methods consider different numbers of clusters as alternatives and the outputs of any clustering algorithm on validity measures as criteria. The proposed method is examined by an experimental study using three MCDM methods, the well-known k-means clustering algorithm, ten relative measures, and fifteen public-domain UCI machine learning data sets. The results show that MCDM methods work fairly well in estimating the number of clusters in the data and outperform the ten relative measures considered in the study. PMID:22870181
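A rough sketch of the underlying idea follows: candidate numbers of clusters play the role of the alternatives and several cluster-validity measures play the role of the criteria. The MCDM aggregation step the authors use is not reproduced; only the alternatives-by-criteria matrix is assembled, on synthetic data.

```python
# Build an alternatives (k) x criteria (validity measures) matrix for k-means.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score, calinski_harabasz_score, davies_bouldin_score

X, _ = make_blobs(n_samples=300, centers=4, random_state=0)   # synthetic data
for k in range(2, 8):                                         # alternatives: k = 2..7
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    sil = silhouette_score(X, labels)          # higher is better
    ch = calinski_harabasz_score(X, labels)    # higher is better
    db = davies_bouldin_score(X, labels)       # lower is better
    print(f"k={k}: silhouette={sil:.2f}  Calinski-Harabasz={ch:.0f}  Davies-Bouldin={db:.2f}")
```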
Rochelle-Newall, E; Hulot, F D; Janeau, J L; Merroune, A
2014-01-01
Chromophoric dissolved organic matter (CDOM) fluorescence or absorption is often proposed as a rapid alternative to chemical methods for estimating bulk dissolved organic carbon (DOC) concentration in natural waters. However, the robustness of this method across a wide range of systems remains to be shown. We measured CDOM fluorescence and DOC concentration in four tropical freshwater and coastal environments (estuary and coastal waters, tropical shallow lakes, water from the freshwater lens of two small islands, and soil leachates). We found that although this method can provide an estimate of DOC concentration at sites with low variability in DOC and CDOM sources, it should not be used in systems where the variability of DOC and CDOM sources is high, as it will lead to errors in the estimation of the bulk DOC concentration.
Urschler, Martin; Grassegger, Sabine; Štern, Darko
2015-01-01
Age estimation of individuals is important in human biology and has various medical and forensic applications. Recent interest in MR-based methods aims to investigate alternatives to established methods involving ionising radiation. Automatic, software-based methods additionally promise improved estimation objectivity. The aim was to investigate how informative automatically selected image features are for discriminating age, by exploring a recently proposed software-based age estimation method for MR images of the left hand and wrist. One hundred and two MR datasets of left hand images are used to evaluate age estimation performance, consisting of bone and epiphyseal gap volume localisation, computation of one age regression model per bone mapping image features to age, and fusion of individual bone age predictions to a final age estimate. Quantitative results of the software-based method show an age estimation performance with a mean absolute difference of 0.85 years (SD = 0.58 years) to chronological age, as determined by a cross-validation experiment. Qualitatively, it is demonstrated how feature selection works and which image features of skeletal maturation are automatically chosen to model the non-linear regression function. Feasibility of automatic age estimation based on MRI data is shown, and the selected image features are found to be informative for describing anatomical changes during physical maturation in male adolescents.
Kulesz, Paulina A.; Tian, Siva; Juranek, Jenifer; Fletcher, Jack M.; Francis, David J.
2015-01-01
Objective: Weak structure-function relations for brain and behavior may stem from problems in estimating these relations in small clinical samples with frequently occurring outliers. In the current project, we focused on the utility of using alternative statistics to estimate these relations. Method: Fifty-four children with spina bifida meningomyelocele performed attention tasks and received MRI of the brain. Using a bootstrap sampling process, the Pearson product moment correlation was compared with four robust correlations: the percentage bend correlation, the Winsorized correlation, the skipped correlation using the Donoho-Gasko median, and the skipped correlation using the minimum volume ellipsoid estimator. Results: All methods yielded similar estimates of the relations between measures of brain volume and attention performance. The similarity of estimates across correlation methods suggested that the weak structure-function relations previously found in many studies are not readily attributable to the presence of outlying observations and other factors that violate the assumptions behind the Pearson correlation. Conclusions: Given the difficulty of assembling large samples for brain-behavior studies, estimating correlations using multiple, robust methods may enhance the statistical conclusion validity of studies yielding small, but often clinically significant, correlations. PMID:25495830
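One of the robust alternatives named above, the Winsorized correlation, is easy to sketch: both variables are Winsorized (extreme values pulled in to trimmed percentiles) before a Pearson correlation is computed. The data below are simulated, not the study's brain-volume and attention measures.

```python
# Winsorized correlation versus Pearson correlation on simulated data with one outlier.
import numpy as np
from scipy.stats import pearsonr
from scipy.stats.mstats import winsorize

rng = np.random.default_rng(1)
brain_volume = rng.normal(100, 10, size=54)                  # hypothetical predictor
attention = 0.3 * brain_volume + rng.normal(0, 10, size=54)  # hypothetical outcome
attention[0] = 300                                           # a single outlying observation

r_pearson, _ = pearsonr(brain_volume, attention)
x_w = np.asarray(winsorize(brain_volume, limits=(0.1, 0.1)))  # 10% Winsorizing in each tail
y_w = np.asarray(winsorize(attention, limits=(0.1, 0.1)))
r_winsor, _ = pearsonr(x_w, y_w)
print(f"Pearson r = {r_pearson:.2f}, Winsorized r = {r_winsor:.2f}")
```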
2008-06-17
[Garbled excerpt from a report on electron paramagnetic resonance (EPR) dosimetry; the recoverable content notes that materials such as alanine, various sugars, quartz in rocks, and sulfates serve as EPR dosimeters, and that radiation-induced EPR signals have been investigated for the medical response to radiological accidents as a method for estimating radiation dose without the use of physical dosimeters.]
Nonparametric methods for drought severity estimation at ungauged sites
NASA Astrophysics Data System (ADS)
Sadri, S.; Burn, D. H.
2012-12-01
The objective in frequency analysis is, given extreme events such as drought severity or duration, to estimate the relationship between that event and the associated return periods at a catchment. Neural networks and other artificial intelligence approaches in function estimation and regression analysis are relatively new techniques in engineering, providing an attractive alternative to traditional statistical models. There are, however, few applications of neural networks and support vector machines in the area of severity quantile estimation for drought frequency analysis. In this paper, we compare three methods for this task: multiple linear regression, radial basis function neural networks, and least squares support vector regression (LS-SVR). The area selected for this study includes 32 catchments in the Canadian Prairies. From each catchment drought severities are extracted and fitted to a Pearson type III distribution, which act as observed values. For each method-duration pair, we use a jackknife algorithm to produce estimated values at each site. The results from these three approaches are compared and analyzed, and it is found that LS-SVR provides the best quantile estimates and extrapolating capacity.
Autonomous Sun-Direction Estimation Using Partially Underdetermined Coarse Sun Sensor Configurations
NASA Astrophysics Data System (ADS)
O'Keefe, Stephen A.
In recent years there has been a significant increase in interest in smaller satellites as lower cost alternatives to traditional satellites, particularly with the rise in popularity of the CubeSat. Due to stringent mass, size, and often budget constraints, these small satellites rely on making the most of inexpensive hardware components and sensors, such as coarse sun sensors (CSS) and magnetometers. More expensive high-accuracy sun sensors often combine multiple measurements, and use specialized electronics, to deterministically solve for the direction of the Sun. Alternatively, cosine-type CSS output a voltage relative to the input light and are attractive due to their very low cost, simplicity to manufacture, small size, and minimal power consumption. This research investigates using coarse sun sensors for performing robust attitude estimation in order to point a spacecraft at the Sun after deployment from a launch vehicle, or following a system fault. As an alternative to using a large number of sensors, this thesis explores sun-direction estimation techniques with low computational costs that function well with underdetermined sets of CSS. Single-point estimators are coupled with simultaneous nonlinear control to achieve sun-pointing within a small percentage of a single orbit despite the partially underdetermined nature of the sensor suite. Leveraging an extensive analysis of the sensor models involved, sequential filtering techniques are shown to be capable of estimating the sun-direction to within a few degrees, with no a priori attitude information and using only CSS, despite the significant noise and biases present in the system. Detailed numerical simulations are used to compare and contrast the performance of the five different estimation techniques, with and without rate gyro measurements, their sensitivity to rate gyro accuracy, and their computation time. One of the key concerns with reducing the number of CSS is sensor degradation and failure. In this thesis, a Modified Rodrigues Parameter based CSS calibration filter suitable for autonomous on-board operation is developed. The sensitivity of this method's accuracy to the available Earth albedo data is evaluated and compared to the required computational effort. The calibration filter is expanded to perform sensor fault detection, and promising results are shown for reduced resolution albedo models. All of the methods discussed provide alternative attitude, determination, and control system algorithms for small satellite missions looking to use inexpensive, small sensors due to size, power, or budget limitations.
NASA Astrophysics Data System (ADS)
McJannet, D. L.; Cook, F. J.; McGloin, R. P.; McGowan, H. A.; Burn, S.
2011-05-01
The use of scintillometers to determine sensible and latent heat flux is becoming increasingly common because of their ability to quantify convective fluxes over distances of hundreds of meters to several kilometers. The majority of investigations using scintillometry have focused on processes above land surfaces, but here we propose a new methodology for obtaining sensible and latent heat fluxes from a scintillometer deployed over open water. This methodology has been tested by comparison with eddy covariance measurements and through comparison with alternative scintillometer calculation approaches that are commonly used in the literature. The methodology is based on linearization of the Bowen ratio, which is a common assumption in models such as Penman's model and its derivatives. Comparison of latent heat flux estimates from the eddy covariance system and the scintillometer showed excellent agreement across a range of weather conditions and flux rates, giving a high level of confidence in scintillometry-derived latent heat fluxes. The proposed approach produced better estimates than other scintillometry calculation methods because of the reliance of alternative methods on measurements of water temperature or water body heat storage, which are both notoriously hard to quantify. The proposed methodology requires less instrumentation than alternative scintillometer calculation approaches, and the spatial scales of required measurements are arguably more compatible. In addition to scintillometer measurements of the structure parameter of the refractive index of air, the only measurements required are atmospheric pressure, air temperature, humidity, and wind speed at one height over the water body.
Network Model-Assisted Inference from Respondent-Driven Sampling Data
Gile, Krista J.; Handcock, Mark S.
2015-01-01
Respondent-Driven Sampling is a widely-used method for sampling hard-to-reach human populations by link-tracing over their social networks. Inference from such data requires specialized techniques because the sampling process is both partially beyond the control of the researcher, and partially implicitly defined. Therefore, it is not generally possible to directly compute the sampling weights for traditional design-based inference, and likelihood inference requires modeling the complex sampling process. As an alternative, we introduce a model-assisted approach, resulting in a design-based estimator leveraging a working network model. We derive a new class of estimators for population means and a corresponding bootstrap standard error estimator. We demonstrate improved performance compared to existing estimators, including adjustment for an initial convenience sample. We also apply the method and an extension to the estimation of HIV prevalence in a high-risk population. PMID:26640328
Total variation approach for adaptive nonuniformity correction in focal-plane arrays.
Vera, Esteban; Meza, Pablo; Torres, Sergio
2011-01-15
In this Letter we propose an adaptive scene-based nonuniformity correction method for fixed-pattern noise removal in imaging arrays. It is based on the minimization of the total variation of the estimated irradiance, and the resulting function is optimized by an isotropic total variation approach making use of an alternating minimization strategy. The proposed method provides enhanced results when applied to a diverse set of real IR imagery, accurately estimating the nonuniformity parameters of each detector in the focal-plane array at a fast convergence rate, while also forming fewer ghosting artifacts.
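The total-variation idea the correction is built on can be illustrated with a generic TV-regularized denoising step; the sketch below is only an illustration of TV minimization on an image with simulated column-wise fixed-pattern noise, not the authors' alternating-minimization nonuniformity-correction algorithm.

```python
# TV-regularized denoising of an image corrupted with simulated fixed-pattern noise.
import numpy as np
from skimage import data, img_as_float
from skimage.restoration import denoise_tv_chambolle

frame = img_as_float(data.camera())
rng = np.random.default_rng(0)
column_offsets = rng.normal(0, 0.08, size=(1, frame.shape[1]))  # per-column offset errors
corrupted = frame + column_offsets                               # simulated nonuniformity

restored = denoise_tv_chambolle(corrupted, weight=0.1)           # minimizes a TV-regularized objective
tv = lambda u: np.abs(np.diff(u, axis=0)).sum() + np.abs(np.diff(u, axis=1)).sum()
print(f"total variation before: {tv(corrupted):.0f}, after: {tv(restored):.0f}")
```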
Kernel and divergence techniques in high energy physics separations
NASA Astrophysics Data System (ADS)
Bouř, Petr; Kůs, Václav; Franc, Jiří
2017-10-01
Binary decision trees under the Bayesian decision technique are used for supervised classification of high-dimensional data. We present the great potential of adaptive kernel density estimation as the nested separation method of the supervised binary divergence decision tree. We also provide a proof of an alternative computing approach for kernel estimates utilizing the Fourier transform. Further, we apply our method to a Monte Carlo data set from the particle accelerator Tevatron at the DØ experiment in Fermilab and provide final top-antitop signal separation results. We achieved up to 82% AUC while using the restricted feature selection entering the signal separation procedure.
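A minimal sketch of a kernel-density-based separation in the spirit described above: class-conditional densities are estimated with Gaussian kernel density estimation and events are scored with a Bayes-style discriminant. The data are synthetic, not the DØ Monte Carlo set, and the divergence decision tree and Fourier-based computation are not reproduced.

```python
# KDE-based signal/background separation on synthetic two-feature events.
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
signal = rng.normal(1.0, 1.0, size=(2, 500))       # 2 features x 500 signal events
background = rng.normal(-1.0, 1.5, size=(2, 500))  # 2 features x 500 background events

kde_sig = gaussian_kde(signal)
kde_bkg = gaussian_kde(background)

test = np.hstack([rng.normal(1.0, 1.0, size=(2, 200)),
                  rng.normal(-1.0, 1.5, size=(2, 200))])
labels = np.r_[np.ones(200), np.zeros(200)]

# Bayes-style discriminant assuming equal priors
score = kde_sig(test) / (kde_sig(test) + kde_bkg(test))
print("AUC:", round(roc_auc_score(labels, score), 3))
```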
A prevalence-based association test for case-control studies.
Ryckman, Kelli K; Jiang, Lan; Li, Chun; Bartlett, Jacquelaine; Haines, Jonathan L; Williams, Scott M
2008-11-01
Genetic association is often determined in case-control studies by the differential distribution of alleles or genotypes. Recent work has demonstrated that association can also be assessed by deviations from the expected distributions of alleles or genotypes. Specifically, multiple methods motivated by the principles of Hardy-Weinberg equilibrium (HWE) have been developed. However, these methods do not take into account many of the assumptions of HWE. Therefore, we have developed a prevalence-based association test (PRAT) as an alternative method for detecting association in case-control studies. This method, also motivated by the principles of HWE, uses an estimated population allele frequency to generate expected genotype frequencies instead of using the case and control frequencies separately. Our method often has greater power, under a wide variety of genetic models, to detect association than genotypic, allelic or Cochran-Armitage trend association tests. Therefore, we propose PRAT as a powerful alternative method of testing for association.
Maximum Entropy Approach in Dynamic Contrast-Enhanced Magnetic Resonance Imaging.
Farsani, Zahra Amini; Schmid, Volker J
2017-01-01
In the estimation of physiological kinetic parameters from Dynamic Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) data, the determination of the arterial input function (AIF) plays a key role. This paper proposes a Bayesian method to estimate the physiological parameters of DCE-MRI along with the AIF in situations where no measurement of the AIF is available. In the proposed algorithm, the maximum entropy method (MEM) is combined with the maximum a posteriori (MAP) approach. To this end, MEM is used to specify a prior probability distribution of the unknown AIF. The ability of this method to estimate the AIF is validated using the Kullback-Leibler divergence. Subsequently, the kinetic parameters can be estimated with MAP. The proposed algorithm is evaluated with a data set from a breast cancer MRI study. The application shows that the AIF can reliably be determined from the DCE-MRI data using MEM, and the kinetic parameters can be estimated subsequently. The maximum entropy method is a powerful tool for reconstructing images from many types of data and is useful for generating a probability distribution based on given information. The proposed method gives an alternative way to assess the input function from the existing data, allows a good fit of the data, and therefore yields a better estimation of the kinetic parameters. In the end, this allows for a more reliable use of DCE-MRI. Schattauer GmbH.
Testing alternative ground water models using cross-validation and other methods
Foglia, L.; Mehl, S.W.; Hill, M.C.; Perona, P.; Burlando, P.
2007-01-01
Many methods can be used to test alternative ground water models. Of concern in this work are methods able to (1) rank alternative models (also called model discrimination) and (2) identify observations important to parameter estimates and predictions (equivalent to the purpose served by some types of sensitivity analysis). Some of the measures investigated are computationally efficient; others are computationally demanding. The latter are generally needed to account for model nonlinearity. The efficient model discrimination methods investigated include the information criteria: the corrected Akaike information criterion, Bayesian information criterion, and generalized cross-validation. The efficient sensitivity analysis measures used are dimensionless scaled sensitivity (DSS), composite scaled sensitivity, and parameter correlation coefficient (PCC); the other statistics are DFBETAS, Cook's D, and observation-prediction statistic. Acronyms are explained in the introduction. Cross-validation (CV) is a computationally intensive nonlinear method that is used for both model discrimination and sensitivity analysis. The methods are tested using up to five alternative parsimoniously constructed models of the ground water system of the Maggia Valley in southern Switzerland. The alternative models differ in their representation of hydraulic conductivity. A new method for graphically representing CV and sensitivity analysis results for complex models is presented and used to evaluate the utility of the efficient statistics. The results indicate that for model selection, the information criteria produce similar results at much smaller computational cost than CV. For identifying important observations, the only obviously inferior linear measure is DSS; the poor performance was expected because DSS does not include the effects of parameter correlation and PCC reveals large parameter correlations. © 2007 National Ground Water Association.
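Two of the efficient model-discrimination statistics named above are straightforward to compute once each alternative model has been calibrated. The sketch below evaluates AICc and BIC from a least-squares objective for hypothetical alternative models; the residual sums of squares and parameter counts are placeholders, not the Maggia Valley models.

```python
# AICc and BIC for Gaussian-error least-squares fits of alternative models.
import numpy as np

def aicc_bic(sse, n, k):
    """n observations, k estimated model parameters (error variance adds one)."""
    p = k + 1
    aic = n * np.log(sse / n) + 2 * p
    aicc = aic + 2 * p * (p + 1) / (n - p - 1)
    bic = n * np.log(sse / n) + p * np.log(n)
    return aicc, bic

# Hypothetical alternatives: name -> (sum of squared weighted residuals, # parameters)
models = {"homogeneous K": (120.0, 2), "zoned K": (95.0, 4), "layered K": (90.0, 7)}
n_obs = 60
for name, (sse, k) in models.items():
    aicc, bic = aicc_bic(sse, n_obs, k)
    print(f"{name:15s} AICc={aicc:7.1f}  BIC={bic:7.1f}")
```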
An Analysis of the Navy’s Fiscal Year 2017 Shipbuilding Plan
2017-02-01
Under the three illustrative alternatives considered, the Navy would build a larger fleet of about 350 ships (see Table 5). [The remainder of this excerpt is fragmentary footnote and acknowledgment text: a reference to Matthew S. Goldberg and Anduin E. Touw on statistical methods for estimating and applying learning curves, and credits to Matthew Goldberg (formerly of CBO), David Mosher, and Raymond Hall of CBO's Budget Analysis Division for guidance and the cost estimates.]
Scale Matters: A Cost-Outcome Analysis of an m-Health Intervention in Malawi
Bancroft, Emily; Rajagopal, Sharanya; O'Toole, Maggie; Levin, Ann
2016-01-01
Background: The primary objectives of this study are to determine cost per user and cost per contact with users of a mobile health (m-health) intervention. The secondary objectives are to map costs to changes in maternal, newborn, and child health (MNCH) and to estimate costs of alternate implementation and usage scenarios. Materials and Methods: A base cost model, constructed from recurrent costs and selected capital costs, was used to estimate average cost per user and per contact of an m-health intervention. This model was mapped to statistically significant changes in MNCH intermediate outcomes to determine the cost of improvements in MNCH indicators. Sensitivity analyses were conducted to estimate costs in alternate scenarios. Results: The m-health intervention cost $29.33 per user and $4.33 per successful contact. The average cost for each user experiencing a change in an MNCH indicator ranged from $67 to $355. The sensitivity analyses showed that cost per user could be reduced by 48% if the service were to operate at full capacity. Conclusions: We believe that the intervention, operating at scale, has potential to be a cost-effective method for improving maternal and child health indicators. PMID:26348994
DOE Office of Scientific and Technical Information (OSTI.GOV)
Telfeyan, Katherine Christina; Ware, Stuart Douglas; Reimus, Paul William
Diffusion cell and diffusion wafer experiments were conducted to compare methods for estimating matrix diffusion coefficients in rock core samples from Pahute Mesa at the Nevada Nuclear Security Site (NNSS). A diffusion wafer method, in which a solute diffuses out of a rock matrix that is pre-saturated with water containing the solute, is presented as a simpler alternative to the traditional through-diffusion (diffusion cell) method. Both methods yielded estimates of matrix diffusion coefficients that were within the range of values previously reported for NNSS volcanic rocks. The difference between the estimates of the two methods ranged from 14 to 30%, and there was no systematic high or low bias of one method relative to the other. From a transport modeling perspective, these differences are relatively minor when one considers that other variables (e.g., fracture apertures, fracture spacings) influence matrix diffusion to a greater degree and tend to have greater uncertainty than diffusion coefficients. For the same relative random errors in concentration measurements, the diffusion cell method yields diffusion coefficient estimates that have less uncertainty than the wafer method. However, the wafer method is easier and less costly to implement and yields estimates more quickly, thus allowing a greater number of samples to be analyzed for the same cost and time. Given the relatively good agreement between the methods, and the lack of any apparent bias between the methods, the diffusion wafer method appears to offer advantages over the diffusion cell method if better statistical representation of a given set of rock samples is desired.
NASA Astrophysics Data System (ADS)
Telfeyan, Katherine; Ware, S. Doug; Reimus, Paul W.; Birdsell, Kay H.
2018-02-01
Diffusion cell and diffusion wafer experiments were conducted to compare methods for estimating effective matrix diffusion coefficients in rock core samples from Pahute Mesa at the Nevada Nuclear Security Site (NNSS). A diffusion wafer method, in which a solute diffuses out of a rock matrix that is pre-saturated with water containing the solute, is presented as a simpler alternative to the traditional through-diffusion (diffusion cell) method. Both methods yielded estimates of effective matrix diffusion coefficients that were within the range of values previously reported for NNSS volcanic rocks. The difference between the estimates of the two methods ranged from 14 to 30%, and there was no systematic high or low bias of one method relative to the other. From a transport modeling perspective, these differences are relatively minor when one considers that other variables (e.g., fracture apertures, fracture spacings) influence matrix diffusion to a greater degree and tend to have greater uncertainty than effective matrix diffusion coefficients. For the same relative random errors in concentration measurements, the diffusion cell method yields effective matrix diffusion coefficient estimates that have less uncertainty than the wafer method. However, the wafer method is easier and less costly to implement and yields estimates more quickly, thus allowing a greater number of samples to be analyzed for the same cost and time. Given the relatively good agreement between the methods, and the lack of any apparent bias between the methods, the diffusion wafer method appears to offer advantages over the diffusion cell method if better statistical representation of a given set of rock samples is desired.
Molitor, John
2012-03-01
Bayesian methods have seen an increase in popularity in a wide variety of scientific fields, including epidemiology. One of the main reasons for their widespread application is the power of the Markov chain Monte Carlo (MCMC) techniques generally used to fit these models. As a result, researchers often implicitly associate Bayesian models with MCMC estimation procedures. However, Bayesian models do not always require Markov-chain-based methods for parameter estimation. This is important, as MCMC estimation methods, while generally quite powerful, are complex and computationally expensive and suffer from convergence problems related to the manner in which they generate correlated samples used to estimate probability distributions for parameters of interest. In this issue of the Journal, Cole et al. (Am J Epidemiol. 2012;175(5):368-375) present an interesting paper that discusses non-Markov-chain-based approaches to fitting Bayesian models. These methods, though limited, can overcome some of the problems associated with MCMC techniques and promise to provide simpler approaches to fitting Bayesian models. Applied researchers will find these estimation approaches intuitively appealing and will gain a deeper understanding of Bayesian models through their use. However, readers should be aware that other non-Markov-chain-based methods are currently in active development and have been widely published in other fields.
Parameter Estimation of Multiple Frequency-Hopping Signals with Two Sensors
Pan, Jin; Ma, Boyuan
2018-01-01
This paper essentially focuses on parameter estimation of multiple wideband emitting sources with time-varying frequencies, such as two-dimensional (2-D) direction of arrival (DOA) and signal sorting, with a low-cost circular synthetic array (CSA) consisting of only two rotating sensors. Our basic idea is to decompose the received data, which is a superimposition of phase measurements from multiple sources, into separated groups and separately estimate the DOA associated with each source. Motivated by joint parameter estimation, we propose to adopt the expectation maximization (EM) algorithm in this paper; our method involves two steps, namely, the expectation step (E-step) and the maximization step (M-step). In the E-step, the correspondence of each signal with its emitting source is found. Then, in the M-step, the maximum-likelihood (ML) estimates of the DOA parameters are obtained. These two steps are iteratively and alternatively executed to jointly determine the DOAs and sort multiple signals. Closed-form DOA estimation formulae are developed by ML estimation based on phase data, which also yields an optimal estimate. Directional ambiguity is also addressed by another ML estimation method based on received complex responses. The Cramer-Rao lower bound is derived for understanding the estimation accuracy and performance comparison. The verification of the proposed method is demonstrated with simulations. PMID:29617323
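The E-step/M-step alternation described above can be illustrated with a generic toy example; the sketch below runs EM on a two-component one-dimensional Gaussian mixture, standing in for the paper's phase-measurement/DOA model, which is not reproduced here.

```python
# Toy EM: assign observations to two sources (E-step), re-estimate source parameters (M-step).
import numpy as np

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 300)])  # data from two hidden sources

mu = np.array([-1.0, 1.0]); sigma = np.array([1.0, 1.0]); w = np.array([0.5, 0.5])
for _ in range(50):
    # E-step: responsibility of each component for each observation
    dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: maximum-likelihood update of weights, means, and spreads
    nk = resp.sum(axis=0)
    w = nk / len(x)
    mu = (resp * x[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)

print("estimated source means:", np.round(mu, 2))
```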
Assessing the Value of Volunteer Activity.
ERIC Educational Resources Information Center
Brown, Eleanor
1999-01-01
Looks at methods of converting estimates of volunteer time into dollar value of volunteered time. Suggests an alternative strategy that acknowledges the importance of taxes, the provision of volunteer-assisted services at less-than-market prices, and the value of experiences gained by the volunteer. (JOW)
Kernel PLS Estimation of Single-trial Event-related Potentials
NASA Technical Reports Server (NTRS)
Rosipal, Roman; Trejo, Leonard J.
2004-01-01
Nonlinear kernel partial least squares (KPLS) regression is a novel smoothing approach to nonparametric regression curve fitting. We have developed a KPLS approach to the estimation of single-trial event-related potentials (ERPs). For improved accuracy of estimation, we also developed a local KPLS method for situations in which there exists prior knowledge about the approximate latency of individual ERP components. To assess the utility of the KPLS approach, we compared non-local KPLS and local KPLS smoothing with other nonparametric signal processing and smoothing methods. In particular, we examined wavelet denoising, smoothing splines, and localized smoothing splines. We applied these methods to the estimation of simulated mixtures of human ERPs and ongoing electroencephalogram (EEG) activity using a dipole simulator (BESA). In this scenario we considered ongoing EEG to represent spatially and temporally correlated noise added to the ERPs. This simulation provided a reasonable but simplified model of real-world ERP measurements. For estimation of the simulated single-trial ERPs, local KPLS provided a level of accuracy that was comparable with or better than the other methods. We also applied the local KPLS method to the estimation of human ERPs recorded in an experiment on cognitive fatigue. For these data, the local KPLS method provided a clear improvement in visualization of single-trial ERPs as well as their averages. The local KPLS method may serve as a new alternative to the estimation of single-trial ERPs and improvement of ERP averages.
Narukawa, Masaki; Nohara, Katsuhito
2018-04-01
This study proposes an estimation approach to panel count data, truncated at zero, in order to apply a contingent behavior travel cost method to revealed and stated preference data collected via a web-based survey. We develop zero-truncated panel Poisson mixture models by focusing on respondents who visited a site. In addition, we introduce an inverse Gaussian distribution to unobserved individual heterogeneity as an alternative to a popular gamma distribution, making it possible to capture effectively the long tail typically observed in trip data. We apply the proposed method to estimate the impact on tourism benefits in Fukushima Prefecture as a result of the Fukushima Nuclear Power Plant No. 1 accident. Copyright © 2018 Elsevier Ltd. All rights reserved.
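The zero-truncated Poisson building block of such models can be sketched directly from its likelihood, which conditions on a visitor having made at least one trip. The sketch below fits only that piece by maximum likelihood on invented counts; the panel structure and the inverse-Gaussian heterogeneity of the full model are not reproduced.

```python
# Maximum-likelihood fit of a zero-truncated Poisson rate to hypothetical trip counts.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import gammaln

trips = np.array([1, 1, 2, 3, 1, 5, 2, 1, 4, 2])   # hypothetical positive trip counts

def neg_loglik(lam):
    # log P(Y = y | Y > 0) = y*log(lam) - lam - log(y!) - log(1 - exp(-lam))
    ll = trips * np.log(lam) - lam - gammaln(trips + 1) - np.log1p(-np.exp(-lam))
    return -ll.sum()

res = minimize_scalar(neg_loglik, bounds=(1e-6, 20.0), method="bounded")
print(f"ML truncated-Poisson rate: {res.x:.3f} (sample mean of positive counts: {trips.mean():.2f})")
```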
Verdin, Andrew; Funk, Christopher C.; Rajagopalan, Balaji; Kleiber, William
2016-01-01
Robust estimates of precipitation in space and time are important for efficient natural resource management and for mitigating natural hazards. This is particularly true in regions with developing infrastructure and regions that are frequently exposed to extreme events. Gauge observations of rainfall are sparse but capture the precipitation process with high fidelity. Due to its high resolution and complete spatial coverage, satellite-derived rainfall data are an attractive alternative in data-sparse regions and are often used to support hydrometeorological early warning systems. Satellite-derived precipitation data, however, tend to underrepresent extreme precipitation events. Thus, it is often desirable to blend spatially extensive satellite-derived rainfall estimates with high-fidelity rain gauge observations to obtain more accurate precipitation estimates. In this research, we use two different methods, namely, ordinary kriging and k-nearest neighbor local polynomials, to blend rain gauge observations with the Climate Hazards Group Infrared Precipitation satellite-derived precipitation estimates in data-sparse Central America and Colombia. The utility of these methods in producing blended precipitation estimates at pentadal (five-day) and monthly time scales is demonstrated. We find that these blending methods significantly improve the satellite-derived estimates and are competitive in their ability to capture extreme precipitation.
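A simplified stand-in for the blending step is sketched below: gauge-minus-satellite residuals at gauge locations are interpolated (here with inverse-distance weighting rather than ordinary kriging or k-nearest neighbor local polynomials) and added back to the satellite field. All coordinates and rainfall values are hypothetical.

```python
# Blend a satellite rainfall field with gauges by interpolating gauge residuals (IDW stand-in).
import numpy as np

gauge_xy = np.array([[0.2, 0.3], [0.7, 0.8], [0.5, 0.1]])   # hypothetical gauge locations
gauge_rain = np.array([42.0, 15.0, 60.0])                   # observed pentadal rainfall (mm)
sat_at_gauges = np.array([30.0, 18.0, 40.0])                # satellite estimate at the gauges
residuals = gauge_rain - sat_at_gauges

def blended(xy, sat_value, power=2.0):
    """Satellite estimate at xy plus an inverse-distance-weighted gauge residual."""
    d = np.linalg.norm(gauge_xy - xy, axis=1)
    if np.any(d < 1e-9):                       # exactly at a gauge: use its residual
        return sat_value + residuals[d.argmin()]
    wts = 1.0 / d ** power
    return sat_value + np.sum(wts * residuals) / wts.sum()

print("blended estimate at (0.4, 0.4):", round(blended(np.array([0.4, 0.4]), 33.0), 1), "mm")
```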
Real-Time Radar-Based Tracking and State Estimation of Multiple Non-Conformant Aircraft
NASA Technical Reports Server (NTRS)
Cook, Brandon; Arnett, Timothy; Macmann, Owen; Kumar, Manish
2017-01-01
In this study, a novel solution for automated tracking of multiple unknown aircraft is proposed. Many current methods use transponders to self-report state information and augment track identification. While conformant aircraft typically report transponder information to alert surrounding aircraft of their state, vehicles may exist in the airspace that are non-compliant and need to be accurately tracked using alternative methods. In this study, a multi-agent tracking solution is presented that solely utilizes primary surveillance radar data to estimate aircraft state information. Main research challenges include state estimation, track management, data association, and establishing persistent track validity. In an effort to address these challenges, techniques such as Maximum a Posteriori estimation, Kalman filtering, degree of membership data association, and Nearest Neighbor Spanning Tree clustering are implemented for this application.
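One ingredient named above, Kalman filtering of noisy radar position fixes, is sketched below for a single constant-velocity target; the data association, track management, and MAP steps of the full multi-target solution are not shown, and all noise levels are assumptions.

```python
# Constant-velocity Kalman filter tracking one target from noisy 2-D radar positions.
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]])  # state transition
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]])                                # measure position only
Q = 0.01 * np.eye(4)                                                      # process noise (assumed)
R = 25.0 * np.eye(2)                                                      # radar noise (assumed)

x = np.zeros(4)               # state: [px, py, vx, vy]
P = 100.0 * np.eye(4)
rng = np.random.default_rng(0)
for t in range(1, 21):
    truth = np.array([5.0 * t, 3.0 * t])               # target moving at (5, 3) units/step
    z = truth + rng.normal(0, 5.0, size=2)             # noisy radar fix
    x, P = F @ x, F @ P @ F.T + Q                      # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                     # Kalman gain
    x = x + K @ (z - H @ x)                            # update with the measurement
    P = (np.eye(4) - K @ H) @ P
print("estimated [px, py, vx, vy]:", np.round(x, 1))
```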
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holden, Jacob; Wood, Eric W; Zhu, Lei
A data-driven technique for estimation of energy requirements for a proposed vehicle trip has been developed. Based on over 700,000 miles of driving data, the technique has been applied to generate a model that estimates trip energy requirements. The model uses a novel binning approach to categorize driving by road type, traffic conditions, and driving profile. The trip-level energy estimations can easily be aggregated to any higher-level transportation system network desired. The model has been tested and validated on the Austin, Texas, data set used to build this model. Ground-truth energy consumption for the data set was obtained from Future Automotive Systems Technology Simulator (FASTSim) vehicle simulation results. The energy estimation model has demonstrated 12.1 percent normalized total absolute error. The energy estimation from the model can be used to inform control strategies in routing tools, such as change in departure time, alternate routing, and alternate destinations, to reduce energy consumption. The model can also be used to determine more accurate energy consumption of regional or national transportation networks if trip origin and destinations are known. Additionally, this method allows the estimation tool to be tuned to a specific driver or vehicle type.
Pros, Cons, and Alternatives to Weight Based Cost Estimating
NASA Technical Reports Server (NTRS)
Joyner, Claude R.; Lauriem, Jonathan R.; Levack, Daniel H.; Zapata, Edgar
2011-01-01
Many cost estimating tools use weight as a major parameter in projecting the cost. This is often combined with modifying factors such as complexity, technical maturity of design, environment of operation, etc. to increase the fidelity of the estimate. For a set of conceptual designs, all meeting the same requirements, increased weight can be a major driver in increased cost. However, once a design is fixed, increased weight generally decreases cost, while decreased weight generally increases cost - and the relationship is not linear. Alternative approaches to estimating cost without using weight (except perhaps for materials costs) have been attempted to try to produce a tool usable throughout the design process - from concept studies through development. This paper will address the pros and cons of using weight-based models for cost estimating, using liquid rocket engines as the example. It will then examine approaches that minimize the impact of weight-based cost estimating. The Rocket Engine Cost Model (RECM) is an attribute-based model developed internally by Pratt & Whitney Rocketdyne for NASA. RECM will be presented primarily to show a successful method to use design and programmatic parameters instead of weight to estimate both design and development costs and production costs. An operations model developed by KSC, the Launch and Landing Effects Ground Operations model (LLEGO), will also be discussed.
Raman, E Prabhu; Lakkaraju, Sirish Kaushik; Denny, Rajiah Aldrin; MacKerell, Alexander D
2017-06-05
Accurate and rapid estimation of relative binding affinities of ligand-protein complexes is a requirement of computational methods for their effective use in rational ligand design. Of the approaches commonly used, free energy perturbation (FEP) methods are considered one of the most accurate, although they require significant computational resources. Accordingly, it is desirable to have alternative methods of similar accuracy but greater computational efficiency to facilitate ligand design. In the present study relative free energies of binding are estimated for one or two non-hydrogen atom changes in compounds targeting the proteins ACK1 and p38 MAP kinase using three methods. The methods include standard FEP, single-step free energy perturbation (SSFEP) and the site-identification by ligand competitive saturation (SILCS) ligand grid free energy (LGFE) approach. Results show the SSFEP and SILCS LGFE methods to be competitive with or better than the FEP results for the studied systems, with SILCS LGFE giving the best agreement with experimental results. This is supported by additional comparisons with published FEP data on p38 MAP kinase inhibitors. While both the SSFEP and SILCS LGFE approaches require a significant upfront computational investment, they offer a 1000-fold computational savings over FEP for calculating the relative affinities of ligand modifications once those pre-computations are complete. An illustrative example of the potential application of these methods in the context of screening large numbers of transformations is presented. Thus, the SSFEP and SILCS LGFE approaches represent viable alternatives for actively driving ligand design during drug discovery and development. © 2016 Wiley Periodicals, Inc.
Characterizing Epistemic Uncertainty for Launch Vehicle Designs
NASA Technical Reports Server (NTRS)
Novack, Steven D.; Rogers, Jim; Al Hassan, Mohammad; Hark, Frank
2016-01-01
NASA Probabilistic Risk Assessment (PRA) has the task of estimating the aleatory (randomness) and epistemic (lack of knowledge) uncertainty of launch vehicle loss of mission and crew risk, and communicating the results. Launch vehicles are complex engineered systems designed with sophisticated subsystems that are built to work together to accomplish mission success. Some of these systems or subsystems are in the form of heritage equipment, while some have never been previously launched. For these cases, characterizing the epistemic uncertainty is of foremost importance, and it is anticipated that the epistemic uncertainty of a modified launch vehicle design versus a design of well understood heritage equipment would be greater. For reasons that will be discussed, standard uncertainty propagation methods using Monte Carlo simulation produce counter intuitive results, and significantly underestimate epistemic uncertainty for launch vehicle models. Furthermore, standard PRA methods, such as Uncertainty-Importance analyses used to identify components that are significant contributors to uncertainty, are rendered obsolete, since sensitivity to uncertainty changes are not reflected in propagation of uncertainty using Monte Carlo methods. This paper provides a basis of the uncertainty underestimation for complex systems and especially, due to nuances of launch vehicle logic, for launch vehicles. It then suggests several alternative methods for estimating uncertainty and provides examples of estimation results. Lastly, the paper describes how to implement an Uncertainty-Importance analysis using one alternative approach, describes the results, and suggests ways to reduce epistemic uncertainty by focusing on additional data or testing of selected components.
Characterizing Epistemic Uncertainty for Launch Vehicle Designs
NASA Technical Reports Server (NTRS)
Novack, Steven D.; Rogers, Jim; Hark, Frank; Al Hassan, Mohammad
2016-01-01
NASA Probabilistic Risk Assessment (PRA) has the task of estimating the aleatory (randomness) and epistemic (lack of knowledge) uncertainty of launch vehicle loss of mission and crew risk and communicating the results. Launch vehicles are complex engineered systems designed with sophisticated subsystems that are built to work together to accomplish mission success. Some of these systems or subsystems are in the form of heritage equipment, while some have never been previously launched. For these cases, characterizing the epistemic uncertainty is of foremost importance, and it is anticipated that the epistemic uncertainty of a modified launch vehicle design versus a design of well understood heritage equipment would be greater. For reasons that will be discussed, standard uncertainty propagation methods using Monte Carlo simulation produce counter intuitive results and significantly underestimate epistemic uncertainty for launch vehicle models. Furthermore, standard PRA methods such as Uncertainty-Importance analyses used to identify components that are significant contributors to uncertainty are rendered obsolete since sensitivity to uncertainty changes are not reflected in propagation of uncertainty using Monte Carlo methods. This paper provides a basis of the uncertainty underestimation for complex systems and especially, due to nuances of launch vehicle logic, for launch vehicles. It then suggests several alternative methods for estimating uncertainty and provides examples of estimation results. Lastly, the paper shows how to implement an Uncertainty-Importance analysis using one alternative approach, describes the results, and suggests ways to reduce epistemic uncertainty by focusing on additional data or testing of selected components.
Adjusting for Health Status in Non-Linear Models of Health Care Disparities
Cook, Benjamin L.; McGuire, Thomas G.; Meara, Ellen; Zaslavsky, Alan M.
2009-01-01
This article compared conceptual and empirical strengths of alternative methods for estimating racial disparities using non-linear models of health care access. Three methods were presented (propensity score, rank and replace, and a combined method) that adjust for health status while allowing SES variables to mediate the relationship between race and access to care. Applying these methods to a nationally representative sample of blacks and non-Hispanic whites surveyed in the 2003 and 2004 Medical Expenditure Panel Surveys (MEPS), we assessed the concordance of each of these methods with the Institute of Medicine (IOM) definition of racial disparities, and empirically compared the methods' predicted disparity estimates, the variance of the estimates, and the sensitivity of the estimates to limitations of available data. The rank and replace and combined methods (but not the propensity score method) are concordant with the IOM definition of racial disparities in that each creates a comparison group with the appropriate marginal distributions of health status and SES variables. Predicted disparities and prediction variances were similar for the rank and replace and combined methods, but the rank and replace method was sensitive to limitations on SES information. For all methods, limiting health status information significantly reduced estimates of disparities compared to a more comprehensive dataset. We conclude that the two IOM-concordant methods were similar enough that either could be considered in disparity predictions. In datasets with limited SES information, the combined method is the better choice. PMID:20352070
Eckermann, Simon; Coory, Michael; Willan, Andrew R
2011-02-01
Economic analysis and assessment of net clinical benefit often requires estimation of absolute risk difference (ARD) for binary outcomes (e.g. survival, response, disease progression) given baseline epidemiological risk in a jurisdiction of interest and trial evidence of treatment effects. Typically, the assumption is made that relative treatment effects are constant across baseline risk, in which case relative risk (RR) or odds ratios (OR) could be applied to estimate ARD. The objective of this article is to establish whether such use of RR or OR allows consistent estimates of ARD. ARD is calculated from alternative framing of effects (e.g. mortality vs survival) applying standard methods for translating evidence with RR and OR. For RR, the RR is applied to baseline risk in the jurisdiction to estimate treatment risk; for OR, the baseline risk is converted to odds, the OR applied and the resulting treatment odds converted back to risk. ARD is shown to be consistently estimated with OR but changes with framing of effects using RR wherever there is a treatment effect and epidemiological risk differs from trial risk. Additionally, in indirect comparisons, ARD is shown to be consistently estimated with OR, while calculation with RR allows alternative framings of effects to produce inconsistency in the direction, let alone the extent, of ARD. OR ensures consistent calculation of ARD in translating evidence from trial settings and across trials in direct and indirect comparisons, avoiding inconsistencies from RR with alternative outcome framing and associated biases. These findings are critical for consistently translating evidence to inform economic analysis and assessment of net clinical benefit, as translation of evidence is proposed precisely where the advantages of OR over RR arise.
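A small numerical sketch of the point (trial risks and jurisdiction baseline risk are hypothetical) shows that translating the treatment effect with an OR yields the same ARD whether the outcome is framed as death or survival, whereas translating with an RR yields framing-dependent answers.

```python
def odds(p):
    return p / (1.0 - p)

def risk_from_odds(o):
    return o / (1.0 + o)

# Hypothetical inputs: trial risks of death and the jurisdiction's baseline risk of death
trial_control, trial_treat, baseline = 0.40, 0.30, 0.20

# --- Relative risk ---
ard_rr_death = baseline - baseline * (trial_treat / trial_control)        # death framing
rr_surv = (1 - trial_treat) / (1 - trial_control)                          # survival framing
ard_rr_surv = (1 - baseline) * rr_surv - (1 - baseline)                    # gain in survival = fewer deaths

# --- Odds ratio ---
or_death = odds(trial_treat) / odds(trial_control)
ard_or_death = baseline - risk_from_odds(odds(baseline) * or_death)
or_surv = odds(1 - trial_treat) / odds(1 - trial_control)
ard_or_surv = risk_from_odds(odds(1 - baseline) * or_surv) - (1 - baseline)

print(f"ARD via RR: {ard_rr_death:.3f} (death framing) vs {ard_rr_surv:.3f} (survival framing)")
print(f"ARD via OR: {ard_or_death:.3f} (death framing) vs {ard_or_surv:.3f} (survival framing)")
# RR gives 0.050 vs 0.133 depending on framing; OR gives 0.062 either way.
```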
Estimating dermal transfer from PCB-contaminated porous surfaces.
Slayton, T M; Valberg, P A; Wait, A D
1998-06-01
Health risks posed by dermal contact with PCB-contaminated porous surfaces have not been directly demonstrated and are difficult to estimate indirectly. Surface contamination by organic compounds is commonly assessed by collecting wipe samples with hexane as the solvent. However, for porous surfaces, hexane wipe characterization is of limited direct use when estimating potential human exposure. Particularly for porous surfaces, the relationship between the amount of organic material collected by hexane and the amount actually picked up by, for example, a person's hand touch is unknown. To better mimic PCB pickup by casual hand contact with contaminated concrete surfaces, we used alternate solvents and wipe application methods that more closely mimic casual dermal contact. Our sampling results were compared to PCB pickup using hexane-wetted wipes and the standard rubbing protocol. Dry and oil-wetted samples, applied without rubbing, picked up less than 1% of the PCBs picked up by the standard hexane procedure; with rubbing, they picked up about 2%. Without rubbing, saline-wetted wipes picked up 2.5%; with rubbing, they picked up about 12%. While the nature of dermal contact with a contaminated surface cannot be perfectly reproduced with a wipe sample, our results with alternate wiping solvents and rubbing methods more closely mimic hand contact than the standard hexane wipe protocol. The relative pickup estimates presented in this paper can be used in conjunction with site-specific PCB hexane wipe results to estimate dermal pickup rates at sites with PCB-contaminated concrete.
Statistical primer: propensity score matching and its alternatives.
Benedetto, Umberto; Head, Stuart J; Angelini, Gianni D; Blackstone, Eugene H
2018-06-01
Propensity score (PS) methods offer certain advantages over more traditional regression methods to control for confounding by indication in observational studies. Whereas multivariable regression models adjust for confounders by modelling the relationship between covariates and outcome, PS methods estimate the treatment effect by modelling the relationship between confounders and treatment assignment. Therefore, methods based on the PS are not limited by the number of events, and their use may be warranted when the number of confounders is large, or the number of outcomes is small. The PS is the probability for a subject to receive a treatment conditional on a set of baseline characteristics (confounders). The PS is commonly estimated using logistic regression, and it is used to match patients with a similar distribution of confounders so that the difference in outcomes gives an unbiased estimate of the treatment effect. This review summarizes basic concepts of PS matching and provides guidance in implementing matching and other methods based on the PS, such as stratification, weighting and covariate adjustment.
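A minimal sketch of the workflow summarized above, on simulated data (covariates, effect sizes, and the matching scheme are illustrative assumptions, not from the review): estimate the PS by logistic regression, match each treated subject to the control with the nearest PS, and compare outcomes within the matched sample.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
age = rng.normal(60, 10, n)
severity = rng.normal(0, 1, n)
X = np.column_stack([age, severity])

# Confounding by indication: older and sicker patients are more likely to receive treatment
p_treat = 1 / (1 + np.exp(-(0.05 * (age - 60) + 0.8 * severity)))
treated = rng.random(n) < p_treat
# The true treatment effect on the outcome is -2.0
outcome = 0.1 * age + 1.5 * severity - 2.0 * treated + rng.normal(0, 1, n)

print("Naive difference in means:", round(outcome[treated].mean() - outcome[~treated].mean(), 2))

# 1) Estimate the propensity score with logistic regression
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# 2) Match each treated subject to the control with the nearest PS (with replacement, for brevity)
controls = np.where(~treated)[0]
nearest = np.abs(ps[controls][None, :] - ps[treated][:, None]).argmin(axis=1)
matched_controls = controls[nearest]

# 3) The average within-pair difference in outcomes estimates the treatment effect
print("Matched estimate:", round((outcome[treated] - outcome[matched_controls]).mean(), 2))
```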
A Method for Estimating Zero-Flow Pressure and Intracranial Pressure
Marzban, Caren; Illian, Paul Raymond; Morison, David; Moore, Anne; Kliot, Michel; Czosnyka, Marek; Mourad, Pierre
2012-01-01
Background It has been hypothesized that critical closing pressure of cerebral circulation, or zero-flow pressure (ZFP), can estimate intracranial pressure (ICP). One ZFP estimation method employs extrapolation of arterial blood pressure versus blood-flow velocity. The aim of this study is to improve ICP predictions. Methods Two revisions are considered: 1) The linear model employed for extrapolation is extended to a nonlinear equation, and 2) the parameters of the model are estimated by an alternative criterion (not least-squares). The method is applied to data on transcranial Doppler measurements of blood-flow velocity, arterial blood pressure, and ICP, from 104 patients suffering from closed traumatic brain injury, sampled across the United States and England. Results The revisions lead to qualitative (e.g., precluding negative ICP) and quantitative improvements in ICP prediction. In going from the original to the revised method, the ±2 standard deviation of error is reduced from 33 to 24 mm Hg; the root-mean-squared error (RMSE) is reduced from 11 to 8.2 mm Hg. The distribution of RMSE is tighter as well; for the revised method the 25th and 75th percentiles are 4.1 and 13.7 mm Hg, respectively, as compared to 5.1 and 18.8 mm Hg for the original method. Conclusions Proposed alterations to a procedure for estimating ZFP lead to more accurate and more precise estimates of ICP, thereby offering improved means of estimating it noninvasively. The quality of the estimates is inadequate for many applications, but further work is proposed which may lead to clinically useful results. PMID:22824923
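The original linear extrapolation step can be sketched on synthetic waveforms (all values hypothetical, and the paper's revised nonlinear model is not reproduced here): regress arterial blood pressure on blood-flow velocity over a cardiac cycle and take the pressure intercept at zero velocity as the ZFP, and hence the ICP estimate.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)                       # one cardiac cycle, arbitrary time units

icp_true = 18.0                                  # mm Hg, hypothetical "true" intercept
pulsatile = 0.5 * (1 + np.sin(2 * np.pi * t))    # shared pulsatile waveform shape
abp = icp_true + 70.0 * pulsatile + rng.normal(0, 1.0, t.size)  # arterial pressure, mm Hg
fv = 90.0 * pulsatile + rng.normal(0, 2.0, t.size)              # transcranial Doppler velocity, cm/s

# Linear regression of ABP on flow velocity; the intercept is the zero-flow pressure (ZFP)
slope, intercept = np.polyfit(fv, abp, 1)
print(f"Estimated ZFP (~ICP): {intercept:.1f} mm Hg")
```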
Retooling Predictive Relations for non-volatile PM by Comparison to Measurements
NASA Astrophysics Data System (ADS)
Vander Wal, R. L.; Abrahamson, J. P.
2015-12-01
Non-volatile particulate matter (nvPM) emissions from jet aircraft at cruise altitude are of particular interest for climate and atmospheric processes but are difficult to measure and are normally approximated. To provide such inventory estimates the present approach is to use measured, ground-based values with scaling to cruise (engine operating) conditions. Several points are raised by this approach. First is what ground-based values to use. Empirical and semi-empirical approaches, such as the revised first order approximation (FOA3) and formation-oxidation (FOX) methods, each with embedded assumptions, are available to calculate a ground-based black carbon concentration, CBC. Second is the scaling relation that can depend upon the ratios of fuel-air equivalence, pressure, and combustor flame temperature. We are using measured ground-based values to evaluate the accuracy of present methods towards developing alternative methods for CBC by smoke number or via a semi-empirical kinetic method for the specific engine, CFM56-2C, representative of a rich-dome style combustor, and as one of the most prevalent engine families in commercial use. Applying scaling relations to measured ground-based values and comparing to measurements at cruise evaluates the accuracy of the current scaling formalism. In partnership with GE Aviation, performing engine cycle deck calculations enables critical comparison between estimated or predicted thermodynamic parameters and true (engine) operational values for the CFM56-2C engine. Such specific comparisons allow tracing differences between predictive estimates for, and measurements of, nvPM to their origin - as either divergence of input parameters or in the functional form of the predictive relations. Such insights will lead to development of new predictive tools for jet aircraft nvPM emissions. Such validated relations can then be extended to alternative fuels with confidence in operational thermodynamic values and functional form. Comparisons will then be made between these new predictive relationships and measurements of nvPM from alternative fuels using ground and cruise data - as collected during NASA-led AAFEX and ACCESS field campaigns, respectively.
NASA Technical Reports Server (NTRS)
Banerjee, S. K.
1984-01-01
It is impossible to carry out conventional paleointensity experiments, which require repeated heating and cooling to 770°C, on lunar samples without causing chemical, physical or microstructural changes. Non-thermal methods of paleointensity determination have been sought: the two anhysteretic remanent magnetization (ARM) methods, and the saturation isothermal remanent magnetization (IRMS) method. Experimental errors inherent in these alternative approaches have been investigated to estimate the accuracy limits on the calculated paleointensities. Results are indicated in this report.
Mark-recapture using tetracycline and genetics reveal record-high bear density
Peacock, E.; Titus, K.; Garshelis, D.L.; Peacock, M.M.; Kuc, M.
2011-01-01
We used tetracycline biomarking, augmented with genetic methods, to estimate the size of an American black bear (Ursus americanus) population on an island in Southeast Alaska. We marked 132 and 189 bears that consumed remote, tetracycline-laced baits in 2 different years, respectively, and observed 39 marks in 692 bone samples subsequently collected from hunters. We genetically analyzed hair samples from bait sites to determine the sex of marked bears, facilitating derivation of sex-specific population estimates. We obtained harvest samples from beyond the study area to correct for emigration. We estimated a density of 155 independent bears/100 km², which is equivalent to the highest recorded for this species. This high density appears to be maintained by abundant, accessible natural food. Our population estimate (approx. 1,000 bears) could be used as a baseline and to set hunting quotas. The refined biomarking method for abundance estimation is a useful alternative where physical captures or DNA-based estimates are precluded by cost or logistics. Copyright © 2011 The Wildlife Society.
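The underlying abundance arithmetic can be sketched with a Chapman-corrected Lincoln-Petersen estimator; the counts below are hypothetical, and this ignores the sex-specific estimation, emigration correction, and two-year marking the authors actually used.

```python
def chapman_estimate(n_marked, n_sampled, n_recaptured):
    """Chapman's bias-corrected Lincoln-Petersen estimate of population size and its SE."""
    n_hat = (n_marked + 1) * (n_sampled + 1) / (n_recaptured + 1) - 1
    var = ((n_marked + 1) * (n_sampled + 1) * (n_marked - n_recaptured)
           * (n_sampled - n_recaptured)) / ((n_recaptured + 1) ** 2 * (n_recaptured + 2))
    return n_hat, var ** 0.5

# Hypothetical values: 150 bears marked via tetracycline baits, 300 harvest samples, 45 with marks
n_hat, se = chapman_estimate(150, 300, 45)
print(f"Estimated population: {n_hat:.0f} (SE {se:.0f})")
```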
Noninvasive estimation of assist pressure for direct mechanical ventricular actuation
NASA Astrophysics Data System (ADS)
An, Dawei; Yang, Ming; Gu, Xiaotong; Meng, Fan; Yang, Tianyue; Lin, Shujing
2018-02-01
Direct mechanical ventricular actuation is effective to reestablish the ventricular function with non-blood contact. Due to the energy loss within the driveline of the direct cardiac compression device, it is necessary to acquire the accurate value of assist pressure acting on the heart surface. To avoid myocardial trauma induced by invasive sensors, the noninvasive estimation method is developed and the experimental device is designed to measure the sample data for fitting the estimation models. By examining the goodness of fit numerically and graphically, the polynomial model presents the best behavior among the four alternative models. Meanwhile, to verify the effect of the noninvasive estimation, the simplified lumped parameter model is utilized to calculate the pre-support and the post-support left ventricular pressure. Furthermore, by adjusting the driving pressure beyond the range of the sample data, the assist pressure is estimated with the similar waveform and the post-support left ventricular pressure approaches the value of the adult healthy heart, indicating the good generalization ability of the noninvasive estimation method.
Sean P. Healey; Paul L. Patterson; Sassan Saatchi; Michael A. Lefsky; Andrew J. Lister; Elizabeth A. Freeman; Gretchen G. Moisen
2012-01-01
Light Detection and Ranging (LiDAR) returns from the spaceborne Geoscience Laser Altimeter (GLAS) sensor may offer an alternative to solely field-based forest biomass sampling. Such an approach would rely upon model-based inference, which can account for the uncertainty associated with using modeled, instead of field-collected, measurements. Model-based methods have...
An analysis of possible applications of fuzzy set theory to the actuarial credibility theory
NASA Technical Reports Server (NTRS)
Ostaszewski, Krzysztof; Karwowski, Waldemar
1992-01-01
In this work, we review the basic concepts of actuarial credibility theory from the point of view of introducing applications of the fuzzy set-theoretic method. We show how the concept of actuarial credibility can be modeled through the fuzzy set membership functions and how fuzzy set methods, especially fuzzy pattern recognition, can provide an alternative tool for estimating credibility.
ERIC Educational Resources Information Center
Aguado, Jaume; Campbell, Alistair; Ascaso, Carlos; Navarro, Purificacion; Garcia-Esteve, Lluisa; Luciano, Juan V.
2012-01-01
In this study, the authors tested alternative factor models of the 12-item General Health Questionnaire (GHQ-12) in a sample of Spanish postpartum women, using confirmatory factor analysis. The authors report the results of modeling three different methods for scoring the GHQ-12 using estimation methods recommended for categorical and binary data.…
Alternative models in developmental toxicology.
Lee, Hyung-yul; Inselman, Amy L; Kanungo, Jyotshnabala; Hansen, Deborah K
2012-02-01
In light of various pressures, toxicologists have been searching for alternative methods for safety testing of chemicals. According to a recent policy in the European Union (Registration, Evaluation, Authorisation and Restriction of Chemicals, REACH), it has been estimated that over the next twelve to fifteen years, approximately 30,000 chemicals may need to be tested for safety, and under current guidelines such testing would require the use of approximately 7.2 million laboratory animals [Hofer et al., 2004]. It has also been estimated that over 80% of all animals used for safety testing under REACH legislation would be used for examining reproductive and developmental toxicity [Hofer et al., 2004]. In addition to REACH initiatives, it has been estimated that out of 5,000 to 10,000 new drug entities that a pharmaceutical company may start with, only one is finally approved by the Food and Drug Administration at a cost of over one billion dollars [Garg et al., 2011]. A large portion of this cost is due to animal testing. Therefore, both the pharmaceutical and chemical industries are interested in using alternative models and in vitro tests for safety testing. This review will examine the current state of three alternative models - whole embryo culture (WEC), the mouse embryonic stem cell test (mEST), and zebrafish. Each of these alternatives will be reviewed, and advantages and disadvantages of each model will be discussed. These models were chosen because they are the models most commonly used and would appear to have the greatest potential for future applications in developmental toxicity screening and testing.
NASA Astrophysics Data System (ADS)
Yin, Shui-qing; Wang, Zhonglei; Zhu, Zhengyuan; Zou, Xu-kai; Wang, Wen-ting
2018-07-01
Extreme precipitation can cause flooding and may result in great economic losses and deaths. The return level is a commonly used measure of extreme precipitation events and is required for hydrological engineering designs, including those of sewerage systems, dams, reservoirs and bridges. In this paper, we propose a two-step method to estimate the return level and its uncertainty for a study region. In the first step, we use the generalized extreme value distribution, the L-moment method and the stationary bootstrap to estimate the return level and its uncertainty at sites with observations. In the second step, a spatial model incorporating the heterogeneous measurement errors and covariates is trained to estimate return levels at sites with no observations and to improve the estimates at sites with limited information. The proposed method is applied to the daily rainfall data from 273 weather stations in the Haihe river basin of North China. We compare the proposed method with two alternatives: the first one is based on the ordinary Kriging method without measurement error, and the second one smooths the estimated location and scale parameters of the generalized extreme value distribution by the universal Kriging method. Results show that the proposed method outperforms its counterparts. We also propose a novel approach to assess the two-step method by comparing it with the at-site estimation method with a series of reduced length of observations. Estimates of the 2-, 5-, 10-, 20-, 50- and 100-year return level maps and the corresponding uncertainties are provided for the Haihe river basin, and a comparison with those released by the Hydrology Bureau of Ministry of Water Resources of China is made.
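A stripped-down sketch of the first, at-site step follows (synthetic annual maxima; maximum likelihood via scipy is used in place of the paper's L-moment method, and an i.i.d. rather than stationary bootstrap is used for the uncertainty band).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical annual-maximum daily rainfall series (mm) for one station
annual_max = stats.genextreme.rvs(c=-0.1, loc=60, scale=15, size=60, random_state=rng)

# Fit a GEV distribution (note scipy's shape c equals -xi in the common hydrological form)
c, loc, scale = stats.genextreme.fit(annual_max)

for T in (2, 5, 10, 20, 50, 100):
    level = stats.genextreme.isf(1.0 / T, c, loc, scale)  # value exceeded with probability 1/T per year
    print(f"{T:3d}-year return level: {level:6.1f} mm")

# Bootstrap the 100-year level to get a rough uncertainty band
boot = [stats.genextreme.isf(0.01, *stats.genextreme.fit(rng.choice(annual_max, annual_max.size)))
        for _ in range(200)]
print("100-year level, 90% interval:", np.percentile(boot, [5, 95]))
```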
In order to predict the margin between the dose needed for adverse chemical effects and actual human exposure rates, data on hazard, exposure, and toxicokinetics are needed. In vitro methods, biomonitoring, and mathematical modeling have provided initial estimates for many extant...
Bion, Julian
2013-12-19
Barriers to the use of selective digestive decontamination include concerns about emergence of resistant organisms, over-estimation of current performance in preventing ventilator-associated pneumonia (VAP), alternative methods of preventing VAP, and misunderstanding of mechanisms of action. A definitive cluster-randomised trial should be undertaken that incorporates practitioner concerns and effect-size preferences.
Exposure studies rely on detailed characterization of air quality, either from sparsely located routine ambient monitors or from central monitoring sites that may lack spatial representativeness. Alternatively, some studies use models of various complexities to characterize local...
Community duplicate diet methodology: A new tool for estimating dietary exposure to pesticides
An observational field study was conducted to assess the feasibility of a community duplicate diet collection method; a dietary monitoring procedure that is population-based. The purpose was to establish an alternative procedure to duplicate diet sampling that would be more effi...
Lina, Ioan A; Lauer, Amanda M
2013-04-01
The notched noise method is an effective procedure for measuring frequency resolution and auditory filter shapes in both human and animal models of hearing. Briefly, auditory filter shape and bandwidth estimates are derived from masked thresholds for tones presented in noise containing widening spectral notches. As the spectral notch widens, increasingly less of the noise falls within the auditory filter and the tone becomes more detectable until the notch width exceeds the filter bandwidth. Behavioral procedures have been used for the derivation of notched noise auditory filter shapes in mice; however, the time and effort needed to train and test animals on these tasks places a constraint on the widespread application of this testing method. As an alternative procedure, we combined relatively non-invasive auditory brainstem response (ABR) measurements and the notched noise method to estimate auditory filters in normal-hearing mice at center frequencies of 8, 11.2, and 16 kHz. A complete set of simultaneous masked thresholds for a particular tone frequency were obtained in about an hour. ABR-derived filter bandwidths broadened with increasing frequency, consistent with previous studies. The ABR notched noise procedure provides a fast alternative to estimating frequency selectivity in mice that is well-suited to high-throughput or time-sensitive screening. Copyright © 2013 Elsevier B.V. All rights reserved.
Statistical Cost Estimation in Higher Education: Some Alternatives.
ERIC Educational Resources Information Center
Brinkman, Paul T.; Niwa, Shelley
Recent developments in econometrics that are relevant to the task of estimating costs in higher education are reviewed. The relative effectiveness of alternative statistical procedures for estimating costs is also tested. Statistical cost estimation involves three basic parts: a model, a data set, and an estimation procedure. Actual data are used…
Multiple Illuminant Colour Estimation via Statistical Inference on Factor Graphs.
Mutimbu, Lawrence; Robles-Kelly, Antonio
2016-08-31
This paper presents a method to recover a spatially varying illuminant colour estimate from scenes lit by multiple light sources. Starting with the image formation process, we formulate the illuminant recovery problem in a statistically data-driven setting. To do this, we use a factor graph defined across the scale space of the input image. In the graph, we utilise a set of illuminant prototypes computed using a data-driven approach. As a result, our method delivers a pixelwise illuminant colour estimate without recourse to libraries or user input. The use of a factor graph also allows for the illuminant estimates to be recovered making use of a maximum a posteriori (MAP) inference process. Moreover, we compute the probability marginals by performing a Delaunay triangulation on our factor graph. We illustrate the utility of our method for pixelwise illuminant colour recovery on widely available datasets and compare against a number of alternatives. We also show sample colour correction results on real-world images.
Syamlal, Madhava; Celik, Ismail B.; Benyahia, Sofiane
2017-07-12
The two-fluid model (TFM) has become a tool for the design and troubleshooting of industrial fluidized bed reactors. To use TFM for scale up with confidence, the uncertainty in its predictions must be quantified. Here, we study two sources of uncertainty: discretization and time-averaging. First, we show that successive grid refinement may not yield grid-independent transient quantities, including cross-section–averaged quantities. Successive grid refinement would yield grid-independent time-averaged quantities on sufficiently fine grids. A Richardson extrapolation can then be used to estimate the discretization error, and the grid convergence index gives an estimate of the uncertainty. Richardson extrapolation may not work for industrial-scale simulations that use coarse grids. We present an alternative method for coarse grids and assess its ability to estimate the discretization error. Second, we assess two methods (autocorrelation and binning) and find that the autocorrelation method is more reliable for estimating the uncertainty introduced by time-averaging TFM data.
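The discretization part of that workflow can be sketched as follows, with hypothetical values of a time-averaged quantity on three systematically refined grids: estimate the observed order of convergence, Richardson-extrapolate to zero grid spacing, and report a grid convergence index (GCI) as the uncertainty.

```python
import math

# Hypothetical time-averaged quantity on three grids, coarse -> fine, refinement ratio r = 2
f_coarse, f_medium, f_fine = 0.210, 0.196, 0.190
r = 2.0

# Observed order of convergence from the three solutions
p = math.log((f_coarse - f_medium) / (f_medium - f_fine)) / math.log(r)

# Richardson extrapolation to zero grid spacing
f_exact = f_fine + (f_fine - f_medium) / (r**p - 1.0)

# Grid convergence index on the fine grid (safety factor 1.25 for three-grid studies)
gci_fine = 1.25 * abs((f_fine - f_medium) / f_fine) / (r**p - 1.0)

print(f"observed order p = {p:.2f}")
print(f"Richardson-extrapolated value = {f_exact:.4f}")
print(f"GCI(fine) = {100 * gci_fine:.1f}%")
```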
Regalado, Carlos M; Ritter, Axel
2007-08-01
Calibration of the Granier thermal dissipation technique for measuring stem sap flow in trees requires determination of the temperature difference (ΔT) between a heated and an unheated probe when sap flow is zero (ΔTmax). Classically, ΔTmax has been estimated from the maximum predawn ΔT, assuming that sap flow is negligible at nighttime. However, because sap flow may continue during the night, the maximum predawn ΔT value may underestimate the true ΔTmax. No alternative method has yet been proposed to estimate ΔTmax when sap flow is non-zero at night. A sensitivity analysis is presented showing that errors in ΔTmax may amplify through sap flux density computations in Granier's approach, such that small amounts of undetected nighttime sap flow may lead to large diurnal sap flux density errors, hence the need for a correct estimate of ΔTmax. By rearranging Granier's original formula, an optimization method to compute ΔTmax from simultaneous measurements of diurnal ΔT and micrometeorological variables, without assuming that sap flow is negligible at night, is presented. Some illustrative examples are shown for sap flow measurements carried out on individuals of Erica arborea L., which has needle-like leaves, and Myrica faya Ait., a broadleaf species. We show that, although ΔTmax values obtained by the proposed method may be similar in some instances to the ΔTmax predicted at night, in general the values differ. The procedure presented has the potential of being applied not only to Granier's method, but to other heat-based sap flow systems that require a zero flow calibration, such as the Cermák et al. (1973) heat balance method and the T-max heat pulse system of Green et al. (2003).
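The sensitivity being described can be sketched with Granier's commonly cited calibration (the temperature differences below are hypothetical): a modest error in ΔTmax propagates nonlinearly into sap flux density through the 1.231 exponent.

```python
def granier_flux(dT, dT_max):
    """Granier (1985) sap flux density, m^3 m^-2 s^-1, from the thermal dissipation index K."""
    K = (dT_max - dT) / dT
    return 118.99e-6 * K ** 1.231

dT_midday = 6.0        # hypothetical midday temperature difference, deg C
true_dT_max = 10.0     # true zero-flow value
biased_dT_max = 9.5    # underestimate caused by undetected nighttime flow

u_true = granier_flux(dT_midday, true_dT_max)
u_biased = granier_flux(dT_midday, biased_dT_max)
print(f"relative error in flux: {100 * (u_biased - u_true) / u_true:.1f}%")
# A 5% underestimate of dT_max here translates into roughly a 15% underestimate of flux.
```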
Local Composite Quantile Regression Smoothing for Harris Recurrent Markov Processes
Li, Degui; Li, Runze
2016-01-01
In this paper, we study the local polynomial composite quantile regression (CQR) smoothing method for the nonlinear and nonparametric models under the Harris recurrent Markov chain framework. The local polynomial CQR regression method is a robust alternative to the widely-used local polynomial method, and has been well studied in stationary time series. In this paper, we relax the stationarity restriction on the model, and allow that the regressors are generated by a general Harris recurrent Markov process which includes both the stationary (positive recurrent) and nonstationary (null recurrent) cases. Under some mild conditions, we establish the asymptotic theory for the proposed local polynomial CQR estimator of the mean regression function, and show that the convergence rate for the estimator in nonstationary case is slower than that in stationary case. Furthermore, a weighted type local polynomial CQR estimator is provided to improve the estimation efficiency, and a data-driven bandwidth selection is introduced to choose the optimal bandwidth involved in the nonparametric estimators. Finally, we give some numerical studies to examine the finite sample performance of the developed methodology and theory. PMID:27667894
2014-01-01
Background Leptotrombidium pallidum and Leptotrombidium scutellare are the major vector mites for Orientia tsutsugamushi, the causative agent of scrub typhus. Before these organisms can be subjected to whole-genome sequencing, it is necessary to estimate their genome sizes to obtain basic information for establishing the strategies that should be used for genome sequencing and assembly. Method The genome sizes of L. pallidum and L. scutellare were estimated by a method based on quantitative real-time PCR. In addition, a k-mer analysis of the whole-genome sequences obtained through Illumina sequencing was conducted to verify the mutual compatibility and reliability of the results. Results The genome sizes estimated using qPCR were 191 ± 7 Mb for L. pallidum and 262 ± 13 Mb for L. scutellare. The k-mer analysis-based genome lengths were estimated to be 175 Mb for L. pallidum and 286 Mb for L. scutellare. The estimates from these two independent methods were mutually complementary and within a similar range to those of other Acariform mites. Conclusions The estimation method based on qPCR appears to be a useful alternative when the standard methods, such as flow cytometry, are impractical. The relatively small estimated genome sizes should facilitate whole-genome analysis, which could contribute to our understanding of Arachnida genome evolution and provide key information for scrub typhus prevention and mite vector competence. PMID:24947244
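The k-mer part of the analysis reduces to a simple ratio once the k-mer coverage histogram is available; the sketch below uses hypothetical counts rather than the study's sequencing output.

```python
# Genome size from k-mer statistics: total k-mers divided by the homozygous-peak depth
total_kmers = 5.0e9        # hypothetical: total k-mers counted in the Illumina reads
peak_depth = 27            # hypothetical: depth at the main peak of the k-mer histogram
genome_size_bp = total_kmers / peak_depth
print(f"Estimated genome size: {genome_size_bp / 1e6:.0f} Mb")   # ~185 Mb with these inputs
```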
Evaluation of the site effect with Heuristic Methods
NASA Astrophysics Data System (ADS)
Torres, N. N.; Ortiz-Aleman, C.
2017-12-01
The seismic site response in an area depends mainly on the local geological and topographical conditions. Estimation of variations in ground motion can lead to significant contributions to seismic hazard assessment, in order to reduce human and economic losses. Site response estimation can be posed as a parameterized inversion problem which allows separating source and path effects. The generalized inversion (Field and Jacob, 1995) represents one of the alternative methods to estimate the local seismic response, and involves solving a strongly non-linear multiparametric problem. In this work, local seismic response was estimated using global optimization methods (Genetic Algorithms and Simulated Annealing), which allowed us to increase the range of explored solutions in a nonlinear search, as compared to other conventional linear methods. Using velocity records from the VEOX Network collected from August 2007 to March 2009, source, path and site parameters corresponding to the S-wave amplitude spectra of the records are estimated. We can establish that inverted parameters resulting from this simultaneous inversion approach show excellent agreement, not only in terms of adjustment between observed and calculated spectra, but also when compared to previous work from several authors.
A simulation study on Bayesian Ridge regression models for several collinearity levels
NASA Astrophysics Data System (ADS)
Efendi, Achmad; Effrihan
2017-12-01
When analyzing data with a multiple regression model, if collinearity is present then one or several predictor variables are usually omitted from the model. Sometimes, however, for instance for medical or economic reasons, all predictors are important and should be included in the model. Ridge regression is commonly used in such cases to cope with collinearity: weights (penalties) on the predictor variables are used when estimating the parameters, and estimation can proceed by likelihood methods. A Bayesian version of ridge regression is an alternative; it has been less popular than the likelihood approach largely because of computational difficulties, but with recent improvements in computational methodology this is no longer a serious obstacle. This paper discusses a simulation study evaluating the characteristics of Bayesian Ridge regression parameter estimates. Several simulation settings based on a variety of collinearity levels and sample sizes are considered. The results show that the Bayesian method gives better performance for relatively small sample sizes, and for the other settings it performs similarly to the likelihood method.
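A toy version of such a simulation setting (coefficients, collinearity structure, and sample size are hypothetical) can be written with scikit-learn's BayesianRidge, comparing coefficient recovery against ordinary least squares when two predictors are nearly collinear.

```python
import numpy as np
from sklearn.linear_model import BayesianRidge, LinearRegression

rng = np.random.default_rng(42)
n, beta = 20, np.array([1.0, 2.0, -1.5])          # small sample, true coefficients
# Generate strongly collinear predictors
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.05, size=n)          # nearly collinear with x1
x3 = rng.normal(size=n)
X = np.column_stack([x1, x2, x3])
y = X @ beta + rng.normal(scale=1.0, size=n)

for model in (LinearRegression(), BayesianRidge()):
    model.fit(X, y)
    err = np.sqrt(np.mean((model.coef_ - beta) ** 2))
    print(type(model).__name__, np.round(model.coef_, 2), " RMSE of coefficients:", round(err, 2))
```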
Bernhardt, Paul W; Wang, Huixia Judy; Zhang, Daowen
2014-01-01
Models for survival data generally assume that covariates are fully observed. However, in medical studies it is not uncommon for biomarkers to be censored at known detection limits. A computationally-efficient multiple imputation procedure for modeling survival data with covariates subject to detection limits is proposed. This procedure is developed in the context of an accelerated failure time model with a flexible seminonparametric error distribution. The consistency and asymptotic normality of the multiple imputation estimator are established and a consistent variance estimator is provided. An iterative version of the proposed multiple imputation algorithm that approximates the EM algorithm for maximum likelihood is also suggested. Simulation studies demonstrate that the proposed multiple imputation methods work well while alternative methods lead to estimates that are either biased or more variable. The proposed methods are applied to analyze the dataset from a recently-conducted GenIMS study.
Estimating 1 min rain rate distributions from numerical weather prediction
NASA Astrophysics Data System (ADS)
Paulson, Kevin S.
2017-01-01
Internationally recognized prognostic models of rain fade on terrestrial and Earth-space EHF links rely fundamentally on distributions of 1 min rain rates. Currently, in Rec. ITU-R P.837-6, these distributions are generated using the Salonen-Poiares Baptista method where 1 min rain rate distributions are estimated from long-term average annual accumulations provided by numerical weather prediction (NWP). This paper investigates an alternative to this method based on the distribution of 6 h accumulations available from the same NWPs. Rain rate fields covering the UK, produced by the Nimrod network of radars, are integrated to estimate the accumulations provided by NWP, and these are linked to distributions of fine-scale rain rates. The proposed method makes better use of the available data. It is verified on 15 NWP regions spanning the UK, and the extension to other regions is discussed.
Daniell method for power spectral density estimation in atomic force microscopy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Labuda, Aleksander
An alternative method for power spectral density (PSD) estimation—the Daniell method—is revisited and compared to the most prevalent method used in the field of atomic force microscopy for quantifying cantilever thermal motion—the Bartlett method. Both methods are shown to underestimate the Q factor of a simple harmonic oscillator (SHO) by a predictable, and therefore correctable, amount in the absence of spurious deterministic noise sources. However, the Bartlett method is much more prone to spectral leakage which can obscure the thermal spectrum in the presence of deterministic noise. By the significant reduction in spectral leakage, the Daniell method leads to a more accurate representation of the true PSD and enables clear identification and rejection of deterministic noise peaks. This benefit is especially valuable for the development of automated PSD fitting algorithms for robust and accurate estimation of SHO parameters from a thermal spectrum.
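A minimal sketch of how the two estimates are formed (signal parameters are hypothetical; this is not the paper's cantilever data): the Bartlett estimate averages unwindowed periodograms of non-overlapping segments, while the Daniell estimate smooths a single full-length periodogram with a moving average in frequency.

```python
import numpy as np
from scipy.signal import periodogram, welch
from scipy.ndimage import uniform_filter1d

fs, n = 10_000, 2**16
rng = np.random.default_rng(0)
t = np.arange(n) / fs
# White "thermal" noise plus a strong deterministic interference tone (hypothetical)
x = rng.normal(size=n) + 5.0 * np.sin(2 * np.pi * 1234.5 * t)

# Bartlett: average periodograms of non-overlapping, unwindowed (boxcar) segments
f_b, psd_bartlett = welch(x, fs, window="boxcar", nperseg=n // 64, noverlap=0)

# Daniell: single full-length periodogram smoothed by a moving average in frequency
f_d, pxx = periodogram(x, fs)
psd_daniell = uniform_filter1d(pxx, size=65)

print("Bartlett bins:", f_b.size, " Daniell bins:", f_d.size)
print("median level  Bartlett:", np.median(psd_bartlett), " Daniell:", np.median(psd_daniell))
```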
An evaluation of a bioelectrical impedance analyser for the estimation of body fat content.
Maughan, R J
1993-01-01
Measurement of body composition is an important part of any assessment of health or fitness. Hydrostatic weighing is generally accepted as the most reliable method for the measurement of body fat content, but is inconvenient. Electrical impedance analysers have recently been proposed as an alternative to the measurement of skinfold thickness. Both these latter methods are convenient, but give values based on estimates obtained from population studies. This study compared values of body fat content obtained by hydrostatic weighing, skinfold thickness measurement and electrical impedance on 50 (28 women, 22 men) healthy volunteers. Mean(s.e.m.) values obtained by the three methods were: hydrostatic weighing, 20.5(1.2)%; skinfold thickness, 21.8(1.0)%; impedance, 20.8(0.9)%. The results indicate that the correlation between the skinfold method and hydrostatic weighing (0.931) is somewhat higher than that between the impedance method and hydrostatic weighing (0.830). This is, perhaps, not surprising given the fact that the impedance method is based on an estimate of total body water which is then used to calculate body fat content. The skinfold method gives an estimate of body density, and the assumptions involved in the conversion from body density to body fat content are the same for both methods. PMID:8457817
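The density-to-fat conversion shared by the underwater-weighing and skinfold approaches can be sketched with Siri's two-compartment equation; the body density below is hypothetical, and the study does not state which conversion equation it used.

```python
def siri_percent_fat(body_density):
    """Siri two-compartment conversion from body density (g/cm^3) to percent body fat."""
    return 495.0 / body_density - 450.0

# Hypothetical hydrostatic-weighing result
print(round(siri_percent_fat(1.055), 1), "% fat")   # ~19.2 % for a density of 1.055 g/cm^3
```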
Telfeyan, Katherine Christina; Ware, Stuart Doug; Reimus, Paul William; ...
2018-01-31
Here, diffusion cell and diffusion wafer experiments were conducted to compare methods for estimating effective matrix diffusion coefficients in rock core samples from Pahute Mesa at the Nevada Nuclear Security Site (NNSS). A diffusion wafer method, in which a solute diffuses out of a rock matrix that is pre-saturated with water containing the solute, is presented as a simpler alternative to the traditional through-diffusion (diffusion cell) method. Both methods yielded estimates of effective matrix diffusion coefficients that were within the range of values previously reported for NNSS volcanic rocks. The difference between the estimates of the two methods ranged from 14 to 30%, and there was no systematic high or low bias of one method relative to the other. From a transport modeling perspective, these differences are relatively minor when one considers that other variables (e.g., fracture apertures, fracture spacings) influence matrix diffusion to a greater degree and tend to have greater uncertainty than effective matrix diffusion coefficients. For the same relative random errors in concentration measurements, the diffusion cell method yields effective matrix diffusion coefficient estimates that have less uncertainty than the wafer method. However, the wafer method is easier and less costly to implement and yields estimates more quickly, thus allowing a greater number of samples to be analyzed for the same cost and time. Given the relatively good agreement between the methods, and the lack of any apparent bias between the methods, the diffusion wafer method appears to offer advantages over the diffusion cell method if better statistical representation of a given set of rock samples is desired.
Decomposing University Grades: A Longitudinal Study of Students and Their Instructors
ERIC Educational Resources Information Center
Beenstock, Michael; Feldman, Dan
2018-01-01
First-degree course grades for a cohort of social science students are matched to their instructors, and are statistically decomposed into departmental, course, instructor, and student components. Student ability is measured alternatively by university acceptance scores, or by fixed effects estimated using panel data methods. After controlling for…
DOT National Transportation Integrated Search
2006-12-01
Transportation agencies seem to be paying more and more for less and less. Project costs are outpacing budget estimates in many areas, while growth in demand continues to strain available capacity. Right of way costs in particular are consuming a...
A Shortcut to Estimating Economic Impact.
ERIC Educational Resources Information Center
Ryan, G. Jeremiah
1985-01-01
Describes a project which developed an alternative model for determining the economic impact of community colleges in New Jersey. Explains methods used to substitute for student and staff surveys, and the retail gravity model. Includes the instrument used to determine the individual college and statewide impacts and a bibliography. (AYC)
An Unbiased Estimate of Global Interrater Agreement
ERIC Educational Resources Information Center
Cousineau, Denis; Laurencelle, Louis
2017-01-01
Assessing global interrater agreement is difficult as most published indices are affected by the presence of mixtures of agreements and disagreements. A previously proposed method was shown to be specifically sensitive to global agreement, excluding mixtures, but also negatively biased. Here, we propose two alternatives in an attempt to find what…
Three-body radiative capture reactions
NASA Astrophysics Data System (ADS)
Casal, J.; Rodríguez-Gallardo, M.; Arias, J. M.; Gómez-Camacho, J.
2018-01-01
Radiative capture reaction rates for 6He, 9Be and 17Ne formation at astrophysical conditions are studied within a three-body model using the analytical transformed harmonic oscillator method to calculate their states. An alternative procedure to estimate these rates from experimental data on low-energy breakup is also discussed.
Estimates of the Sampling Distribution of Scalability Coefficient H
ERIC Educational Resources Information Center
Van Onna, Marieke J. H.
2004-01-01
Coefficient "H" is used as an index of scalability in nonparametric item response theory (NIRT). It indicates the degree to which a set of items rank orders examinees. Theoretical sampling distributions, however, have only been derived asymptotically and only under restrictive conditions. Bootstrap methods offer an alternative possibility to…
2016-01-01
Two foundational methods for estimating the total economic burden of disease are cost of illness (COI) and willingness to pay (WTP). WTP measures the full cost to society, but WTP estimates are difficult to compute and rarely available. COI methods are more often used but less likely to reflect full costs. This paper attempts to estimate the full economic cost (2014$) of illnesses resulting from exposure to dampness and mold using COI methods and WTP where the data is available. A limited sensitivity analysis of alternative methods and assumptions demonstrates a wide potential range of estimates. In the final estimates, the total annual cost to society attributable to dampness and mold is estimated to be $3.7 (2.3–4.7) billion for allergic rhinitis, $1.9 (1.1–2.3) billion for acute bronchitis, $15.1 (9.4–20.6) billion for asthma morbidity, and $1.7 (0.4–4.5) billion for asthma mortality. The corresponding costs from all causes, not limited to dampness and mold, using the same approach would be $24.8 billion for allergic rhinitis, $13.5 billion for acute bronchitis, $94.5 billion for asthma morbidity, and $10.8 billion for asthma mortality. PMID:27313630
Cassini, Rudi; Scremin, Mara; Contiero, Barbara; Drago, Andrea; Vettorato, Christian; Marcer, Federica; di Regalbono, Antonio Frangipane
2016-06-01
Ambient insecticides are receiving increasing attention in many developed countries because of their value in reducing mosquito nuisance. As required by the European Union Biocidal Products Regulation 528/2012, these devices require appropriate testing of their efficacy, which is based on estimating the knockdown and mortality rates of free-flying (free) mosquitoes in a test room. However, evaluations using free mosquitoes present many complexities. The performances of 6 alternative methods with mosquitoes held in 2 different cage designs (steel wire and gauze/plastic) with and without an operating fan for air circulation were monitored in a test room through a closed-circuit television system and were compared with the currently recommended method using free mosquitoes. Results for caged mosquitoes without a fan showed a clearly delayed knockdown effect, whereas outcomes for caged mosquitoes with a fan recorded higher mortality at 24 h, compared to free mosquitoes. Among the 6 methods, the use of gauze/plastic cages with a fan wind speed of 2.5-2.8 m/sec was the only one whose results did not differ significantly from those for free mosquitoes, and it therefore appears to be the best alternative for accurately assessing knockdown by ambient insecticides.
ENGINEERING ECONOMIC ANALYSIS OF A PROGRAM FOR ARTIFICIAL GROUNDWATER RECHARGE.
Reichard, Eric G.; Bredehoeft, John D.
1984-01-01
This study describes and demonstrates two alternate methods for evaluating the relative costs and benefits of artificial groundwater recharge using percolation ponds. The first analysis considers the benefits to be the reduction of pumping lifts and land subsidence; the second considers benefits as the alternative costs of a comparable surface delivery system. Example computations are carried out for an existing artificial recharge program in Santa Clara Valley in California. A computer groundwater model is used to estimate both the average long term and the drought period effects of artificial recharge in the study area. Results indicate that the costs of artificial recharge are considerably smaller than the alternative costs of an equivalent surface system.
An activity-based methodology for operations cost analysis
NASA Technical Reports Server (NTRS)
Korsmeyer, David; Bilby, Curt; Frizzell, R. A.
1991-01-01
This report describes an activity-based cost estimation method, proposed for the Space Exploration Initiative (SEI), as an alternative to NASA's traditional mass-based cost estimation method. A case study demonstrates how the activity-based cost estimation technique can be used to identify the operations that have a significant impact on costs over the life cycle of the SEI. The case study yielded an operations cost of $101 billion for the 20-year span of the lunar surface operations for the Option 5a program architecture. In addition, the results indicated that the support and training costs for the missions were the greatest contributors to the annual cost estimates. A cost-sensitivity analysis of the cultural and architectural drivers determined that the length of training and the amount of support associated with the ground support personnel for mission activities are the most significant cost contributors.
Baseline estimation from simultaneous satellite laser tracking
NASA Technical Reports Server (NTRS)
Dedes, George C.
1987-01-01
Simultaneous Range Differences (SRDs) to Lageos are obtained by dividing the observing stations into pairs with quasi-simultaneous observations. For each of those pairs the station with the least number of observations is identified, and at its observing epochs interpolated ranges for the alternate station are generated. The SRD observables are obtained by subtracting the actually observed laser range of the station having the least number of observations from the interpolated ranges of the alternate station. On the basis of these observables semidynamic single baseline solutions were performed. The aim of these solutions is to further develop and implement the SRD method in the real data environment, to assess its accuracy, its advantages and disadvantages as related to the range dynamic mode methods, when the baselines are the only parameters of interest. Baselines, using simultaneous laser range observations to Lageos, were also estimated through the purely geometric method. These baselines formed the standards of comparison in the accuracy assessment of the SRD method when compared to that of the range dynamic mode methods. On the basis of this comparison it was concluded that for baselines of regional extent the SRD method is very effective, efficient, and at least as accurate as the range dynamic mode methods, and this on the basis of simple orbital modeling and a limited orbit adjustment. The SRD method is insensitive to the inconsistencies affecting the terrestrial reference frame and simultaneous adjustment of the Earth Rotation Parameters (ERPs) is not necessary.
Trébucq, A; Guérin, N; Ali Ismael, H; Bernatas, J J; Sèvre, J P; Rieder, H L
2005-10-01
Djibouti, 1994 and 2001. To estimate the prevalence of tuberculosis (TB) and average annual risk of TB infection (ARTI) and trends, and to test a new method for calculations. Tuberculin surveys among schoolchildren and sputum smear-positive TB patients. Prevalence of infection was calculated using cut-off points, the mirror image technique, mixture analysis, and a new method based on the operating characteristics of the tuberculin test. Test sensitivity was derived from tuberculin reactions among TB patients and test specificity from a comparison of reaction size distributions among children with and without a BCG scar. The ARTI was estimated to lie between 2.6% and 3.1%, with no significant changes between 1994 and 2001. The close match of the distributions between children tested in 1994 and patients justifies the utilisation of the latter to determine test sensitivity. This new method gave very consistent estimates of the prevalence of infection for any induration cut-off value between 15 and 20 mm. Specificity was successfully determined for 1994, but not for 2001. Mixture analysis confirmed the estimates obtained with the new method. Djibouti has a high ARTI, and no apparent change over the observation time was found. Using operating test characteristics to estimate prevalence of infection looks promising.
Outcome-Dependent Sampling Design and Inference for Cox's Proportional Hazards Model.
Yu, Jichang; Liu, Yanyan; Cai, Jianwen; Sandler, Dale P; Zhou, Haibo
2016-11-01
We propose a cost-effective outcome-dependent sampling design for failure time data and develop an efficient inference procedure for data collected with this design. To account for the biased sampling scheme, we derive estimators from a weighted partial likelihood estimating equation. The proposed estimators for regression parameters are shown to be consistent and asymptotically normally distributed. A criterion that can be used to optimally implement the ODS design in practice is proposed and studied. The small sample performance of the proposed method is evaluated by simulation studies. The proposed design and inference procedure are shown to be statistically more powerful than existing alternative designs with the same sample sizes. We illustrate the proposed method with existing real data from the Cancer Incidence and Mortality of Uranium Miners Study.
Getting It Right Matters: Climate Spectra and Their Estimation
NASA Astrophysics Data System (ADS)
Privalsky, Victor; Yushkov, Vladislav
2018-06-01
In many recent publications, climate spectra estimated with different methods from observed, GCM-simulated, and reconstructed time series contain many peaks at time scales from a few years to many decades and even centuries. However, respective spectral estimates obtained with the autoregressive (AR) and multitapering (MTM) methods showed that spectra of climate time series are smooth and contain no evidence of periodic or quasi-periodic behavior. Four order selection criteria for the autoregressive models were studied and proven sufficiently reliable for 25 time series of climate observations at individual locations or spatially averaged at local-to-global scales. As time series of climate observations are short, an alternative reliable nonparametric approach is Thomson's MTM. These results agree with both the earlier climate spectral analyses and the Markovian stochastic model of climate.
The estimation of tree posterior probabilities using conditional clade probability distributions.
Larget, Bret
2013-07-01
In this article I introduce the idea of conditional independence of separated subtrees as a principle by which to estimate the posterior probability of trees using conditional clade probability distributions rather than simple sample relative frequencies. I describe an algorithm for these calculations and software which implements these ideas. I show that these alternative calculations are very similar to simple sample relative frequencies for high probability trees but are substantially more accurate for relatively low probability trees. The method allows the posterior probability of unsampled trees to be calculated when these trees contain only clades that are in other sampled trees. Furthermore, the method can be used to estimate the total probability of the set of sampled trees which provides a measure of the thoroughness of a posterior sample.
Kulesz, Paulina A; Tian, Siva; Juranek, Jenifer; Fletcher, Jack M; Francis, David J
2015-03-01
Weak structure-function relations for brain and behavior may stem from problems in estimating these relations in small clinical samples with frequently occurring outliers. In the current project, we focused on the utility of using alternative statistics to estimate these relations. Fifty-four children with spina bifida meningomyelocele performed attention tasks and received MRI of the brain. Using a bootstrap sampling process, the Pearson product-moment correlation was compared with 4 robust correlations: the percentage bend correlation, the Winsorized correlation, the skipped correlation using the Donoho-Gasko median, and the skipped correlation using the minimum volume ellipsoid estimator. All methods yielded similar estimates of the relations between measures of brain volume and attention performance. The similarity of estimates across correlation methods suggested that the weak structure-function relations previously found in many studies are not readily attributable to the presence of outlying observations and other factors that violate the assumptions behind the Pearson correlation. Given the difficulty of assembling large samples for brain-behavior studies, estimating correlations using multiple, robust methods may enhance the statistical conclusion validity of studies yielding small, but often clinically significant, correlations. PsycINFO Database Record (c) 2015 APA, all rights reserved.
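One of the robust alternatives named above, the Winsorized correlation, is simply a Pearson correlation computed after pulling in the tails of each variable; the sketch below uses simulated data with a planted outlier and a quantile-clipping approximation to 20% Winsorizing (all values hypothetical).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 54
x = rng.normal(size=n)
y = 0.3 * x + rng.normal(size=n)
x[0], y[0] = 6.0, -6.0        # one gross outlier (hypothetical)

def winsorize(v, prop=0.20):
    """Quantile-clipping version of Winsorizing: pull each tail in to the 20th/80th percentile."""
    lo, hi = np.quantile(v, [prop, 1.0 - prop])
    return np.clip(v, lo, hi)

r_pearson = stats.pearsonr(x, y)[0]
r_winsorized = stats.pearsonr(winsorize(x), winsorize(y))[0]
print(f"Pearson r = {r_pearson:.2f}, 20% Winsorized r = {r_winsorized:.2f}")
```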
Alternative methods to evaluate trial level surrogacy.
Abrahantes, Josè Cortiñas; Shkedy, Ziv; Molenberghs, Geert
2008-01-01
The evaluation and validation of surrogate endpoints have been extensively studied in the last decade. Prentice [1] and Freedman, Graubard and Schatzkin [2] laid the foundations for the evaluation of surrogate endpoints in randomized clinical trials. Later, Buyse et al. [5] proposed a meta-analytic methodology, producing different methods for different settings, which was further studied by Alonso and Molenberghs [9] in their unifying approach based on information theory. In this article, we focus our attention on trial-level surrogacy and propose alternative procedures to evaluate this surrogacy measure that do not pre-specify the type of association. A promising correction based on cross-validation is investigated, as is the construction of confidence intervals for the measure. To avoid making assumptions about the type of relationship between the treatment effects and its distribution, a collection of alternative methods based on regression trees, bagging, random forests, and support vector machines, combined with bootstrap-based confidence intervals and, should one wish, a cross-validation based correction, is proposed and applied. We apply the various strategies to data from three clinical studies: in ophthalmology, in advanced colorectal cancer, and in schizophrenia. The results obtained for the three case studies are compared; they indicate that random forest and bagging models produce larger estimated values for the surrogacy measure, which are in general more stable and have narrower confidence intervals than those from linear regression and support vector regression. For the advanced colorectal cancer studies, we even find that the trial-level surrogacy differs considerably from what has been reported. In general, the alternative methods are more computationally demanding, and the calculation of the confidence intervals in particular requires more computational time than the delta-method counterpart. First, more flexible modeling techniques can be used, allowing for other types of association. Second, when no cross-validation-based correction is applied, overly optimistic trial-level surrogacy estimates will be found; thus cross-validation is highly recommendable. Third, the use of the delta method to calculate confidence intervals is not recommendable, since it relies on assumptions valid only in very large samples. It may also produce range-violating limits. We therefore recommend alternatives: bootstrap methods in general. Also, the information-theoretic approach produces results comparable with the bagging and random forest approaches when cross-validation correction is applied. It is also important to observe that, even in cases where the linear model might be a good option, bagging methods perform well too, and their confidence intervals are narrower.
Jean-Christophe Domec; Ge Sun; Asko Noormets; Michael J. Gavazzi; Emrys A. Treasure; Erika Cohen; Jennifer J. Swenson; Steve G. McNulty; John S. King
2012-01-01
Increasing variability of rainfall patterns requires detailed understanding of the pathways of water loss from ecosystems to optimize carbon uptake and management choices. In the current study we characterized the usability of three alternative methods of different rigor for quantifying stand-level evapotranspiration (ET), partitioned ET into tree transpiration (T),...
Density estimation in wildlife surveys
Bart, Jonathan; Droege, Sam; Geissler, Paul E.; Peterjohn, Bruce G.; Ralph, C. John
2004-01-01
Several authors have recently discussed the problems with using index methods to estimate trends in population size. Some have expressed the view that index methods should virtually never be used. Others have responded by defending index methods and questioning whether better alternatives exist. We suggest that index methods are often a cost-effective component of valid wildlife monitoring but that double-sampling or another procedure that corrects for bias or establishes bounds on bias is essential. The common assertion that index methods require constant detection rates for trend estimation is mathematically incorrect; the requirement is no long-term trend in detection "ratios" (index result/parameter of interest), a requirement that is probably approximately met by many well-designed index surveys. We urge that more attention be given to defining bird density rigorously and in ways useful to managers. Once this is done, 4 sources of bias in density estimates may be distinguished: coverage, closure, surplus birds, and detection rates. Distance, double-observer, and removal methods do not reduce bias due to coverage, closure, or surplus birds. These methods may yield unbiased estimates of the number of birds present at the time of the survey, but only if their required assumptions are met, which we doubt occurs very often in practice. Double-sampling, in contrast, produces unbiased density estimates if the plots are randomly selected and estimates on the intensive surveys are unbiased. More work is needed, however, to determine the feasibility of double-sampling in different populations and habitats. We believe the tension that has developed over appropriate survey methods can best be resolved through increased appreciation of the mathematical aspects of indices, especially the effects of bias, and through studies in which candidate methods are evaluated against known numbers determined through intensive surveys.
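A minimal numerical sketch of the double-sampling correction advocated above (with made-up counts, and assuming the intensive-survey counts are unbiased) looks like this:

```python
import numpy as np

rng = np.random.default_rng(2)

index_counts = rng.poisson(8, size=200)            # cheap index survey on 200 plots
sub = rng.choice(200, size=25, replace=False)      # random subsample gets an intensive survey
true_counts = index_counts[sub] + rng.poisson(3, size=25)   # index misses some birds

detection_ratio = index_counts[sub].sum() / true_counts.sum()   # index result / parameter of interest
estimated_total = index_counts.sum() / detection_ratio           # ratio-corrected estimate

print(f"uncorrected index total: {index_counts.sum()}")
print(f"ratio-corrected total:   {estimated_total:.0f}")
```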
NASA Technical Reports Server (NTRS)
1980-01-01
Left ventricular stroke volume was estimated from the systolic velocity integral in the ascending aorta measured by pulsed Doppler echocardiography (PDE) and the cross-sectional area of the aorta estimated by M-mode echocardiography in 15 patients with coronary disease undergoing right catheterization for diagnostic purposes. Cardiac output was calculated from stroke volume and heart rate using the PDE method as well as the Fick procedure for comparison. The mean value for the cardiac output via the PDE method (4.42 L/min) was only 6% lower than the cardiac output obtained from the Fick procedure (4.69 L/min), and the correlation between the two methods was excellent (r=0.967, p less than .01). The good agreement between the two methods demonstrates that the PDE technique offers a reliable noninvasive alternative for estimating cardiac output, requiring no active cooperation by the subject. It was concluded that the Doppler method is superior to the Fick method in that it provides beat-by-beat information on cardiac performance.
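For readers unfamiliar with the calculation, the arithmetic is simply stroke volume = velocity integral × aortic cross-sectional area and cardiac output = stroke volume × heart rate; the numbers below are hypothetical, not taken from the study.

```python
import math

vti_cm = 20.0                      # systolic velocity integral (cm per beat)
aortic_diameter_cm = 2.1           # from M-mode echocardiography
area_cm2 = math.pi * (aortic_diameter_cm / 2) ** 2
heart_rate = 70                    # beats per minute

stroke_volume_ml = vti_cm * area_cm2                      # ~69 mL per beat
cardiac_output_lpm = stroke_volume_ml * heart_rate / 1000.0
print(f"SV ≈ {stroke_volume_ml:.0f} mL, CO ≈ {cardiac_output_lpm:.1f} L/min")
```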
Ammour, Y; Faizuloev, E; Borisova, T; Nikonova, A; Dmitriev, G; Lobodanov, S; Zverev, V
2013-01-01
In this study, a rapid quantitative method using TaqMan-based real-time reverse transcription-polymerase chain reaction (qPCR-RT) has been developed for estimating the titers of measles, mumps and rubella (MMR) viruses in infected cell culture supernatants. The qPCR-RT assay was demonstrated to be a specific, sensitive, efficient and reproducible method. For MMR viral samples obtained during MMR viral propagations in Vero cells at different multiplicities of infection, titers determined by the qPCR-RT assay were compared, in paired samples, with estimates of infectious virus obtained by a traditional, commonly used method for MMR viruses, the 50% cell culture infective dose (CCID50) assay. Pearson analysis showed a significant correlation between the two methods for a certain period after viral inoculation. Furthermore, the established qPCR-RT assay was faster and less laborious. The developed method could be used as an alternative method or a supplementary tool for routine titer estimation during MMR vaccine production. Copyright © 2012 Elsevier B.V. All rights reserved.
A novel application of artificial neural network for wind speed estimation
NASA Astrophysics Data System (ADS)
Fang, Da; Wang, Jianzhou
2017-05-01
Providing accurate multi-step wind speed estimation models has increasing significance because of the important technical and economic impacts of wind speed on power grid security and environmental benefits. In this study, combined strategies for wind speed forecasting are proposed based on an intelligent data processing system using artificial neural networks (ANNs). A generalized regression neural network and an Elman neural network are employed to form two hybrid models. The approach employs one ANN to model the samples, achieving data denoising and assimilation, and applies the other to predict wind speed using the pre-processed samples. The proposed method is demonstrated in terms of the prediction improvements of the hybrid models over a single ANN and a typical forecasting method. To give sufficient cases for the study, monthly average wind speeds from four observation sites in western China over four given years were used to test the models. Multiple evaluation methods demonstrated that the proposed method provides a promising alternative technique for monthly average wind speed estimation.
Cross-Validation of Survival Bump Hunting by Recursive Peeling Methods.
Dazard, Jean-Eudes; Choe, Michael; LeBlanc, Michael; Rao, J Sunil
2014-08-01
We introduce a survival/risk bump hunting framework to build a bump hunting model with a possibly censored time-to-event type of response and to validate model estimates. First, we describe the use of adequate survival peeling criteria to build a survival/risk bump hunting model based on recursive peeling methods. Our method called "Patient Recursive Survival Peeling" is a rule-induction method that makes use of specific peeling criteria such as hazard ratio or log-rank statistics. Second, to validate our model estimates and improve survival prediction accuracy, we describe a resampling-based validation technique specifically designed for the joint task of decision rule making by recursive peeling (i.e. decision-box) and survival estimation. This alternative technique, called "combined" cross-validation is done by combining test samples over the cross-validation loops, a design allowing for bump hunting by recursive peeling in a survival setting. We provide empirical results showing the importance of cross-validation and replication.
McGinitie, Teague M; Ebrahimi-Najafabadi, Heshmatollah; Harynuk, James J
2014-01-17
A new method for estimating the thermodynamic parameters ΔH(T0), ΔS(T0), and ΔCP for use in thermodynamic modeling of GC×GC separations has been developed. The method is an alternative to the traditional isothermal separations required to fit a three-parameter thermodynamic model to retention data. Herein, a non-linear optimization technique is used to estimate the parameters from a series of temperature-programmed separations using the Nelder-Mead simplex algorithm. With this method, the time required to obtain estimates of thermodynamic parameters for a series of analytes is significantly reduced. This new method allows for precise predictions of retention time, with the average error being only 0.2 s for 1D separations. Predictions for GC×GC separations were also in agreement with experimental measurements, having an average relative error of 0.37% for (1)tr and 2.1% for (2)tr. Copyright © 2013 Elsevier B.V. All rights reserved.
Jin, Wen; Jiang, Hai; Liu, Yimin; Klampfl, Erica
2017-01-01
Discrete choice experiments have been widely applied to elicit behavioral preferences in the literature. In many of these experiments, the alternatives are named alternatives, meaning that they are naturally associated with specific names. For example, in a mode choice study, the alternatives can be associated with names such as car, taxi, bus, and subway. A fundamental issue that arises in stated choice experiments is whether to treat the alternatives' names as labels (that is, labeled treatment) or as attributes (that is, unlabeled treatment) in the design as well as the presentation phases of the choice sets. In this research, we investigate the impact of labeled versus unlabeled treatments of alternatives' names on the outcome of stated choice experiments, a question that has not been thoroughly investigated in the literature. Using results from a mode choice study, we find that the labeled or the unlabeled treatment of alternatives' names in either the design or the presentation phase of the choice experiment does not statistically affect the estimates of the coefficient parameters. We then proceed to measure the influence on the willingness-to-pay (WTP) estimates. By using a random-effects model to relate the conditional WTP estimates to the socioeconomic characteristics of the individuals and the labeled versus unlabeled treatments of alternatives' names, we find that: a) Given the treatment of alternatives' names in the presentation phase, the treatment of alternatives' names in the design phase does not statistically affect the estimates of the WTP measures; and b) Given the treatment of alternatives' names in the design phase, the labeled treatment of alternatives' names in the presentation phase causes the corresponding WTP estimates to be slightly higher.
Simulation of design-unbiased point-to-particle sampling compared to alternatives on plantation rows
Thomas B. Lynch; David Hamlin; Mark J. Ducey
2016-01-01
Total quantities of tree attributes can be estimated in plantations by sampling on plantation rows using several methods. At random sample points on a row, either fixed row lengths or variable row lengths with a fixed number of sample trees can be assessed. Ratio of means or mean of ratios estimators can be developed for the fixed number of trees option but are not...
Feature Grouping and Selection Over an Undirected Graph.
Yang, Sen; Yuan, Lei; Lai, Ying-Cheng; Shen, Xiaotong; Wonka, Peter; Ye, Jieping
2012-01-01
High-dimensional regression/classification continues to be an important and challenging problem, especially when features are highly correlated. Feature selection, combined with additional structure information on the features, has been considered promising for improving regression/classification performance. Graph-guided fused lasso (GFlasso) has recently been proposed to facilitate feature selection and graph structure exploitation when features exhibit certain graph structures. However, the formulation in GFlasso relies on pairwise sample correlations to perform feature grouping, which could introduce additional estimation bias. In this paper, we propose three new feature grouping and selection methods to resolve this issue. The first method employs a convex function to penalize the pairwise ℓ∞ norm of connected regression/classification coefficients, achieving simultaneous feature grouping and selection. The second method improves the first one by utilizing a non-convex function to reduce the estimation bias. The third one is the extension of the second method using a truncated ℓ1 regularization to further reduce the estimation bias. The proposed methods combine feature grouping and feature selection to enhance estimation accuracy. We employ the alternating direction method of multipliers (ADMM) and difference of convex functions (DC) programming to solve the proposed formulations. Our experimental results on synthetic data and two real datasets demonstrate the effectiveness of the proposed methods.
Nixon, Richard M; Wonderling, David; Grieve, Richard D
2010-03-01
Cost-effectiveness analyses (CEA) alongside randomised controlled trials commonly estimate incremental net benefits (INB), with 95% confidence intervals, and compute cost-effectiveness acceptability curves and confidence ellipses. Two alternative non-parametric methods for estimating INB are to apply the central limit theorem (CLT) or to use the non-parametric bootstrap method, although it is unclear which method is preferable. This paper describes the statistical rationale underlying each of these methods and illustrates their application with a trial-based CEA. It compares the sampling uncertainty from using either technique in a Monte Carlo simulation. The experiments are repeated varying the sample size and the skewness of costs in the population. The results showed that, even when data were highly skewed, both methods accurately estimated the true standard errors (SEs) when sample sizes were moderate to large (n>50), and also gave good estimates for small data sets with low skewness. However, when sample sizes were relatively small and the data highly skewed, using the CLT rather than the bootstrap led to slightly more accurate SEs. We conclude that while in general using either method is appropriate, the CLT is easier to implement, and provides SEs that are at least as accurate as the bootstrap. (c) 2009 John Wiley & Sons, Ltd.
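The two variance estimators being compared can be sketched as follows, using simulated skewed cost data; lambda denotes an assumed willingness-to-pay threshold, and this is an illustration of the general idea rather than the paper's simulation design.

```python
import numpy as np

rng = np.random.default_rng(3)
lam = 20000.0                      # assumed willingness-to-pay threshold

def arm(n):
    costs = rng.lognormal(mean=8.0, sigma=1.0, size=n)   # skewed costs
    qalys = rng.normal(0.7, 0.2, size=n)
    return lam * qalys - costs                           # per-patient net benefit

nb_t, nb_c = arm(120), arm(120)
inb = nb_t.mean() - nb_c.mean()                          # incremental net benefit

# Central limit theorem SE of the difference in means.
se_clt = np.sqrt(nb_t.var(ddof=1) / len(nb_t) + nb_c.var(ddof=1) / len(nb_c))

# Non-parametric bootstrap SE, resampling within each arm.
boot = [rng.choice(nb_t, len(nb_t)).mean() - rng.choice(nb_c, len(nb_c)).mean()
        for _ in range(2000)]
se_boot = np.std(boot, ddof=1)

print(f"INB = {inb:.0f}, SE(CLT) = {se_clt:.0f}, SE(bootstrap) = {se_boot:.0f}")
```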
Determination of VA health care costs.
Barnett, Paul G
2003-09-01
In the absence of billing data, alternative methods are used to estimate the cost of hospital stays, outpatient visits, and treatment innovations in the U.S. Department of Veterans Affairs (VA). The choice of method represents a trade-off between accuracy and research cost. The direct measurement method gathers information on staff activities, supplies, equipment, space, and workload. Since it is expensive, direct measurement should be reserved for finding short-run costs, evaluating provider efficiency, or determining the cost of treatments that are innovative or unique to VA. The pseudo-bill method combines utilization data with a non-VA reimbursement schedule. The cost regression method estimates the cost of VA hospital stays by applying the relationship between cost and characteristics of non-VA hospitalizations. The Health Economics Resource Center uses pseudo-bill and cost regression methods to create an encounter-level database of VA costs. Researchers are also beginning to use the VA activity-based cost allocation system.
Crop area estimation based on remotely-sensed data with an accurate but costly subsample
NASA Technical Reports Server (NTRS)
Gunst, R. F.
1983-01-01
Alternatives to sampling-theory stratified and regression estimators of crop production and timber biomass were examined. An alternative estimator viewed as especially promising is the errors-in-variables regression estimator. Investigations established the need for caution with this estimator when the ratio of two error variances is not precisely known.
An Alternative Two Stage Least Squares (2SLS) Estimator for Latent Variable Equations.
ERIC Educational Resources Information Center
Bollen, Kenneth A.
1996-01-01
An alternative two-stage least squares (2SLS) estimator of the parameters in LISREL type models is proposed and contrasted with existing estimators. The new 2SLS estimator allows observed and latent variables to originate from nonnormal distributions, is consistent, has a known asymptotic covariance matrix, and can be estimated with standard…
Methods for the accurate estimation of confidence intervals on protein folding ϕ-values
Ruczinski, Ingo; Sosnick, Tobin R.; Plaxco, Kevin W.
2006-01-01
ϕ-Values provide an important benchmark for the comparison of experimental protein folding studies to computer simulations and theories of the folding process. Despite the growing importance of ϕ measurements, however, formulas to quantify the precision with which ϕ is measured have seen little significant discussion. Moreover, a commonly employed method for the determination of standard errors on ϕ estimates assumes that estimates of the changes in free energy of the transition and folded states are independent. Here we demonstrate that this assumption is usually incorrect and that this typically leads to the underestimation of ϕ precision. We derive an analytical expression for the precision of ϕ estimates (assuming linear chevron behavior) that explicitly takes this dependence into account. We also describe an alternative method that implicitly corrects for the effect. By simulating experimental chevron data, we show that both methods accurately estimate ϕ confidence intervals. We also explore the effects of the commonly employed techniques of calculating ϕ from kinetics estimated at non-zero denaturant concentrations and via the assumption of parallel chevron arms. We find that these approaches can produce significantly different estimates for ϕ (again, even for truly linear chevron behavior), indicating that they are not equivalent, interchangeable measures of transition state structure. Lastly, we describe a Web-based implementation of the above algorithms for general use by the protein folding community. PMID:17008714
Pittman, Jeremy Joshua; Arnall, Daryl Brian; Interrante, Sindy M.; Moffet, Corey A.; Butler, Twain J.
2015-01-01
Non-destructive biomass estimation of vegetation has been performed via remote sensing as well as physical measurements. An effective method for estimating biomass must have accuracy comparable to the accepted standard of destructive removal. Estimation or measurement of height is commonly employed to create a relationship between height and mass. This study examined several types of ground-based mobile sensing strategies for forage biomass estimation. Forage production experiments consisting of alfalfa (Medicago sativa L.), bermudagrass [Cynodon dactylon (L.) Pers.], and wheat (Triticum aestivum L.) were employed to examine sensor biomass estimation (laser, ultrasonic, and spectral) as compared to physical measurements (plate meter and meter stick) and the traditional harvest method (clipping). Predictive models were constructed via partial least squares regression and modeled estimates were compared to the physically measured biomass. Least significant difference-separated mean estimates were examined to evaluate differences in the physical measurements and sensor estimates for canopy height and biomass. Differences between methods were minimal (average percent error of 11.2% between predicted values and machine- and quadrat-harvested biomass values of 1.64 and 4.91 t·ha−1, respectively), except at the lowest measured biomass (average percent error of 89% for harvester- and quadrat-harvested biomass < 0.79 t·ha−1) and the greatest measured biomass (average percent error of 18% for harvester- and quadrat-harvested biomass > 6.4 t·ha−1). These data suggest that using mobile sensor-based biomass estimation models could be an effective alternative to the traditional clipping method for rapid, accurate in-field biomass estimation. PMID:25635415
Improving RNA-Seq expression estimation by modeling isoform- and exon-specific read sequencing rate.
Liu, Xuejun; Shi, Xinxin; Chen, Chunlin; Zhang, Li
2015-10-16
The high-throughput sequencing technology RNA-Seq has been widely used in recent years to quantify gene and isoform expression in transcriptome studies. Accurate expression measurement from the millions or billions of short reads generated is hampered by two main difficulties. One is ambiguous mapping of reads to the reference transcriptome caused by alternative splicing, which increases the uncertainty in estimating isoform expression. The other is non-uniformity of read distribution along the reference transcriptome due to positional, sequencing, mappability and other undiscovered sources of bias. This violates the uniform read-distribution assumption of many expression calculation approaches, such as the direct RPKM calculation and Poisson-based models. Many methods have been proposed to address these difficulties. Some approaches employ latent variable models to discover the underlying pattern of read sequencing. However, most of these methods perform bias correction based on surrounding sequence content and share the bias models across all genes. They therefore cannot estimate gene- and isoform-specific biases as revealed by recent studies. We propose a latent variable model, NLDMseq, to estimate gene and isoform expression. Our method adopts latent variables to model the unknown isoforms from which reads originate and the underlying percentages of multiple spliced variants. The isoform- and exon-specific read sequencing biases are modeled to account for the non-uniformity of read distribution, and are identified by utilizing the replicate information of multiple lanes of a single library run. We employ simulation and real data to verify the performance of our method in terms of accuracy in the calculation of gene and isoform expression. Results show that NLDMseq obtains competitive estimates of gene and isoform expression compared to popular alternatives. Finally, the proposed method is applied to the detection of differential expression (DE) to show its usefulness in downstream analysis. The proposed NLDMseq method provides an approach to accurately estimate gene and isoform expression from RNA-Seq data by modeling the isoform- and exon-specific read sequencing biases. It makes use of a latent variable model to discover the hidden pattern of read sequencing. We have shown that it works well in both simulations and real datasets, and has competitive performance compared to popular methods. The method has been implemented as freely available software, which can be found at https://github.com/PUGEA/NLDMseq.
Robust w-Estimators for Cryo-EM Class Means.
Huang, Chenxi; Tagare, Hemant D
2016-02-01
A critical step in cryogenic electron microscopy (cryo-EM) image analysis is to calculate the average of all images aligned to a projection direction. This average, called the class mean, improves the signal-to-noise ratio in single-particle reconstruction. The averaging step is often compromised because of outlier images of ice, contaminants, and particle fragments. Outlier detection and rejection in the majority of current cryo-EM methods are done using cross-correlation with a manually determined threshold. Empirical assessment shows that the performance of these methods is very sensitive to the threshold. This paper proposes an alternative: a w-estimator of the average image, which is robust to outliers and which does not use a threshold. Various properties of the estimator, such as consistency and influence function, are investigated. An extension of the estimator to images with different contrast transfer functions is also provided. Experiments with simulated and real cryo-EM images show that the proposed estimator performs quite well in the presence of outliers.
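As a rough illustration of the w-estimation idea (not the paper's estimator, which is defined for aligned cryo-EM projections and handles contrast transfer functions), the sketch below computes a robust average of images by downweighting, rather than thresholding, images with large residuals; the weighting function and data are assumptions for demonstration only.

```python
import numpy as np

def robust_mean(images, c=3.0, n_iter=10):
    """images: array of shape (n_images, height, width); Tukey-biweight reweighted mean."""
    mean = images.mean(axis=0)
    for _ in range(n_iter):
        resid = np.linalg.norm((images - mean).reshape(len(images), -1), axis=1)
        scale = np.median(resid) + 1e-12
        u = resid / (c * scale)
        w = np.where(u < 1, (1 - u ** 2) ** 2, 0.0)      # weights shrink smoothly to zero
        mean = np.tensordot(w, images, axes=1) / w.sum()  # weighted average image
    return mean, w

rng = np.random.default_rng(9)
imgs = rng.normal(0, 1, (50, 32, 32)) + 1.0
imgs[:5] += 8.0                                          # outlier "contaminant" images
avg, weights = robust_mean(imgs)
print("weights of the 5 outliers:", np.round(weights[:5], 3))
```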
Achieving metrological precision limits through postselection
NASA Astrophysics Data System (ADS)
Alves, G. Bié; Pimentel, A.; Hor-Meyll, M.; Walborn, S. P.; Davidovich, L.; Filho, R. L. de Matos
2017-01-01
Postselection strategies have been proposed with the aim of amplifying weak signals, which may help to overcome detection thresholds associated with technical noise in high-precision measurements. Here we use an optical setup to experimentally explore two different postselection protocols for the estimation of a small parameter: a weak-value amplification procedure and an alternative method that does not provide amplification but nonetheless is shown to be more robust for the sake of parameter estimation. Each technique leads approximately to the saturation of quantum limits for the estimation precision, expressed by the Cramér-Rao bound. For both situations, we show that parameter estimation is improved when the postselection statistics are considered together with the measurement device.
Thiel, A; Etheve, S; Fabian, E; Leeman, W R; Plautz, J R
2015-10-01
Consumer health risk assessment for feed additives is based on the estimated human exposure to the additive that may occur in edible livestock tissues, compared to its hazard. We present an approach using alternative methods for consumer health risk assessment. The aim was to use as few animals as possible to estimate the hazard and human exposure without jeopardizing safety in use. As an example, we selected the feed flavoring substance piperine and applied in silico modeling for residue estimation, results from literature surveys, and Read-Across to assess metabolism in different species. Results were compared to experimental in vitro metabolism data in rat and chicken, and to quantitative analysis of residue levels from the in vivo situation in livestock. In silico residue modeling proved to be a worst case: the modeled residue levels were considerably higher than the measured residue levels. The in vitro evaluation of livestock versus rodent metabolism revealed no major differences in metabolism between the species. We successfully performed a consumer health risk assessment without performing additional animal experiments. As shown, the use and combination of different alternative methods supports animal welfare considerations and provides a future perspective for reducing the number of animals used. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
Reply to Efford on ‘Integrating resource selection information with spatial capture-recapture’
Royle, Andy; Chandler, Richard; Sun, Catherine C.; Fuller, Angela K.
2014-01-01
3. A key point of Royle et al. (Methods in Ecology and Evolution, 2013, 4) was that active resource selection induces heterogeneity in encounter probability which, if unaccounted for, should bias estimates of population size or density. The models of Royle et al. (Methods in Ecology and Evolution, 2013, 4) and Efford (Methods in Ecology and Evolution, 2014, 000, 000) merely amount to alternative models of resource selection, and hence varying amounts of heterogeneity in encounter probability.
Prediction of overall and blade-element performance for axial-flow pump configurations
NASA Technical Reports Server (NTRS)
Serovy, G. K.; Kavanagh, P.; Okiishi, T. H.; Miller, M. J.
1973-01-01
A method and a digital computer program for prediction of the distributions of fluid velocity and properties in axial flow pump configurations are described and evaluated. The method uses the blade-element flow model and an iterative numerical solution of the radial equilibrium and continuity conditions. Correlated experimental results are used to generate alternative methods for estimating blade-element turning and loss characteristics. Detailed descriptions of the computer program are included, with example input and typical computed results.
Last menstrual period provides the best estimate of gestation length for women in rural Guatemala.
Neufeld, Lynnette M; Haas, Jere D; Grajéda, Ruben; Martorell, Reynaldo
2006-07-01
The accurate estimation of gestational age in field studies in rural areas of developing countries continues to present difficulties for researchers. Our objective was to determine the best method for gestational age estimation in rural Guatemala. Women of childbearing age from four communities in rural Guatemala were invited to participate in a longitudinal study. Gestational age at birth was determined by an early second trimester measure of biparietal diameter, last menstrual period (LMP), the Capurro neonatal examination and symphysis-fundus height (SFH) for 171 women-infant pairs. Regression modelling was used to determine which method provided the best estimate of gestational age using ultrasound as the reference. Gestational age estimated by LMP was within +/-14 days of the ultrasound estimate for 94% of the sample. LMP-estimated gestational age explained 46% of the variance in gestational age estimated by ultrasound whereas the neonatal examination explained only 20%. The results of this study suggest that, when trained field personnel assist women to recall their date of LMP, this date provides the best estimate of gestational age. SFH measured during the second trimester may provide a reasonable alternative when LMP is unavailable.
Psychophysical Reverse Correlation with Multiple Response Alternatives
Dai, Huanping; Micheyl, Christophe
2011-01-01
Psychophysical reverse-correlation methods such as the “classification image” technique provide a unique tool to uncover the internal representations and decision strategies of individual participants in perceptual tasks. Over the last thirty years, these techniques have gained increasing popularity among both visual and auditory psychophysicists. However, thus far, principled applications of the psychophysical reverse-correlation approach have been almost exclusively limited to two-alternative decision (detection or discrimination) tasks. Whether and how reverse-correlation methods can be applied to uncover perceptual templates and decision strategies in situations involving more than just two response alternatives remains largely unclear. Here, the authors consider the problem of estimating perceptual templates and decision strategies in stimulus identification tasks with multiple response alternatives. They describe a modified correlational approach, which can be used to solve this problem. The approach is evaluated under a variety of simulated conditions, including different ratios of internal-to-external noise, different degrees of correlations between the sensory observations, and various statistical distributions of stimulus perturbations. The results indicate that the proposed approach is reasonably robust, suggesting that it could be used in future empirical studies. PMID:20695712
Sensitivity of wildlife habitat models to uncertainties in GIS data
NASA Technical Reports Server (NTRS)
Stoms, David M.; Davis, Frank W.; Cogan, Christopher B.
1992-01-01
Decision makers need to know the reliability of output products from GIS analysis. For many GIS applications, it is not possible to compare these products to an independent measure of 'truth'. Sensitivity analysis offers an alternative means of estimating reliability. In this paper, we present a GIS-based statistical procedure for estimating the sensitivity of wildlife habitat models to uncertainties in input data and model assumptions. The approach is demonstrated in an analysis of habitat associations derived from a GIS database for the endangered California condor. Alternative data sets were generated to compare results over a reasonable range of assumptions about several sources of uncertainty. Sensitivity analysis indicated that condor habitat associations are relatively robust, and the results have increased our confidence in our initial findings. Uncertainties and methods described in the paper have general relevance for many GIS applications.
Comparison of gestational dating methods and implications ...
OBJECTIVES: Estimating gestational age is usually based on date of last menstrual period (LMP) or clinical estimation (CE); both approaches introduce potential bias. Differences in methods of estimation may lead to misclassification and inconsistencies in risk estimates, particularly if exposure assignment is also gestation-dependent. This paper examines a 'what-if' scenario in which alternative methods are used and attempts to elucidate how method choice affects observed results. METHODS: We constructed two 20-week gestational age cohorts of pregnancies between 2000 and 2005 (New Jersey, Pennsylvania, Ohio, USA) using live birth certificates: one defined preterm birth (PTB) status using CE and one using LMP. Within these, we estimated risk for 4 categories of preterm birth (PTBs per 10⁶ pregnancies) and risk differences (RD (95% CIs)) associated with exposure to particulate matter (PM2.5). RESULTS: More births were classified preterm using LMP (16%) compared with CE (8%). RD divergences increased between cohorts as the exposure period approached delivery. Among births between 28 and 31 weeks, week 7 PM2.5 exposure conveyed RDs of 44 (21 to 67) for CE and 50 (18 to 82) for LMP populations, while week 24 exposure conveyed RDs of 33 (11 to 56) and -20 (-50 to 10), respectively. CONCLUSIONS: Different results from analyses restricted to births with both CE and LMP are most likely due to differences in dating methods rather than selection issues. Results are sensitive t
8760-Based Method for Representing Variable Generation Capacity Value in Capacity Expansion Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frew, Bethany A
Capacity expansion models (CEMs) are widely used to evaluate the least-cost portfolio of electricity generators, transmission, and storage needed to reliably serve load over many years or decades. CEMs can be computationally complex and are often forced to estimate key parameters using simplified methods to achieve acceptable solve times or for other reasons. In this paper, we discuss one of these parameters -- capacity value (CV). We first provide a high-level motivation for and overview of CV. We next describe existing modeling simplifications and an alternate approach for estimating CV that utilizes hourly '8760' data of load and VG resources. We then apply this 8760 method to an established CEM, the National Renewable Energy Laboratory's (NREL's) Regional Energy Deployment System (ReEDS) model (Eurek et al. 2016). While this alternative approach for CV is not itself novel, it contributes to the broader CEM community by (1) demonstrating how a simplified 8760 hourly method, which can be easily implemented in other power sector models when data is available, more accurately captures CV trends than a statistical method within the ReEDS CEM, and (2) providing a flexible modeling framework from which other 8760-based system elements (e.g., demand response, storage, and transmission) can be added to further capture important dynamic interactions, such as curtailment.
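The core of the 8760 idea can be sketched in a few lines: approximate the capacity value of a variable-generation (VG) resource as its average output during the highest-load hours of an hourly year. The data below are synthetic placeholders, real implementations typically use net load and multiple years, and this is not ReEDS code.

```python
import numpy as np

rng = np.random.default_rng(4)
hours = 8760
load = 50 + 20 * np.sin(np.linspace(0, 2 * np.pi * 365, hours)) + rng.normal(0, 3, hours)
vg_cf = np.clip(rng.beta(2, 3, hours), 0, 1)      # hourly VG capacity factor (synthetic)

top_hours = np.argsort(load)[-100:]               # 100 highest-load hours of the year
capacity_value = vg_cf[top_hours].mean()          # fraction of nameplate capacity
print(f"approximate capacity value: {capacity_value:.2f} of nameplate")
```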
Non-invasive Fetal ECG Signal Quality Assessment for Multichannel Heart Rate Estimation.
Andreotti, Fernando; Graser, Felix; Malberg, Hagen; Zaunseder, Sebastian
2017-12-01
The noninvasive fetal ECG (NI-FECG) from abdominal recordings offers novel prospects for prenatal monitoring. However, NI-FECG signals are corrupted by various nonstationary noise sources, making the processing of abdominal recordings a challenging task. In this paper, we present an online approach that dynamically assesses the quality of NI-FECG to improve fetal heart rate (FHR) estimation. Using a naive Bayes classifier, state-of-the-art and novel signal quality indices (SQIs), and an existing adaptive Kalman filter, FHR estimation was improved. For the purpose of training and validating the proposed methods, a large annotated private clinical dataset was used. The suggested classification scheme demonstrated an accuracy of Krippendorff's alpha in determining the overall quality of NI-FECG signals. The proposed Kalman filter outperformed alternative methods for FHR estimation achieving accuracy. The proposed algorithm was able to reliably reflect changes in signal quality and can be used to improve FHR estimation. NI-FECG signal quality estimation and multichannel information fusion are largely unexplored topics. Based on previous works, multichannel FHR estimation is a field that could strongly benefit from such methods. The developed SQI algorithms as well as the resulting classifier were made available under a GNU GPL open-source license and contributed to the FECGSYN toolbox.
Gibbs Sampler-Based λ-Dynamics and Rao-Blackwell Estimator for Alchemical Free Energy Calculation.
Ding, Xinqiang; Vilseck, Jonah Z; Hayes, Ryan L; Brooks, Charles L
2017-06-13
λ-dynamics is a generalized ensemble method for alchemical free energy calculations. In traditional λ-dynamics, the alchemical switch variable λ is treated as a continuous variable ranging from 0 to 1 and an empirical estimator is utilized to approximate the free energy. In the present article, we describe an alternative formulation of λ-dynamics that utilizes the Gibbs sampler framework, which we call Gibbs sampler-based λ-dynamics (GSLD). GSLD, like traditional λ-dynamics, can be readily extended to calculate free energy differences between multiple ligands in one simulation. We also introduce a new free energy estimator, the Rao-Blackwell estimator (RBE), for use in conjunction with GSLD. Compared with the current empirical estimator, the advantage of RBE is that RBE is an unbiased estimator and its variance is usually smaller than the current empirical estimator. We also show that the multistate Bennett acceptance ratio equation or the unbinned weighted histogram analysis method equation can be derived using the RBE. We illustrate the use and performance of this new free energy computational framework by application to a simple harmonic system as well as relevant calculations of small molecule relative free energies of solvation and binding to a protein receptor. Our findings demonstrate consistent and improved performance compared with conventional alchemical free energy methods.
Objectivity and validity of EMG method in estimating anaerobic threshold.
Kang, S-K; Kim, J; Kwon, M; Eom, H
2014-08-01
The purposes of this study were to verify and compare the performance of anaerobic threshold (AT) point estimates among different filtering intervals (9, 15, 20, 25, 30 s) and to investigate the interrelationships of AT point estimates obtained by the ventilatory threshold (VT) and muscle fatigue thresholds using electromyographic (EMG) activity during incremental exercise on a cycle ergometer. Sixty-nine untrained male university students who nonetheless exercised regularly volunteered to participate in this study. The incremental exercise protocol applied a consistent stepwise increase in power output of 20 watts per minute until exhaustion. The AT point was also estimated in the same manner using the V-slope program with gas exchange parameters. In general, the estimated AT point times computed by the EMG method were more consistent across the 5 filtering intervals and demonstrated higher correlations among themselves than the values obtained by the VT method. The results of the present study suggest that EMG signals could be used as an alternative or a new option for estimating the AT point. The proposed computing procedure, implemented in Matlab for the analysis of EMG signals, also appeared to be valid and reliable, as it produced nearly identical values and high correlations with VT estimates. © Georg Thieme Verlag KG Stuttgart · New York.
Implementing a method of screening one-well hydraulic barrier design alternatives.
Rubin, Hillel; Shoemaker, Christine A; Köngeter, Jürgen
2009-01-01
This article provides details of applying the method developed by the authors (Rubin et al. 2008b) for screening one-well hydraulic barrier design alternatives. The present article, with its supporting information (a manual and electronic spreadsheets with a case history example), provides the reader with complete details and examples of solving the set of nonlinear equations developed by Rubin et al. (2008b). It allows proper use of the analytical solutions and reproduction of the various charts given by Rubin et al. (2008b). The final outputs of the calculations are the required position and the discharge of the pumping well. If the contaminant source is nonaqueous phase liquid (NAPL) entrapped within the aquifer, then the method provides, as a by-product, an estimate of the aquifer remediation progress due to operating the hydraulic barrier.
Thompson, W.L.
2003-01-01
Hankin and Reeves' (1988) approach to estimating fish abundance in small streams has been applied in stream fish studies across North America. However, their population estimator relies on two key assumptions: (1) removal estimates are equal to the true numbers of fish, and (2) removal estimates are highly correlated with snorkel counts within a subset of sampled stream units. Violations of these assumptions may produce suspect results. To determine possible sources of the assumption violations, I used data on the abundance of steelhead Oncorhynchus mykiss from Hankin and Reeves (1988) in a simulation composed of 50,000 repeated, stratified systematic random samples from a spatially clustered distribution. The simulation was used to investigate effects of a range of removal estimates, from 75% to 100% of true fish abundance, on overall stream fish population estimates. The effects of various categories of removal-estimates-to-snorkel-count correlation levels (r = 0.75-1.0) on fish population estimates were also explored. Simulation results indicated that Hankin and Reeves' approach may produce poor results unless removal estimates exceed at least 85% of the true number of fish within sampled units and unless correlations between removal estimates and snorkel counts are at least 0.90. A potential modification to Hankin and Reeves' approach is the inclusion of environmental covariates that affect detection rates of fish into the removal model or other mark-recapture model. A potential alternative approach is to use snorkeling combined with line transect sampling to estimate fish densities within stream units. As with any method of population estimation, a pilot study should be conducted to evaluate its usefulness, which requires a known (or nearly so) population of fish to serve as a benchmark for evaluating bias and precision of estimators.
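For orientation, the sketch below shows the two ingredients discussed above with made-up numbers: a standard two-pass (Moran/Zippin-type) removal estimate per stream unit, and a Hankin-and-Reeves-style ratio calibration of snorkel counts against removal estimates. It omits the stratification and variance estimation of the actual method.

```python
import numpy as np

def two_pass_removal(c1, c2):
    """Moran/Zippin two-pass removal estimate; requires c1 > c2."""
    return c1 ** 2 / (c1 - c2)

# Subset of units with both snorkel counts and two-pass removal data (hypothetical).
snorkel = np.array([12, 30, 7, 22, 15])
removal = np.array([two_pass_removal(c1, c2) for c1, c2 in
                    [(10, 4), (25, 9), (6, 2), (18, 6), (13, 5)]])

ratio = removal.sum() / snorkel.sum()                 # calibration ratio
all_snorkel_counts = np.array([12, 30, 7, 22, 15, 9, 41, 3, 19, 27])
expanded_estimate = ratio * all_snorkel_counts.sum()  # expand cheap counts by the ratio
print(f"calibration ratio = {ratio:.2f}, expanded estimate = {expanded_estimate:.0f}")
```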
Confidence Intervals for True Scores Using the Skew-Normal Distribution
ERIC Educational Resources Information Center
Garcia-Perez, Miguel A.
2010-01-01
A recent comparative analysis of alternative interval estimation approaches and procedures has shown that confidence intervals (CIs) for true raw scores determined with the Score method--which uses the normal approximation to the binomial distribution--have actual coverage probabilities that are closest to their nominal level. It has also recently…
Least Principal Components Analysis (LPCA): An Alternative to Regression Analysis.
ERIC Educational Resources Information Center
Olson, Jeffery E.
Often, all of the variables in a model are latent, random, or subject to measurement error, or there is not an obvious dependent variable. When any of these conditions exist, an appropriate method for estimating the linear relationships among the variables is Least Principal Components Analysis. Least Principal Components are robust, consistent,…
40 CFR 60.493 - Performance test and compliance provisions.
Code of Federal Regulations, 2011 CFR
2011-07-01
... equivalent or alternative method. The owner or operator shall determine from company records the volume of... estimate the volume of coating used at each facility by using the average dry weight of coating, number of... acceptable to the Administrator. (i) Calculate the volume-weighted average of the total mass of VOC per...
Estimating the cost of sea lice to salmon aquaculture in eastern Canada.
Mustafa, A; Rankaduwa, W; Campbell, P
2001-01-01
Parasitic sea lice are serious problems in aquaculture. The true cost of these parasites is unknown. We demonstrate the economic burden imposed by sea lice, so that researchers, aquatic specialists, and policy makers can approximate the economic cost of this problem and work towards developing alternative control methods. PMID:11195524
Partial Deconvolution with Inaccurate Blur Kernel.
Ren, Dongwei; Zuo, Wangmeng; Zhang, David; Xu, Jun; Zhang, Lei
2017-10-17
Most non-blind deconvolution methods are developed under the error-free kernel assumption, and are not robust to an inaccurate blur kernel. Unfortunately, despite the great progress in blind deconvolution, estimation error remains inevitable during blur kernel estimation. Consequently, severe artifacts such as ringing effects and distortions are likely to be introduced in the non-blind deconvolution stage. In this paper, we tackle this issue by suggesting: (i) a partial map in the Fourier domain for modeling kernel estimation error, and (ii) a partial deconvolution model for robust deblurring with an inaccurate blur kernel. The partial map is constructed by detecting the reliable Fourier entries of the estimated blur kernel, and partial deconvolution is applied to wavelet-based and learning-based models to suppress the adverse effect of kernel estimation error. Furthermore, an E-M algorithm is developed for estimating the partial map and recovering the latent sharp image alternately. Experimental results show that our partial deconvolution model is effective in relieving artifacts caused by an inaccurate blur kernel, and can achieve favorable deblurring quality on synthetic and real blurry images.
Asquith, William H.
2014-01-01
The implementation characteristics of two method-of-L-moments (MLM) algorithms for parameter estimation of the 4-parameter Asymmetric Exponential Power (AEP4) distribution are studied using the R environment for statistical computing. The objective is to validate the algorithms for general application of the AEP4 using R. An algorithm was introduced in the original study of the L-moments for the AEP4. A second or alternative algorithm is shown to have a larger L-moment-parameter domain than the original. The alternative algorithm is shown to provide reliable parameter production and recovery of L-moments from fitted parameters. A proposal is made for AEP4 implementation in conjunction with the 4-parameter Kappa distribution to create a mixed-distribution framework encompassing the joint L-skew and L-kurtosis domains. The example application provides a demonstration of pertinent algorithms with L-moment statistics and two 4-parameter distributions (AEP4 and the Generalized Lambda) for MLM fitting to a modestly asymmetric and heavy-tailed dataset using R.
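As a reminder of what an MLM fit consumes, the sketch below (a generic illustration in Python rather than the study's R code) computes unbiased sample L-moments from probability-weighted moments; parameter estimation then proceeds by matching these to the distribution's theoretical L-moments.

```python
import numpy as np

def sample_lmoments(data):
    """First four sample L-moments via unbiased probability-weighted moments."""
    x = np.sort(np.asarray(data, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((i - 1) / (n - 1) * x) / n
    b2 = np.sum((i - 1) * (i - 2) / ((n - 1) * (n - 2)) * x) / n
    b3 = np.sum((i - 1) * (i - 2) * (i - 3) / ((n - 1) * (n - 2) * (n - 3)) * x) / n
    l1 = b0
    l2 = 2 * b1 - b0
    l3 = 6 * b2 - 6 * b1 + b0
    l4 = 20 * b3 - 30 * b2 + 12 * b1 - b0
    return l1, l2, l3 / l2, l4 / l2      # mean, L-scale, L-skew, L-kurtosis

rng = np.random.default_rng(5)
print(sample_lmoments(rng.exponential(size=500)))   # L-skew should be near 1/3 for exponential data
```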
HDDM: Hierarchical Bayesian estimation of the Drift-Diffusion Model in Python.
Wiecki, Thomas V; Sofer, Imri; Frank, Michael J
2013-01-01
The diffusion model is a commonly used tool to infer latent psychological processes underlying decision-making, and to link them to neural mechanisms based on response times. Although efficient open source software has been made available to quantitatively fit the model to data, current estimation methods require an abundance of response time measurements to recover meaningful parameters, and only provide point estimates of each parameter. In contrast, hierarchical Bayesian parameter estimation methods are useful for enhancing statistical power, allowing for simultaneous estimation of individual subject parameters and the group distribution that they are drawn from, while also providing measures of uncertainty in these parameters in the posterior distribution. Here, we present a novel Python-based toolbox called HDDM (hierarchical drift diffusion model), which allows fast and flexible estimation of the drift-diffusion model and the related linear ballistic accumulator model. HDDM requires fewer data per subject/condition than non-hierarchical methods, allows for full Bayesian data analysis, and can handle outliers in the data. Finally, HDDM supports the estimation of how trial-by-trial measurements (e.g., fMRI) influence decision-making parameters. This paper will first describe the theoretical background of the drift diffusion model and Bayesian inference. We then illustrate usage of the toolbox on a real-world data set from our lab. Finally, parameter recovery studies show that HDDM beats alternative fitting methods like the χ²-quantile method as well as maximum likelihood estimation. The software and documentation can be downloaded at: http://ski.clps.brown.edu/hddm_docs/
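Typical usage looks roughly like the following sketch, which assumes the documented HDDM interface and a CSV file with 'rt', 'response', and condition columns; the file name and the 'stim' column are placeholders, not taken from the paper.

```python
import hddm

# Load response-time data from a CSV file (placeholder file and column names).
data = hddm.load_csv('experiment_data.csv')

# Hierarchical model in which drift rate 'v' varies by stimulus condition.
model = hddm.HDDM(data, depends_on={'v': 'stim'})
model.find_starting_values()          # optimize a starting point for MCMC
model.sample(2000, burn=200)          # draw posterior samples
model.print_stats()                   # posterior summaries and convergence diagnostics
```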
Uncertainties in obtaining high reliability from stress-strength models
NASA Technical Reports Server (NTRS)
Neal, Donald M.; Matthews, William T.; Vangel, Mark G.
1992-01-01
There has been a recent interest in determining high statistical reliability in risk assessment of aircraft components. The potential consequences of incorrectly assuming a particular statistical distribution for the stress or strength data used in obtaining high reliability values are identified. The reliability is computed as the probability of the strength being greater than the stress over the range of stress values; this method is often referred to as the stress-strength model. A sensitivity analysis was performed involving a comparison of reliability results in order to evaluate the effects of assuming specific statistical distributions. Both known population distributions, and those that differed slightly from the known, were considered. Results showed substantial differences in reliability estimates even for almost nondetectable differences in the assumed distributions. These differences represent a potential problem in using the stress-strength model for high reliability computations, since in practice it is impossible to ever know the exact (population) distribution. An alternative reliability computation procedure is examined involving determination of a lower bound on the reliability values using extreme value distributions. This procedure reduces the possibility of obtaining nonconservative reliability estimates. Results indicated the method can provide conservative bounds when computing high reliability.
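The sensitivity being described is easy to reproduce with a Monte Carlo version of the stress-strength calculation R = P(strength > stress); the distributions and parameter values below are illustrative assumptions, not the report's data.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 1_000_000

stress = rng.normal(loc=400.0, scale=30.0, size=n)

# Two strength models that look similar in the bulk but differ in the lower tail.
strength_normal = rng.normal(loc=600.0, scale=40.0, size=n)
strength_weibull = 620.0 * rng.weibull(18.0, size=n)   # similar center, different tail shape

for name, strength in [("normal ", strength_normal), ("weibull", strength_weibull)]:
    p_fail = np.mean(strength <= stress)               # 1 - reliability
    print(f"{name} strength model: P(failure) ≈ {p_fail:.1e}")
```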
Robust PV Degradation Methodology and Application
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jordan, Dirk; Deline, Christopher A; Kurtz, Sarah
The degradation rate plays an important role in predicting and assessing the long-term energy generation of PV systems. Many methods have been proposed for extracting the degradation rate from operational data of PV systems, but most of the published approaches are susceptible to bias due to inverter clipping, module soiling, temporary outages, seasonality, and sensor degradation. In this manuscript, we propose a methodology for determining PV degradation leveraging available modeled clear-sky irradiance data rather than site sensor data, and a robust year-over-year (YOY) rate calculation. We show the method to provide reliable degradation rate estimates even in the case of sensor drift, data shifts, and soiling. Compared with alternate methods, we demonstrate that the proposed method delivers the lowest uncertainty in degradation rate estimates for a fleet of 486 PV systems.
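A stripped-down version of a year-over-year calculation (illustrative only, not NREL's implementation, using a synthetic daily performance index) conveys why the approach is robust: same-calendar-day ratios cancel seasonality, and the median resists outliers from outages or soiling events.

```python
import numpy as np

rng = np.random.default_rng(7)
days = np.arange(6 * 365)                      # six years of a daily performance index
true_rate = -0.008                             # -0.8 %/yr assumed true degradation
perf = (1 + true_rate * days / 365) * (1 + 0.02 * np.sin(2 * np.pi * days / 365))
perf *= 1 + rng.normal(0, 0.01, size=days.size)   # measurement noise

# Ratio of each day to the same calendar day one year earlier -> one annual rate per day.
yoy_rates = perf[365:] / perf[:-365] - 1
estimate = np.median(yoy_rates)                # median is robust to outliers and outages
print(f"estimated degradation rate ≈ {estimate * 100:.2f} %/yr (true {true_rate * 100:.1f})")
```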
Robust PV Degradation Methodology and Application
Jordan, Dirk C.; Deline, Chris; Kurtz, Sarah R.; ...
2017-12-21
The degradation rate plays an important role in predicting and assessing the long-term energy generation of photovoltaics (PV) systems. Many methods have been proposed for extracting the degradation rate from operational data of PV systems, but most of the published approaches are susceptible to bias due to inverter clipping, module soiling, temporary outages, seasonality, and sensor degradation. In this paper, we propose a methodology for determining PV degradation leveraging available modeled clear-sky irradiance data rather than site sensor data, and a robust year-over-year rate calculation. We show the method to provide reliable degradation rate estimates even in the case of sensor drift, data shifts, and soiling. Compared with alternate methods, we demonstrate that the proposed method delivers the lowest uncertainty in degradation rate estimates for a fleet of 486 PV systems.
Total generalized variation-regularized variational model for single image dehazing
NASA Astrophysics Data System (ADS)
Shu, Qiao-Ling; Wu, Chuan-Sheng; Zhong, Qiu-Xiang; Liu, Ryan Wen
2018-04-01
Imaging quality is often significantly degraded under hazy weather conditions. The purpose of this paper is to recover the latent sharp image from its hazy version. It is well known that accurate estimation of depth information can assist in improving dehazing performance. In this paper, a detail-preserving variational model is proposed to simultaneously estimate the haze-free image and the depth map. In particular, the total variation (TV) and total generalized variation (TGV) regularizers are introduced to regularize the haze-free image and the depth map, respectively. The resulting nonsmooth optimization problem is efficiently solved using the alternating direction method of multipliers (ADMM). Comprehensive experiments have been conducted on realistic datasets to compare our proposed method with several state-of-the-art dehazing methods. Results illustrate the superior performance of the proposed method in terms of visual quality evaluation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Makarov, G N; Petin, A N
2016-03-31
We report the results of studies on the isotope-selective infrared multiphoton dissociation (IR MPD) of SF{sub 6} and CF{sub 3}I molecules in a pulsed, gas-dynamically cooled molecular flow interacting with a solid surface. The productivity of this method in the conditions of a specific experiment (by the example of SF{sub 6} molecules) is evaluated. A number of low-energy methods of molecular laser isotope separation based on the use of infrared lasers for selective excitation of molecules are analysed and their productivity is estimated. The methods are compared with those of selective dissociation of molecules in the flow interacting with a surface. The advantages of this method compared to the low-energy methods of molecular laser isotope separation and the IR MPD method in unperturbed jets and flows are shown. It is concluded that this method could be a promising alternative to the low-energy methods of molecular laser isotope separation. (laser separation of isotopes)
NASA Astrophysics Data System (ADS)
Wu, Z. Y.; Zhang, L.; Wang, X. M.; Munger, J. W.
2015-07-01
Small pollutant concentration gradients between levels above a plant canopy result in large uncertainties in estimated air-surface exchange fluxes when using existing micrometeorological gradient methods, including the aerodynamic gradient method (AGM) and the modified Bowen ratio method (MBR). A modified micrometeorological gradient method (MGM) is proposed in this study for estimating O3 dry deposition fluxes over a forest canopy using concentration gradients between a level above and a level below the canopy top, taking advantage of relatively large gradients between these levels due to significant pollutant uptake in the top layers of the canopy. The new method is compared with the AGM and MBR methods and is also evaluated using eddy-covariance (EC) flux measurements collected at the Harvard Forest Environmental Measurement Site, Massachusetts, during 1993-2000. All three gradient methods (AGM, MBR, and MGM) produced similar diurnal cycles of O3 dry deposition velocity (Vd(O3)) to the EC measurements, with the MGM method being the closest in magnitude to the EC measurements. The multi-year average Vd(O3) differed significantly between these methods, with the AGM, MBR, and MGM methods being 2.28, 1.45, and 1.18 times that of the EC, respectively. Sensitivity experiments identified several input parameters for the MGM method as first-order parameters that affect the estimated Vd(O3). A 10% uncertainty in the wind speed attenuation coefficient or canopy displacement height can cause about 10% uncertainty in the estimated Vd(O3). An unrealistic leaf area density vertical profile can cause an uncertainty of a factor of 2.0 in the estimated Vd(O3). Other input parameters or formulas for stability functions only caused an uncertainty of a few percent. The new method provides an alternative approach to monitoring/estimating long-term deposition fluxes of similar pollutants over tall canopies.
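For readers unfamiliar with flux-gradient estimation, the following sketch shows the generic two-level calculation that underlies the AGM-style methods discussed above; the MGM differs in using levels above and below the canopy top and a canopy-specific exchange coefficient. The eddy diffusivity and concentration values here are illustrative assumptions, not measurements from the study.

```python
def deposition_velocity(c_upper, c_lower, z_upper, z_lower, K, c_ref):
    """Dry deposition velocity from a two-level concentration gradient.

    Flux-gradient relation (neutral stability assumed, illustrative values):
        F  = -K * dC/dz      (micromet convention: downward flux is negative)
        Vd = -F / C(z_ref)

    K      : eddy diffusivity between the two levels [m2 s-1]
    c_*    : concentrations [ug m-3] at heights z_* [m]
    c_ref  : concentration at the reference height [ug m-3]
    """
    dC_dz = (c_upper - c_lower) / (z_upper - z_lower)
    flux = -K * dC_dz
    return -flux / c_ref            # [m s-1], positive for deposition

# O3 higher aloft than near the canopy -> positive gradient -> deposition.
print(deposition_velocity(c_upper=80.0, c_lower=72.0,
                          z_upper=29.0, z_lower=24.0,
                          K=0.5, c_ref=80.0))    # ~0.01 m s-1
```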
Robust fitting for neuroreceptor mapping.
Chang, Chung; Ogden, R Todd
2009-03-15
Among many other uses, positron emission tomography (PET) can be used in studies to estimate the density of a neuroreceptor at each location throughout the brain by measuring the concentration of a radiotracer over time and modeling its kinetics. There are a variety of kinetic models in common usage and these typically rely on nonlinear least-squares (LS) algorithms for parameter estimation. However, PET data often contain artifacts (such as uncorrected head motion) and so the assumptions on which the LS methods are based may be violated. Quantile regression (QR) provides a robust alternative to LS methods and has been used successfully in many applications. We consider fitting various kinetic models to PET data using QR and study the relative performance of the methods via simulation. A data adaptive method for choosing between LS and QR is proposed and the performance of this method is also studied.
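The contrast between least-squares and quantile-regression fitting can be seen with a toy example. The sketch below fits a simple mono-exponential (a stand-in for the actual PET kinetic models, which are more elaborate) by minimizing squared residuals versus absolute residuals (the tau = 0.5 quantile fit); the artifact injected at one frame perturbs the LS fit more than the LAD/QR fit. All numbers are simulated assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def model(t, a, k):
    return a * np.exp(-k * t)            # toy stand-in for a PET kinetic model

def fit(t, y, loss):
    obj = lambda p: np.sum(loss(y - model(t, *p)))
    return minimize(obj, x0=[1.0, 0.1], method="Nelder-Mead").x

rng = np.random.default_rng(0)
t = np.linspace(0.0, 60.0, 40)
y = model(t, 2.0, 0.05) + rng.normal(0, 0.05, t.size)
y[25] += 1.5                             # artifact, e.g. uncorrected head motion

ls = fit(t, y, lambda r: r ** 2)         # nonlinear least squares
lad = fit(t, y, np.abs)                  # tau = 0.5 quantile (LAD) fit
print("LS:", ls, "LAD/QR:", lad)         # LAD is less disturbed by the spike
```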
Unification of field theory and maximum entropy methods for learning probability densities
NASA Astrophysics Data System (ADS)
Kinney, Justin B.
2015-09-01
The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.
Unification of field theory and maximum entropy methods for learning probability densities.
Kinney, Justin B
2015-09-01
The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.
Thorndahl, S; Willems, P
2008-01-01
Failure of urban drainage systems may occur due to surcharge or flooding at specific manholes in the system, or due to overflows from combined sewer systems to receiving waters. To quantify the probability or return period of failure, standard approaches make use of the simulation of design storms or long historical rainfall series in a hydrodynamic model of the urban drainage system. In this paper, an alternative probabilistic method is investigated: the first-order reliability method (FORM). To apply this method, a long rainfall time series was divided in rainstorms (rain events), and each rainstorm conceptualized to a synthetic rainfall hyetograph by a Gaussian shape with the parameters rainstorm depth, duration and peak intensity. Probability distributions were calibrated for these three parameters and used on the basis of the failure probability estimation, together with a hydrodynamic simulation model to determine the failure conditions for each set of parameters. The method takes into account the uncertainties involved in the rainstorm parameterization. Comparison is made between the failure probability results of the FORM method, the standard method using long-term simulations and alternative methods based on random sampling (Monte Carlo direct sampling and importance sampling). It is concluded that without crucial influence on the modelling accuracy, the FORM is very applicable as an alternative to traditional long-term simulations of urban drainage systems.
Klingbeil, Brian T; Willig, Michael R
2015-01-01
Effective monitoring programs for biodiversity are needed to assess trends in biodiversity and evaluate the consequences of management. This is particularly true for birds and faunas that occupy interior forest and other areas of low human population density, as these are frequently under-sampled compared to other habitats. For birds, Autonomous Recording Units (ARUs) have been proposed as a supplement or alternative to point counts made by human observers to enhance monitoring efforts. We employed two strategies (i.e., simultaneous-collection and same-season) to compare point count and ARU methods for quantifying species richness and composition of birds in temperate interior forests. The simultaneous-collection strategy compares surveys by ARUs and point counts, with methods matched in time, location, and survey duration such that the person and machine simultaneously collect data. The same-season strategy compares surveys from ARUs and point counts conducted at the same locations throughout the breeding season, but methods differ in the number, duration, and frequency of surveys. This second strategy more closely follows the ways in which monitoring programs are likely to be implemented. Site-specific estimates of richness (but not species composition) differed between methods; however, the nature of the relationship was dependent on the assessment strategy. Estimates of richness from point counts were greater than estimates from ARUs in the simultaneous-collection strategy. Woodpeckers in particular, were less frequently identified from ARUs than point counts with this strategy. Conversely, estimates of richness were lower from point counts than ARUs in the same-season strategy. Moreover, in the same-season strategy, ARUs detected the occurrence of passerines at a higher frequency than did point counts. Differences between ARU and point count methods were only detected in site-level comparisons. Importantly, both methods provide similar estimates of species richness and composition for the region. Consequently, if single visits to sites or short-term monitoring are the goal, point counts will likely perform better than ARUs, especially if species are rare or vocalize infrequently. However, if seasonal or annual monitoring of sites is the goal, ARUs offer a viable alternative to standard point-count methods, especially in the context of large-scale or long-term monitoring of temperate forest birds.
Alonso, Robert S.; McClintock, Brett T.; Lyren, Lisa M.; Boydston, Erin E.; Crooks, Kevin R.
2015-01-01
Abundance estimation of carnivore populations is difficult and has prompted the use of non-invasive detection methods, such as remotely-triggered cameras, to collect data. To analyze photo data, studies focusing on carnivores with unique pelage patterns have utilized a mark-recapture framework and studies of carnivores without unique pelage patterns have used a mark-resight framework. We compared mark-resight and mark-recapture estimation methods to estimate bobcat (Lynx rufus) population sizes, which motivated the development of a new "hybrid" mark-resight model as an alternative to traditional methods. We deployed a sampling grid of 30 cameras throughout the urban southern California study area. Additionally, we physically captured and marked a subset of the bobcat population with GPS telemetry collars. Since we could identify individual bobcats with photos of unique pelage patterns and a subset of the population was physically marked, we were able to use traditional mark-recapture and mark-resight methods, as well as the new “hybrid” mark-resight model we developed to estimate bobcat abundance. We recorded 109 bobcat photos during 4,669 camera nights and physically marked 27 bobcats with GPS telemetry collars. Abundance estimates produced by the traditional mark-recapture, traditional mark-resight, and “hybrid” mark-resight methods were similar, however precision differed depending on the models used. Traditional mark-recapture and mark-resight estimates were relatively imprecise with percent confidence interval lengths exceeding 100% of point estimates. Hybrid mark-resight models produced better precision with percent confidence intervals not exceeding 57%. The increased precision of the hybrid mark-resight method stems from utilizing the complete encounter histories of physically marked individuals (including those never detected by a camera trap) and the encounter histories of naturally marked individuals detected at camera traps. This new estimator may be particularly useful for estimating abundance of uniquely identifiable species that are difficult to sample using camera traps alone.
Statistical Application and Cost Saving in a Dental Survey
Chyou, Po-Huang; Schroeder, Dixie; Schwei, Kelsey; Acharya, Amit
2017-01-01
Objective To effectively achieve a robust survey response rate in a timely manner, an alternative approach to survey distribution, informed by statistical modeling, was applied to efficiently and cost-effectively achieve the targeted rate of return. Design A prospective environmental scan surveying adoption of health information technology utilization within their practices was undertaken in a national pool of dental professionals (N=8000) using an alternative method of sampling. The piloted approach to rate of cohort sampling targeted a response rate of 400 completed surveys from among randomly targeted eligible providers who were contacted using replicated subsampling leveraging mailed surveys. Methods Two replicated subsample mailings (n=1000 surveys/mailings) were undertaken to project the true response rate and estimate the total number of surveys required to achieve the final target. Cost effectiveness and non-response bias analyses were performed. Results The final mailing required approximately 24% fewer mailings compared to targeting of the entire cohort, with a final survey capture exceeding the expected target. An estimated $5000 in cost savings was projected by applying the alternative approach. Non-response analyses found no evidence of bias relative to demographics, practice demographics, or topically-related survey questions. Conclusion The outcome of this pilot study suggests that this approach to survey studies will accomplish targeted enrollment in a cost effective manner. Future studies are needed to validate this approach in the context of other survey studies. PMID:28373286
Outcome-Dependent Sampling Design and Inference for Cox’s Proportional Hazards Model
Yu, Jichang; Liu, Yanyan; Cai, Jianwen; Sandler, Dale P.; Zhou, Haibo
2016-01-01
We propose a cost-effective outcome-dependent sampling (ODS) design for failure time data and develop an efficient inference procedure for data collected with this design. To account for the biased sampling scheme, we derive estimators from a weighted partial likelihood estimating equation. The proposed estimators for regression parameters are shown to be consistent and asymptotically normally distributed. A criterion that can be used to optimally implement the ODS design in practice is proposed and studied. The small sample performance of the proposed method is evaluated by simulation studies. The proposed design and inference procedure are shown to be statistically more powerful than existing alternative designs with the same sample sizes. We illustrate the proposed method with existing real data from the Cancer Incidence and Mortality of Uranium Miners Study. PMID:28090134
Sieve estimation in a Markov illness-death process under dual censoring.
Boruvka, Audrey; Cook, Richard J
2016-04-01
Semiparametric methods are well established for the analysis of a progressive Markov illness-death process observed up to a noninformative right censoring time. However, often the intermediate and terminal events are censored in different ways, leading to a dual censoring scheme. In such settings, unbiased estimation of the cumulative transition intensity functions cannot be achieved without some degree of smoothing. To overcome this problem, we develop a sieve maximum likelihood approach for inference on the hazard ratio. A simulation study shows that the sieve estimator offers improved finite-sample performance over common imputation-based alternatives and is robust to some forms of dependent censoring. The proposed method is illustrated using data from cancer trials. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Nunes, Rita G; Hajnal, Joseph V
2018-06-01
Point spread function (PSF) mapping enables estimating the displacement fields required for distortion correction of echo planar images. Recently, a highly accelerated approach was introduced for estimating displacements from the phase slope of under-sampled PSF mapping data. Sampling schemes with varying spacing were proposed requiring stepwise phase unwrapping. To avoid unwrapping errors, an alternative approach applying the concept of finite rate of innovation to PSF mapping (FRIP) is introduced, using a pattern search strategy to locate the PSF peak, and the two methods are compared. Fully sampled PSF data was acquired in six subjects at 3.0 T, and distortion maps were estimated after retrospective under-sampling. The two methods were compared for both previously published and newly optimized sampling patterns. Prospectively under-sampled data were also acquired. Shift maps were estimated and deviations relative to the fully sampled reference map were calculated. The best performance was achieved when using FRIP with a previously proposed sampling scheme. The two methods were comparable for the remaining schemes. The displacement field errors tended to be lower as the number of samples or their spacing increased. A robust method for estimating the position of the PSF peak has been introduced.
Language evolution and human history: what a difference a date makes.
Gray, Russell D; Atkinson, Quentin D; Greenhill, Simon J
2011-04-12
Historical inference is at its most powerful when independent lines of evidence can be integrated into a coherent account. Dating linguistic and cultural lineages can potentially play a vital role in the integration of evidence from linguistics, anthropology, archaeology and genetics. Unfortunately, although the comparative method in historical linguistics can provide a relative chronology, it cannot provide absolute date estimates and an alternative approach, called glottochronology, is fundamentally flawed. In this paper we outline how computational phylogenetic methods can reliably estimate language divergence dates and thus help resolve long-standing debates about human prehistory ranging from the origin of the Indo-European language family to the peopling of the Pacific.
Language evolution and human history: what a difference a date makes
Gray, Russell D.; Atkinson, Quentin D.; Greenhill, Simon J.
2011-01-01
Historical inference is at its most powerful when independent lines of evidence can be integrated into a coherent account. Dating linguistic and cultural lineages can potentially play a vital role in the integration of evidence from linguistics, anthropology, archaeology and genetics. Unfortunately, although the comparative method in historical linguistics can provide a relative chronology, it cannot provide absolute date estimates and an alternative approach, called glottochronology, is fundamentally flawed. In this paper we outline how computational phylogenetic methods can reliably estimate language divergence dates and thus help resolve long-standing debates about human prehistory ranging from the origin of the Indo-European language family to the peopling of the Pacific. PMID:21357231
Eberhard, Wynn L
2017-04-01
The maximum likelihood estimator (MLE) is derived for retrieving the extinction coefficient and zero-range intercept in the lidar slope method in the presence of random and independent Gaussian noise. Least-squares fitting, weighted by the inverse of the noise variance, is equivalent to the MLE. Monte Carlo simulations demonstrate that two traditional least-squares fitting schemes, which use different weights, are less accurate. Alternative fitting schemes that have some positive attributes are introduced and evaluated. The principal factors governing accuracy of all these schemes are elucidated. Applying these schemes to data with Poisson rather than Gaussian noise alters accuracy little, even when the signal-to-noise ratio is low. Methods to estimate optimum weighting factors in actual data are presented. Even when the weighting estimates are coarse, retrieval accuracy declines only modestly. Mathematical tools are described for predicting retrieval accuracy. Least-squares fitting with inverse variance weighting has optimum accuracy for retrieval of parameters from single-wavelength lidar measurements when noise, errors, and uncertainties are Gaussian distributed, or close to optimum when only approximately Gaussian.
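A minimal sketch of the slope method with inverse-variance weighting, which the abstract identifies as equivalent to the MLE under Gaussian noise: the range-corrected log signal is fit as a line in range, and the extinction coefficient follows from the slope. The homogeneous-path signal model, noise level, and constants below are synthetic assumptions, not the paper's data.

```python
import numpy as np

def slope_method_fit(r, signal, noise_std):
    """Extinction coefficient and zero-range intercept from lidar data.

    Assumes a homogeneous path so the range-corrected log signal is linear:
        ln(P * r^2) = ln(C) - 2 * sigma * r
    Weighted least squares with weights 1/std of the log signal (i.e.
    inverse-variance weighting) is used for the linear fit.
    """
    y = np.log(signal * r ** 2)
    sigma_y = noise_std / signal                   # noise propagated into log space
    slope, intercept = np.polyfit(r, y, 1, w=1.0 / sigma_y)
    return -slope / 2.0, np.exp(intercept)

rng = np.random.default_rng(0)
r = np.linspace(200.0, 2000.0, 100)
true_ext = 1e-3                                    # m^-1
p = 1e6 * np.exp(-2 * true_ext * r) / r ** 2
noise = 0.02 * p
obs = p + rng.normal(0, noise)
print(slope_method_fit(r, obs, noise))             # extinction ~ 1e-3 m^-1
```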
NASA Astrophysics Data System (ADS)
Lindley, S. J.; Longhurst, J. W. S.; Watson, A. F. R.; Conlan, D. E.
This paper considers the value of applying an alternative pro rata methodology to the estimation of atmospheric emissions from a given regional or local area. Such investigations into less time and resource intensive means of providing estimates in comparison to traditional methods are important due to the potential role of new methods in the development of air quality management plans. A pro rata approach is used here to estimate emissions of SO 2, NO x, CO, CO 2, VOCs and black smoke from all sources and Pb from transportation for the North West region of England. This method has the advantage of using readily available data as well as being an easily repeatable procedure which provides a good indication of emissions to be expected from a particular geographical region. This can then provide the impetus for further emission studies and ultimately a regional/local air quality management plan. Results suggest that between 1987 and 1991 trends in the emissions of the pollutants considered have been less favourable in the North West region than in the nation as a whole.
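The pro rata idea reduces to scaling a national emission total by the region's share of an appropriate activity surrogate, as in the toy calculation below. All figures and surrogate choices are placeholders, not values from the study.

```python
# Pro rata (top-down) emission estimate: scale a national total by the
# region's share of a suitable activity surrogate.  Illustrative numbers only.
national_emissions_kt = {"SO2": 3500.0, "NOx": 2700.0, "CO": 6500.0}
surrogate_share = {       # assumed regional share of the national surrogate
    "SO2": 0.11,          # e.g. share of industrial fuel consumption
    "NOx": 0.12,          # e.g. share of vehicle-kilometres travelled
    "CO":  0.12,
}
regional = {p: national_emissions_kt[p] * surrogate_share[p]
            for p in national_emissions_kt}
print(regional)           # kt per year attributed to the region
```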
Arctic curves in path models from the tangent method
NASA Astrophysics Data System (ADS)
Di Francesco, Philippe; Lapa, Matthew F.
2018-04-01
Recently, Colomo and Sportiello introduced a powerful method, known as the tangent method, for computing the arctic curve in statistical models which have a (non- or weakly-) intersecting lattice path formulation. We apply the tangent method to compute arctic curves in various models: the domino tiling of the Aztec diamond for which we recover the celebrated arctic circle; a model of Dyck paths equivalent to the rhombus tiling of a half-hexagon for which we find an arctic half-ellipse; another rhombus tiling model with an arctic parabola; the vertically symmetric alternating sign matrices, where we find the same arctic curve as for unconstrained alternating sign matrices. The latter case involves lattice paths that are non-intersecting but that are allowed to have osculating contact points, for which the tangent method was argued to still apply. For each problem we estimate the large size asymptotics of a certain one-point function using LU decomposition of the corresponding Gessel–Viennot matrices, and a reformulation of the result amenable to asymptotic analysis.
Model-Based Localization and Tracking Using Bluetooth Low-Energy Beacons
Cemgil, Ali Taylan
2017-01-01
We introduce a high precision localization and tracking method that makes use of cheap Bluetooth low-energy (BLE) beacons only. We track the position of a moving sensor by integrating highly unreliable and noisy BLE observations streaming from multiple locations. A novel aspect of our approach is the development of an observation model, specifically tailored for received signal strength indicator (RSSI) fingerprints: a combination based on the optimal transport model of Wasserstein distance. The tracking results of the entire system are compared with alternative baseline estimation methods, such as nearest neighboring fingerprints and an artificial neural network. Our results show that highly accurate estimation from noisy Bluetooth data is practically feasible with an observation model based on Wasserstein distance interpolation combined with the sequential Monte Carlo (SMC) method for tracking. PMID:29109375
Model-Based Localization and Tracking Using Bluetooth Low-Energy Beacons.
Daniş, F Serhan; Cemgil, Ali Taylan
2017-10-29
We introduce a high precision localization and tracking method that makes use of cheap Bluetooth low-energy (BLE) beacons only. We track the position of a moving sensor by integrating highly unreliable and noisy BLE observations streaming from multiple locations. A novel aspect of our approach is the development of an observation model, specifically tailored for received signal strength indicator (RSSI) fingerprints: a combination based on the optimal transport model of Wasserstein distance. The tracking results of the entire system are compared with alternative baseline estimation methods, such as nearest neighboring fingerprints and an artificial neural network. Our results show that highly accurate estimation from noisy Bluetooth data is practically feasible with an observation model based on Wasserstein distance interpolation combined with the sequential Monte Carlo (SMC) method for tracking.
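As a concrete illustration of the Wasserstein-distance observation model, the sketch below scores a window of hypothetical RSSI samples against two stored fingerprints using SciPy's one-dimensional Wasserstein distance; in the paper this distance feeds a sequential Monte Carlo tracker rather than the simple nearest-fingerprint decision shown here. All RSSI values are made up.

```python
import numpy as np
from scipy.stats import wasserstein_distance

# Hypothetical RSSI fingerprints (dBm samples) for one beacon at two
# calibrated grid positions, plus a new observation window to be matched.
fingerprint_a = np.array([-71, -73, -70, -74, -72, -75, -71])
fingerprint_b = np.array([-83, -80, -85, -81, -84, -82, -86])
observation   = np.array([-72, -74, -70, -73, -75])

# Smaller earth mover's distance => better match between distributions.
for name, fp in [("A", fingerprint_a), ("B", fingerprint_b)]:
    print(name, wasserstein_distance(fp, observation))
```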
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rajshekhar, G.; Gorthi, Sai Siva; Rastogi, Pramod
2009-09-15
Measurement of strain, curvature, and twist of a deformed object plays an important role in deformation analysis. Strain depends on the first order displacement derivative, whereas curvature and twist are determined by second order displacement derivatives. This paper proposes a pseudo-Wigner-Ville distribution based method for measurement of strain, curvature, and twist in digital holographic interferometry where the object deformation or displacement is encoded as interference phase. In the proposed method, the phase derivative is estimated by peak detection of pseudo-Wigner-Ville distribution evaluated along each row/column of the reconstructed interference field. A complex exponential signal with unit amplitude and the phase derivative estimate as the argument is then generated and the pseudo-Wigner-Ville distribution along each row/column of this signal is evaluated. The curvature is estimated by using peak tracking strategy for the new distribution. For estimation of twist, the pseudo-Wigner-Ville distribution is evaluated along each column/row (i.e., in alternate direction with respect to the previous one) for the generated complex exponential signal and the corresponding peak detection gives the twist estimate.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kane, V.E.
1979-10-01
The standard maximum likelihood and moment estimation procedures are shown to have some undesirable characteristics for estimating the parameters in a three-parameter lognormal distribution. A class of goodness-of-fit estimators is found which provides a useful alternative to the standard methods. The class of goodness-of-fit tests considered includes the Shapiro-Wilk and Shapiro-Francia tests which reduce to a weighted linear combination of the order statistics that can be maximized in estimation problems. The weighted-order statistic estimators are compared to the standard procedures in Monte Carlo simulations. Bias and robustness of the procedures are examined and example data sets analyzed including geochemical data from the National Uranium Resource Evaluation Program.
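A goodness-of-fit estimator of this kind can be sketched by choosing the threshold (location) parameter that maximizes the Shapiro-Wilk W statistic of the shifted log data, with the scale and shape parameters then following from the shifted logs. The grid search, its limits, and the simulated data below are illustrative assumptions, not the report's procedure or data.

```python
import numpy as np
from scipy import stats

def fit_lognormal3_sw(x, n_grid=200):
    """Three-parameter lognormal fit: the threshold gamma is the value that
    maximizes the Shapiro-Wilk W of log(x - gamma); mu and sigma then follow
    from the shifted logs.  Grid limits are an arbitrary assumption."""
    span = x.max() - x.min()
    gammas = np.linspace(x.min() - span, x.min() - 1e-6 * span, n_grid)
    w = [stats.shapiro(np.log(x - g))[0] for g in gammas]
    gamma = gammas[int(np.argmax(w))]
    logs = np.log(x - gamma)
    return gamma, logs.mean(), logs.std(ddof=1)

rng = np.random.default_rng(1)
x = 5.0 + rng.lognormal(mean=1.0, sigma=0.5, size=200)   # true threshold = 5
print(fit_lognormal3_sw(x))
```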
Carnegie, Nicole Bohme
2011-04-15
The incidence of new infections is a key measure of the status of the HIV epidemic, but accurate measurement of incidence is often constrained by limited data. Karon et al. (Statist. Med. 2008; 27:4617–4633) developed a model to estimate the incidence of HIV infection from surveillance data with biologic testing for recent infection for newly diagnosed cases. This method has been implemented by public health departments across the United States and is behind the new national incidence estimates, which are about 40 per cent higher than previous estimates. We show that the delta method approximation given for the variance of the estimator is incomplete, leading to an inflated variance estimate. This contributes to the generation of overly conservative confidence intervals, potentially obscuring important differences between populations. We demonstrate via simulation that an innovative model-based bootstrap method using the specified model for the infection and surveillance process improves confidence interval coverage and adjusts for the bias in the point estimate. Confidence interval coverage is about 94–97 per cent after correction, compared with 96–99 per cent before. The simulated bias in the estimate of incidence ranges from −6.3 to +14.6 per cent under the original model but is consistently under 1 per cent after correction by the model-based bootstrap. In an application to data from King County, Washington in 2007 we observe correction of 7.2 per cent relative bias in the incidence estimate and a 66 per cent reduction in the width of the 95 per cent confidence interval using this method. We provide open-source software to implement the method that can also be extended for alternate models.
NASA Astrophysics Data System (ADS)
Zhang, Wen-Yan; Lin, Chao-Yuan
2017-04-01
The Soil Conservation Service Curve Number (SCS-CN) method, which was originally developed by the USDA Natural Resources Conservation Service, is widely used to estimate direct runoff volume from rainfall. The runoff Curve Number (CN) parameter is based on the hydrologic soil group and land use factors. In Taiwan, the national land use maps were interpreted from aerial photos in 1995 and 2008. Rapid updating of post-disaster land use maps is limited by the high cost of production, so classification of satellite images is an alternative way to obtain the land use map. In this study, the Normalized Difference Vegetation Index (NDVI) in Chen-You-Lan Watershed was derived from dry- and wet-season Landsat imagery acquired during 2003-2008. Land covers were interpreted from the mean value and standard deviation of NDVI and were categorized into four groups: forest, grassland, agriculture, and bare land. The runoff volumes of typhoon events during 2005-2009 were then estimated using the SCS-CN method and verified against measured runoff data. The result showed that the model efficiency coefficient is 90.77%. Therefore, estimating runoff by using a land cover map classified from satellite images is practicable.
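For reference, the standard SCS-CN runoff relation used in the study is reproduced below in metric form. The curve numbers assigned to the four cover classes are illustrative assumptions, not the calibrated values for the Chen-You-Lan Watershed.

```python
def scs_cn_runoff(P_mm, CN):
    """Direct runoff depth Q (mm) from storm rainfall P (mm) using the
    standard SCS-CN relations with initial abstraction Ia = 0.2*S:
        S = 25400/CN - 254            [mm]
        Q = (P - Ia)^2 / (P + 0.8*S)  for P > Ia, else 0
    """
    S = 25400.0 / CN - 254.0
    Ia = 0.2 * S
    return 0.0 if P_mm <= Ia else (P_mm - Ia) ** 2 / (P_mm + 0.8 * S)

# Illustrative CN values only; a real application derives CN from the
# hydrologic soil group and the land cover classified from the imagery.
for cover, cn in [("forest", 60), ("grassland", 71),
                  ("agriculture", 78), ("bare land", 88)]:
    print(cover, round(scs_cn_runoff(150.0, cn), 1), "mm")
```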
Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.
2013-01-01
When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, CE, of measurement errors for estimating the negative log likelihood function common to all the model selection criteria. This problem can be resolved by using the covariance matrix, Cek, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown Cek from the residuals during model calibration. The inferred Cek was then used in the evaluation of model selection criteria and model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using Cek resolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models. Using Cek obtained from the iterative two-stage method also improved predictive performance of the individual models and model averaging in both synthetic and experimental studies.
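The model-averaging weights referred to above follow the standard information-criterion transformation, sketched below; the IC values are made up to show how a modest IC gap spreads weight across models while a large gap concentrates essentially all weight on one model, the situation the authors criticize.

```python
import numpy as np

def ic_weights(ic_values):
    """Model-averaging weights from information criterion values
    (AIC/AICc/BIC/KIC all use the same transformation):
        w_i = exp(-0.5 * (IC_i - IC_min)) / sum_j exp(-0.5 * (IC_j - IC_min))
    """
    ic = np.asarray(ic_values, dtype=float)
    w = np.exp(-0.5 * (ic - ic.min()))
    return w / w.sum()

print(ic_weights([230.0, 232.5, 234.1]))   # weight spread across models
print(ic_weights([230.0, 270.0, 290.0]))   # best model gets essentially all weight
```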
Young, Robin L; Weinberg, Janice; Vieira, Verónica; Ozonoff, Al; Webster, Thomas F
2010-07-19
A common, important problem in spatial epidemiology is measuring and identifying variation in disease risk across a study region. In application of statistical methods, the problem has two parts. First, spatial variation in risk must be detected across the study region and, second, areas of increased or decreased risk must be correctly identified. The location of such areas may give clues to environmental sources of exposure and disease etiology. One statistical method applicable in spatial epidemiologic settings is a generalized additive model (GAM) which can be applied with a bivariate LOESS smoother to account for geographic location as a possible predictor of disease status. A natural hypothesis when applying this method is whether residential location of subjects is associated with the outcome, i.e. is the smoothing term necessary? Permutation tests are a reasonable hypothesis testing method and provide adequate power under a simple alternative hypothesis. These tests have yet to be compared to other spatial statistics. This research uses simulated point data generated under three alternative hypotheses to evaluate the properties of the permutation methods and compare them to the popular spatial scan statistic in a case-control setting. Case 1 was a single circular cluster centered in a circular study region. The spatial scan statistic had the highest power though the GAM method estimates did not fall far behind. Case 2 was a single point source located at the center of a circular cluster and Case 3 was a line source at the center of the horizontal axis of a square study region. Each had linearly decreasing logodds with distance from the point. The GAM methods outperformed the scan statistic in Cases 2 and 3. Comparing sensitivity, measured as the proportion of the exposure source correctly identified as high or low risk, the GAM methods outperformed the scan statistic in all three Cases. The GAM permutation testing methods provide a regression-based alternative to the spatial scan statistic. Across all hypotheses examined in this research, the GAM methods had competing or greater power estimates and sensitivities exceeding that of the spatial scan statistic.
2010-01-01
Background A common, important problem in spatial epidemiology is measuring and identifying variation in disease risk across a study region. In application of statistical methods, the problem has two parts. First, spatial variation in risk must be detected across the study region and, second, areas of increased or decreased risk must be correctly identified. The location of such areas may give clues to environmental sources of exposure and disease etiology. One statistical method applicable in spatial epidemiologic settings is a generalized additive model (GAM) which can be applied with a bivariate LOESS smoother to account for geographic location as a possible predictor of disease status. A natural hypothesis when applying this method is whether residential location of subjects is associated with the outcome, i.e. is the smoothing term necessary? Permutation tests are a reasonable hypothesis testing method and provide adequate power under a simple alternative hypothesis. These tests have yet to be compared to other spatial statistics. Results This research uses simulated point data generated under three alternative hypotheses to evaluate the properties of the permutation methods and compare them to the popular spatial scan statistic in a case-control setting. Case 1 was a single circular cluster centered in a circular study region. The spatial scan statistic had the highest power though the GAM method estimates did not fall far behind. Case 2 was a single point source located at the center of a circular cluster and Case 3 was a line source at the center of the horizontal axis of a square study region. Each had linearly decreasing logodds with distance from the point. The GAM methods outperformed the scan statistic in Cases 2 and 3. Comparing sensitivity, measured as the proportion of the exposure source correctly identified as high or low risk, the GAM methods outperformed the scan statistic in all three Cases. Conclusions The GAM permutation testing methods provide a regression-based alternative to the spatial scan statistic. Across all hypotheses examined in this research, the GAM methods had competing or greater power estimates and sensitivities exceeding that of the spatial scan statistic. PMID:20642827
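The permutation-testing machinery compared in this study can be sketched generically: compute a spatial statistic on the observed case/control labels and compare it with its distribution under random relabelling. The statistic below (mean case distance to a hypothetical point source) is a simple stand-in for the GAM deviance or scan likelihood ratio actually used; the coordinates and labels are simulated assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def perm_test(stat_fn, coords, labels, n_perm=999):
    """Generic label-permutation test: the observed statistic is compared with
    its distribution under random reassignment of case/control labels."""
    observed = stat_fn(coords, labels)
    perm = np.array([stat_fn(coords, rng.permutation(labels))
                     for _ in range(n_perm)])
    p = (1 + np.sum(perm <= observed)) / (n_perm + 1)   # lower stat = cases closer
    return observed, p

def mean_case_distance_to_source(coords, labels, source=(0.0, 0.0)):
    d = np.hypot(coords[:, 0] - source[0], coords[:, 1] - source[1])
    return d[labels == 1].mean()

# Synthetic data: cases (label 1) drawn closer to the source at the origin.
controls = rng.uniform(-1, 1, size=(200, 2))
cases = rng.normal(0, 0.3, size=(100, 2))
coords = np.vstack([controls, cases])
labels = np.r_[np.zeros(200, int), np.ones(100, int)]
print(perm_test(mean_case_distance_to_source, coords, labels))
```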
Lincoln estimates of mallard (Anas platyrhynchos) abundance in North America.
Alisauskas, Ray T; Arnold, Todd W; Leafloor, James O; Otis, David L; Sedinger, James S
2014-01-01
Estimates of range-wide abundance, harvest, and harvest rate are fundamental for sound inferences about the role of exploitation in the dynamics of free-ranging wildlife populations, but reliability of existing survey methods for abundance estimation is rarely assessed using alternative approaches. North American mallard populations have been surveyed each spring since 1955 using internationally coordinated aerial surveys, but population size can also be estimated with Lincoln's method using banding and harvest data. We estimated late summer population size of adult and juvenile male and female mallards in western, midcontinent, and eastern North America using Lincoln's method of dividing (i) total estimated harvest by estimated harvest rate, calculated as (ii) direct band recovery rate divided by (iii) band reporting rate. Our goal was to compare estimates based on Lincoln's method with traditional estimates based on aerial surveys. Lincoln estimates of adult males and females alive in the period June-September were 4.0 (range: 2.5-5.9), 1.8 (range: 0.6-3.0), and 1.8 (range: 1.3-2.7) times larger than respective aerial survey estimates for the western, midcontinent, and eastern mallard populations, and the two population estimates were only modestly correlated with each other (western: r = 0.70, 1993-2011; midcontinent: r = 0.54, 1961-2011; eastern: r = 0.50, 1993-2011). Higher Lincoln estimates are predictable given that the geographic scope of inference from Lincoln estimates is the entire population range, whereas sampling frames for aerial surveys are incomplete. Although each estimation method has a number of important potential biases, our review suggests that underestimation of total population size by aerial surveys is the most likely explanation. In addition to providing measures of total abundance, Lincoln's method provides estimates of fecundity and population sex ratio and could be used in integrated population models to provide greater insights about population dynamics and management of North American mallards and most other harvested species.
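Lincoln's method as described above reduces to a one-line calculation, sketched below with purely illustrative inputs; the study's actual harvest, recovery, and reporting estimates differ by population, cohort, and year.

```python
def lincoln_estimate(total_harvest, band_recovery_rate, band_reporting_rate):
    """Lincoln's method as described in the abstract:
        harvest rate  h = f / rho   (direct recovery rate / reporting rate)
        abundance     N = H / h
    Input values in the example below are purely illustrative."""
    harvest_rate = band_recovery_rate / band_reporting_rate
    return total_harvest / harvest_rate

# e.g. 1.0 million birds harvested, 6% direct recovery, 80% band reporting
print(lincoln_estimate(1.0e6, 0.06, 0.80) / 1e6, "million birds")
```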
Coates, Jennifer; Colaiezzi, Brooke; Fiedler, John L; Wirth, James; Lividini, Keith; Rogers, Beatrice
2012-09-01
Dietary assessment data are essential for designing, monitoring, and evaluating food fortification and other food-based nutrition programs. Planners and managers must understand the validity, usefulness, and cost tradeoffs of employing alternative dietary assessment methods, but little guidance exists. To identify and apply criteria to assess the tradeoffs of using alternative dietary methods for meeting fortification programming needs. Twenty-five semistructured expert interviews were conducted and literature was reviewed for information on the validity, usefulness, and cost of using 24-hour recalls, Food Frequency Questionnaires/Fortification Rapid Assessment Tool (FFQ/FRAT), Food Balance Sheets (FBS), and Household Consumption and Expenditures Surveys (HCES) for program stage-specific information needs. Criteria were developed and applied to construct relative rankings of the four methods. Needs assessment: HCES offers the greatest suitability at the lowest cost for estimating the risk of inadequate intakes, but relative to 24-hour recall compromises validity. HCES should be used to identify vehicles and to estimate coverage and likely impact due to its low cost and moderate-to-high validity. Baseline assessment: 24-hour recall should be applied using a representative sample. Monitoring: A simple, low-cost FFQ can be used to monitor coverage. Impact evaluation: 24-hour recall should be used to assess changes in nutrient intakes. FBS have low validity relative to other methods for all programmatic purposes. Each dietary assessment method has strengths and weaknesses that vary by context and purpose. Method selection must be driven by the program's data needs, the suitability of the methods for the purpose, and a clear understanding of the tradeoffs involved.
Alternative Statistical Frameworks for Student Growth Percentile Estimation
ERIC Educational Resources Information Center
Lockwood, J. R.; Castellano, Katherine E.
2015-01-01
This article suggests two alternative statistical approaches for estimating student growth percentiles (SGP). The first is to estimate percentile ranks of current test scores conditional on past test scores directly, by modeling the conditional cumulative distribution functions, rather than indirectly through quantile regressions. This would…
Peng, Mei; Jaeger, Sara R; Hautus, Michael J
2014-03-01
Psychometric functions are predominantly used for estimating detection thresholds in vision and audition. However, the requirement of large data quantities for fitting psychometric functions (>30 replications) reduces their suitability in olfactory studies because olfactory response data are often limited (<4 replications) due to the susceptibility of human olfactory receptors to fatigue and adaptation. This article introduces a new method for fitting individual-judge psychometric functions to olfactory data obtained using the current standard protocol, American Society for Testing and Materials (ASTM) E679. The slope parameter of the individual-judge psychometric function is fixed to be the same as that of the group function; the same-shaped symmetrical sigmoid function is fitted only using the intercept. This study evaluated the proposed method by comparing it with 2 available methods. Comparison to conventional psychometric functions (fitted slope and intercept) indicated that the assumption of a fixed slope did not compromise precision of the threshold estimates. No systematic difference was obtained between the proposed method and the ASTM method in terms of group threshold estimates or threshold distributions, but there were changes in the rank, by threshold, of judges in the group. Overall, the fixed-slope psychometric function is recommended for obtaining relatively reliable individual threshold estimates when the quantity of data is limited.
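A sketch of the fixed-slope idea: fit slope and intercept to the pooled group data, then refit each judge with only the intercept (threshold) free. The 3-AFC guessing floor, concentration steps, and proportions correct below are assumptions for illustration, not data or parameter values from the article.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import expit

def psychometric(log_c, intercept, slope):
    # proportion correct in a 3-AFC style task: 1/3 guessing floor assumed
    return 1/3 + (2/3) * expit(slope * (log_c - intercept))

# Hypothetical group data: proportion correct at each log10 concentration step.
log_c = np.array([-3.0, -2.5, -2.0, -1.5, -1.0, -0.5])
group_pc = np.array([0.36, 0.40, 0.55, 0.78, 0.92, 0.98])
(group_int, group_slope), _ = curve_fit(psychometric, log_c, group_pc,
                                        p0=[-1.8, 3.0])

# Individual judge: few replicates, so only the intercept (threshold) is free
# and the slope is fixed at the group value, as in the proposed method.
judge_pc = np.array([0.33, 0.33, 0.45, 0.70, 0.95, 1.00])
fixed = lambda x, intercept: psychometric(x, intercept, group_slope)
(judge_threshold,), _ = curve_fit(fixed, log_c, judge_pc, p0=[group_int])
print(group_int, group_slope, judge_threshold)
```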
How bandwidth selection algorithms impact exploratory data analysis using kernel density estimation.
Harpole, Jared K; Woods, Carol M; Rodebaugh, Thomas L; Levinson, Cheri A; Lenze, Eric J
2014-09-01
Exploratory data analysis (EDA) can reveal important features of underlying distributions, and these features often have an impact on inferences and conclusions drawn from data. Graphical analysis is central to EDA, and graphical representations of distributions often benefit from smoothing. A viable method of estimating and graphing the underlying density in EDA is kernel density estimation (KDE). This article provides an introduction to KDE and examines alternative methods for specifying the smoothing bandwidth in terms of their ability to recover the true density. We also illustrate the comparison and use of KDE methods with 2 empirical examples. Simulations were carried out in which we compared 8 bandwidth selection methods (Sheather-Jones plug-in [SJDP], normal rule of thumb, Silverman's rule of thumb, least squares cross-validation, biased cross-validation, and 3 adaptive kernel estimators) using 5 true density shapes (standard normal, positively skewed, bimodal, skewed bimodal, and standard lognormal) and 9 sample sizes (15, 25, 50, 75, 100, 250, 500, 1,000, 2,000). Results indicate that, overall, SJDP outperformed all methods. However, for smaller sample sizes (25 to 100) either biased cross-validation or Silverman's rule of thumb was recommended, and for larger sample sizes the adaptive kernel estimator with SJDP was recommended. Information is provided about implementing the recommendations in the R computing language. PsycINFO Database Record (c) 2014 APA, all rights reserved.
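Bandwidth choice can be explored directly with SciPy's Gaussian KDE, as in the sketch below. Note that the Sheather-Jones plug-in evaluated in the article is not built into SciPy and would come from a third-party package (e.g., KDEpy's improved Sheather-Jones), so the comparison here is only between Silverman's rule and an arbitrary narrower bandwidth on one of the density shapes used in the simulations (bimodal).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(-2, 0.7, 300), rng.normal(2, 0.7, 200)])  # bimodal

# Silverman's rule of thumb vs. a deliberately narrower (fixed) bandwidth factor.
kde_silverman = stats.gaussian_kde(x, bw_method="silverman")
kde_narrow = stats.gaussian_kde(x, bw_method=0.15)

grid = np.linspace(-5, 5, 11)
print(np.round(kde_silverman(grid), 3))
print(np.round(kde_narrow(grid), 3))
```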
Li, Xinpeng; Li, Hong; Liu, Yun; Xiong, Wei; Fang, Sheng
2018-03-05
The release rate of atmospheric radionuclide emissions is a critical factor in the emergency response to nuclear accidents. However, there are unavoidable biases in radionuclide transport models, leading to inaccurate estimates. In this study, a method that simultaneously corrects these biases and estimates the release rate is developed. Our approach provides a more complete measurement-by-measurement correction of the biases with a coefficient matrix that considers both deterministic and stochastic deviations. This matrix and the release rate are jointly solved by the alternating minimization algorithm. The proposed method is generic because it does not rely on specific features of transport models or scenarios. It is validated against wind tunnel experiments that simulate accidental releases in a heterogeneous and densely built nuclear power plant site. The sensitivities to the position, number, and quality of measurements and extendibility of the method are also investigated. The results demonstrate that this method effectively corrects the model biases, and therefore outperforms Tikhonov's method in both release rate estimation and model prediction. The proposed approach is robust to uncertainties and extendible with various center estimators, thus providing a flexible framework for robust source inversion in real accidents, even if large uncertainties exist in multiple factors. Copyright © 2017 Elsevier B.V. All rights reserved.
Quantifying Uncertainty in Near Surface Electromagnetic Imaging Using Bayesian Methods
NASA Astrophysics Data System (ADS)
Blatter, D. B.; Ray, A.; Key, K.
2017-12-01
Geoscientists commonly use electromagnetic methods to image the Earth's near surface. Field measurements of EM fields are made (often with the aid of an artificial EM source) and then used to infer near surface electrical conductivity via a process known as inversion. In geophysics, the standard inversion tool kit is robust and can provide an estimate of the Earth's near surface conductivity that is both geologically reasonable and compatible with the measured field data. However, standard inverse methods struggle to provide a sense of the uncertainty in the estimate they provide. This is because the task of finding an Earth model that explains the data to within measurement error is non-unique - that is, there are many, many such models; but the standard methods provide only one "answer." An alternative method, known as Bayesian inversion, seeks to explore the full range of Earth model parameters that can adequately explain the measured data, rather than attempting to find a single, "ideal" model. Bayesian inverse methods can therefore provide a quantitative assessment of the uncertainty inherent in trying to infer near surface conductivity from noisy, measured field data. This study applies a Bayesian inverse method (called trans-dimensional Markov chain Monte Carlo) to transient airborne EM data previously collected over Taylor Valley - one of the McMurdo Dry Valleys in Antarctica. Our results confirm the reasonableness of previous estimates (made using standard methods) of near surface conductivity beneath Taylor Valley. In addition, we demonstrate quantitatively the uncertainty associated with those estimates. We demonstrate that Bayesian inverse methods can provide quantitative uncertainty to estimates of near surface conductivity.
Development, implementation and evaluation of satellite-aided agricultural monitoring systems
NASA Technical Reports Server (NTRS)
Cicone, R. (Principal Investigator); Crist, E.; Metzler, M.; Parris, T.
1982-01-01
Research supporting the use of remote sensing for inventory and assessment of agricultural commodities is summarized. Three task areas are described: (1) corn and soybean crop spectral/temporal signature characterization; (2) efficient area estimation technology development; and (3) advanced satellite and sensor system definition. Studies include an assessment of alternative green measures from MSS variables; the evaluation of alternative methods for identifying, labeling, or classifying targets in an automated procedural context; a comparison of MSS, the advanced very high resolution radiometer, and the coastal zone color scanner; as well as a critical assessment of thematic mapper dimensionality and spectral structure.
Uncovering state-dependent relationships in shallow lakes using Bayesian latent variable regression.
Vitense, Kelsey; Hanson, Mark A; Herwig, Brian R; Zimmer, Kyle D; Fieberg, John
2018-03-01
Ecosystems sometimes undergo dramatic shifts between contrasting regimes. Shallow lakes, for instance, can transition between two alternative stable states: a clear state dominated by submerged aquatic vegetation and a turbid state dominated by phytoplankton. Theoretical models suggest that critical nutrient thresholds differentiate three lake types: highly resilient clear lakes, lakes that may switch between clear and turbid states following perturbations, and highly resilient turbid lakes. For effective and efficient management of shallow lakes and other systems, managers need tools to identify critical thresholds and state-dependent relationships between driving variables and key system features. Using shallow lakes as a model system for which alternative stable states have been demonstrated, we developed an integrated framework using Bayesian latent variable regression (BLR) to classify lake states, identify critical total phosphorus (TP) thresholds, and estimate steady state relationships between TP and chlorophyll a (chl a) using cross-sectional data. We evaluated the method using data simulated from a stochastic differential equation model and compared its performance to k-means clustering with regression (KMR). We also applied the framework to data comprising 130 shallow lakes. For simulated data sets, BLR had high state classification rates (median/mean accuracy >97%) and accurately estimated TP thresholds and state-dependent TP-chl a relationships. Classification and estimation improved with increasing sample size and decreasing noise levels. Compared to KMR, BLR had higher classification rates and better approximated the TP-chl a steady state relationships and TP thresholds. We fit the BLR model to three different years of empirical shallow lake data, and managers can use the estimated bifurcation diagrams to prioritize lakes for management according to their proximity to thresholds and chance of successful rehabilitation. Our model improves upon previous methods for shallow lakes because it allows classification and regression to occur simultaneously and inform one another, directly estimates TP thresholds and the uncertainty associated with thresholds and state classifications, and enables meaningful constraints to be built into models. The BLR framework is broadly applicable to other ecosystems known to exhibit alternative stable states in which regression can be used to establish relationships between driving variables and state variables. © 2017 by the Ecological Society of America.
Makeyev, Oleksandr; Besio, Walter G.
2016-01-01
Noninvasive concentric ring electrodes are a promising alternative to conventional disc electrodes. Currently, the superiority of tripolar concentric ring electrodes over disc electrodes, in particular, in accuracy of Laplacian estimation, has been demonstrated in a range of applications. In our recent work, we have shown that accuracy of Laplacian estimation can be improved with multipolar concentric ring electrodes using a general approach to estimation of the Laplacian for an (n + 1)-polar electrode with n rings using the (4n + 1)-point method for n ≥ 2. This paper takes the next step toward further improving the Laplacian estimate by proposing novel variable inter-ring distances concentric ring electrodes. Derived using a modified (4n + 1)-point method, linearly increasing and decreasing inter-ring distances tripolar (n = 2) and quadripolar (n = 3) electrode configurations are compared to their constant inter-ring distances counterparts. Finite element method modeling and analytic results are consistent and suggest that increasing inter-ring distances electrode configurations may decrease the truncation error resulting in more accurate Laplacian estimates compared to respective constant inter-ring distances configurations. For currently used tripolar electrode configuration, the truncation error may be decreased more than two-fold, while for the quadripolar configuration more than a six-fold decrease is expected. PMID:27294933
Makeyev, Oleksandr; Besio, Walter G
2016-06-10
Noninvasive concentric ring electrodes are a promising alternative to conventional disc electrodes. Currently, the superiority of tripolar concentric ring electrodes over disc electrodes, in particular, in accuracy of Laplacian estimation, has been demonstrated in a range of applications. In our recent work, we have shown that accuracy of Laplacian estimation can be improved with multipolar concentric ring electrodes using a general approach to estimation of the Laplacian for an (n + 1)-polar electrode with n rings using the (4n + 1)-point method for n ≥ 2. This paper takes the next step toward further improving the Laplacian estimate by proposing novel variable inter-ring distances concentric ring electrodes. Derived using a modified (4n + 1)-point method, linearly increasing and decreasing inter-ring distances tripolar (n = 2) and quadripolar (n = 3) electrode configurations are compared to their constant inter-ring distances counterparts. Finite element method modeling and analytic results are consistent and suggest that increasing inter-ring distances electrode configurations may decrease the truncation error resulting in more accurate Laplacian estimates compared to respective constant inter-ring distances configurations. For currently used tripolar electrode configuration, the truncation error may be decreased more than two-fold, while for the quadripolar configuration more than a six-fold decrease is expected.
TAPAS: tools to assist the targeted protein quantification of human alternative splice variants.
Yang, Jae-Seong; Sabidó, Eduard; Serrano, Luis; Kiel, Christina
2014-10-15
In proteomes of higher eukaryotes, many alternative splice variants can only be detected by their shared peptides. This makes it highly challenging to use peptide-centric mass spectrometry to distinguish and to quantify protein isoforms resulting from alternative splicing events. We have developed two complementary algorithms based on linear mathematical models to efficiently compute a minimal set of shared and unique peptides needed to quantify a set of isoforms and splice variants. Further, we developed a statistical method to estimate the splice variant abundances based on stable isotope labeled peptide quantities. The algorithms and databases are integrated in a web-based tool, and we have experimentally tested the limits of our quantification method using spiked proteins and cell extracts. The TAPAS server is available at http://davinci.crg.es/tapas/. Contact: luis.serrano@crg.eu or christina.kiel@crg.eu. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Latorre, Jorge; Llorens, Roberto; Colomer, Carolina; Alcañiz, Mariano
2018-04-27
Different studies have analyzed the potential of the off-the-shelf Microsoft Kinect, in its different versions, to estimate spatiotemporal gait parameters as a portable markerless low-cost alternative to laboratory-grade systems. However, variability in populations, measures, and methodologies prevents accurate comparison of the results. The objective of this study was to determine and compare the reliability of the existing Kinect-based methods to estimate spatiotemporal gait parameters in healthy and post-stroke adults. Forty-five healthy individuals and thirty-eight stroke survivors participated in this study. Participants walked five meters at a comfortable speed and their spatiotemporal gait parameters were estimated from the data retrieved by a Kinect v2, using the most common methods in the literature, and by visual inspection of the videotaped performance. Errors between both estimations were computed. For both healthy and post-stroke participants, the highest accuracy was obtained when using the speed of the ankles to estimate gait speed (3.6-5.5 cm/s), stride length (2.5-5.5 cm), and stride time (about 45 ms), and when using the distance between the sacrum and the ankles and toes to estimate double support time (about 65 ms) and swing time (60-90 ms). Although the accuracy of these methods is limited, these measures could occasionally complement traditional tools. Copyright © 2018 Elsevier Ltd. All rights reserved.
A default Bayesian hypothesis test for mediation.
Nuijten, Michèle B; Wetzels, Ruud; Matzke, Dora; Dolan, Conor V; Wagenmakers, Eric-Jan
2015-03-01
In order to quantify the relationship between multiple variables, researchers often carry out a mediation analysis. In such an analysis, a mediator (e.g., knowledge of a healthy diet) transmits the effect from an independent variable (e.g., classroom instruction on a healthy diet) to a dependent variable (e.g., consumption of fruits and vegetables). Almost all mediation analyses in psychology use frequentist estimation and hypothesis-testing techniques. A recent exception is Yuan and MacKinnon (Psychological Methods, 14, 301-322, 2009), who outlined a Bayesian parameter estimation procedure for mediation analysis. Here we complete the Bayesian alternative to frequentist mediation analysis by specifying a default Bayesian hypothesis test based on the Jeffreys-Zellner-Siow approach. We further extend this default Bayesian test by allowing a comparison to directional or one-sided alternatives, using Markov chain Monte Carlo techniques implemented in JAGS. All Bayesian tests are implemented in the R package BayesMed (Nuijten, Wetzels, Matzke, Dolan, & Wagenmakers, 2014).
Estimation of seismic quality factor: Artificial neural networks and current approaches
NASA Astrophysics Data System (ADS)
Yıldırım, Eray; Saatçılar, Ruhi; Ergintav, Semih
2017-01-01
The aims of this study are to estimate soil attenuation using alternatives to traditional methods, to compare the results of these methods, and to examine soil properties using the estimated results. The performance of the amplitude decay, spectral ratio, Wiener filter, and artificial neural network (ANN) methods is examined on field data and on synthetic data with and without noise. High-resolution seismic reflection data from Yeniköy (Arnavutköy, İstanbul) were used as field data, and 424 Q-value estimates were made with each method (1,696 in total). On both synthetic and field data, the Q estimates from the ANN, Wiener filter, and spectral ratio methods were statistically close to one another, whereas the amplitude decay method showed a higher estimation error. According to previous geological and geophysical studies in this area, the soil is water-saturated and quite weak, consisting of clay and sandy units, and, because of current and past landslides in the study area and its vicinity, researchers have reported heterogeneity in the soil. Under these physical conditions, Q values calculated from the field data can be expected to lie between 7.9 and 13.6. ANN models with various structures, training algorithms, inputs, and numbers of neurons were investigated. A total of 480 ANN models were generated: 60 models for noise-free synthetic data, 360 models for synthetic data with different noise contents, and 60 models applied to the data collected in the field. The models were tested to determine the most appropriate structure and training algorithm. In the final ANN, the input vectors consisted of the differences of the width, energy, and distance of seismic traces, and the output was the Q value. The success rates of the ANN method on both noise-free and noisy synthetic data were higher than those of the other three methods. Statistical tests on the Q values estimated from the field data also indicated that the ANN method gave the most suitable results. The Q value can be estimated practically and quickly by processing the traces with the recommended ANN model. Consequently, the ANN method could be used for estimating the Q value from seismic data.
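For reference, the classical spectral ratio method that the ANN results are compared against can be sketched in a few lines: the log spectral ratio of two traces separated by a traveltime difference is linear in frequency with slope -pi*dt/Q. This is a generic illustration under idealized assumptions (no geometric spreading or reflectivity corrections), not the processing flow used in the study.

```python
import numpy as np

def spectral_ratio_q(trace_near, trace_far, dt, delta_t, fmin=10.0, fmax=80.0):
    """Classical spectral-ratio Q estimate from two windowed traces.

    dt      : sample interval (s)
    delta_t : traveltime difference between the two windows (s)
    The log spectral ratio ln|A_far(f)/A_near(f)| is fitted with a line;
    attenuation theory gives slope = -pi * delta_t / Q.
    """
    freqs = np.fft.rfftfreq(len(trace_near), dt)
    a_near = np.abs(np.fft.rfft(trace_near))
    a_far = np.abs(np.fft.rfft(trace_far))
    band = (freqs >= fmin) & (freqs <= fmax) & (a_near > 0)
    slope, _ = np.polyfit(freqs[band], np.log(a_far[band] / a_near[band]), 1)
    return -np.pi * delta_t / slope

# synthetic check: impose Q = 50 on a random trace over a 0.1 s path
dt, n, q_true, delta_t = 0.001, 1024, 50.0, 0.1
rng = np.random.default_rng(0)
w = rng.standard_normal(n)
f = np.fft.rfftfreq(n, dt)
attenuated = np.fft.irfft(np.fft.rfft(w) * np.exp(-np.pi * f * delta_t / q_true), n)
print(spectral_ratio_q(w, attenuated, dt, delta_t))   # ~50
```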
Piovesan, Davide; Pierobon, Alberto; DiZio, Paul; Lackner, James R
2012-01-01
This study presents and validates a Time-Frequency technique for measuring 2-dimensional multijoint arm stiffness throughout a single planar movement as well as during static posture. It is proposed as an alternative to current regressive methods, which require numerous repetitions to obtain average stiffness on a small segment of the hand trajectory. The method is based on the analysis of the reassigned spectrogram of the arm's response to impulsive perturbations and can estimate arm stiffness on a trial-by-trial basis. Analytic and empirical methods are first derived and tested through modal analysis on synthetic data. The technique's accuracy and robustness are assessed by modeling the estimation of stiffness time profiles changing at different rates and affected by different noise levels. Our method obtains results comparable with two well-known regressive techniques. We also test how the technique can identify the viscoelastic component of non-linear and higher-than-second-order systems with a non-parametric approach. The technique proposed here is largely impervious to noise and can be used easily for both postural and movement tasks. Estimations of stiffness profiles are possible with only one perturbation, making our method a useful tool for estimating limb stiffness during motor learning and adaptation tasks, and for understanding the modulation of stiffness in individuals with neurodegenerative diseases.
Estimating Tool–Tissue Forces Using a 3-Degree-of-Freedom Robotic Surgical Tool
Zhao, Baoliang; Nelson, Carl A.
2016-01-01
Robot-assisted minimally invasive surgery (MIS) has gained popularity due to its high dexterity and reduced invasiveness to the patient; however, due to the loss of direct touch of the surgical site, surgeons may be prone to exert larger forces and cause tissue damage. To quantify tool–tissue interaction forces, researchers have tried to attach different kinds of sensors on the surgical tools. This sensor attachment generally makes the tools bulky and/or unduly expensive and may hinder the normal function of the tools; it is also unlikely that these sensors can survive harsh sterilization processes. This paper investigates an alternative method by estimating tool–tissue interaction forces using driving motors' current, and validates this sensorless force estimation method on a 3-degree-of-freedom (DOF) robotic surgical grasper prototype. The results show that the performance of this method is acceptable with regard to latency and accuracy. With this tool–tissue interaction force estimation method, it is possible to implement force feedback on existing robotic surgical systems without any sensors. This may allow a haptic surgical robot which is compatible with existing sterilization methods and surgical procedures, so that the surgeon can obtain tool–tissue interaction forces in real time, thereby increasing surgical efficiency and safety. PMID:27303591
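A minimal sketch of the sensorless idea: motor torque is proportional to current, and after removing a calibrated friction term the joint torque is mapped to a jaw force through the transmission. All constants below (torque constant, gear ratio, friction, efficiency, effective radius) are hypothetical placeholders that would have to be identified on the actual 3-DOF grasper.

```python
def grasp_force_from_current(motor_current_a,
                             torque_constant_nm_per_a=0.0234,
                             gear_ratio=20.0,
                             transmission_radius_m=0.005,
                             friction_torque_nm=0.002,
                             efficiency=0.7):
    """Estimate tool-tissue grasp force (N) from motor current (A).

    Motor torque = Kt * I; subtract a calibrated friction torque, scale by
    the gear ratio and drivetrain efficiency, and convert the joint torque
    to a force at the jaw through an effective transmission radius.
    All constants here are placeholders to be identified on a prototype.
    """
    motor_torque = torque_constant_nm_per_a * motor_current_a
    joint_torque = efficiency * gear_ratio * max(motor_torque - friction_torque_nm, 0.0)
    return joint_torque / transmission_radius_m

print(grasp_force_from_current(0.15))   # example: 150 mA of grasp-motor current
```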
Missing data and multiple imputation in clinical epidemiological research
Pedersen, Alma B; Mikkelsen, Ellen M; Cronin-Fenton, Deirdre; Kristensen, Nickolaj R; Pham, Tra My; Pedersen, Lars; Petersen, Irene
2017-01-01
Missing data are ubiquitous in clinical epidemiological research. Individuals with missing data may differ from those with no missing data in terms of the outcome of interest and prognosis in general. Missing data are often categorized into the following three types: missing completely at random (MCAR), missing at random (MAR), and missing not at random (MNAR). In clinical epidemiological research, missing data are seldom MCAR. Missing data can constitute considerable challenges in the analyses and interpretation of results and can potentially weaken the validity of results and conclusions. A number of methods have been developed for dealing with missing data. These include complete-case analyses, missing indicator method, single value imputation, and sensitivity analyses incorporating worst-case and best-case scenarios. If applied under the MCAR assumption, some of these methods can provide unbiased but often less precise estimates. Multiple imputation is an alternative method to deal with missing data, which accounts for the uncertainty associated with missing data. Multiple imputation is implemented in most statistical software under the MAR assumption and provides unbiased and valid estimates of associations based on information from the available data. The method affects not only the coefficient estimates for variables with missing data but also the estimates for other variables with no missing data. PMID:28352203
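A minimal sketch of multiple imputation under the MAR assumption, using scikit-learn's IterativeImputer with posterior sampling to create several completed datasets and then averaging the regression coefficients across them. The toy data and the simple averaging of point estimates are illustrative only; a full analysis would also pool within- and between-imputation variances with Rubin's rules.

```python
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
import statsmodels.api as sm

# toy clinical dataset with values assumed missing at random
rng = np.random.default_rng(1)
n = 200
age = rng.normal(60, 10, n)
biomarker = 0.5 * age + rng.normal(0, 5, n)
outcome = 0.1 * age + 0.2 * biomarker + rng.normal(0, 2, n)
biomarker[rng.random(n) < 0.3] = np.nan          # 30% missing
df = pd.DataFrame({"age": age, "biomarker": biomarker, "outcome": outcome})

m = 10                                            # number of imputations
coefs = []
for k in range(m):
    imputer = IterativeImputer(sample_posterior=True, random_state=k)
    completed = pd.DataFrame(imputer.fit_transform(df), columns=df.columns)
    X = sm.add_constant(completed[["age", "biomarker"]])
    coefs.append(sm.OLS(completed["outcome"], X).fit().params)

print(pd.concat(coefs, axis=1).mean(axis=1))      # pooled point estimates
```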
ERIC Educational Resources Information Center
Duncombe, William; Yinger, John
This policy brief explains why performance focus and educational cost indexes must go hand in hand, discusses alternative methods for estimating educational cost indexes, and shows how these costs indexes can be incorporated into a performance-based state aid program. A shift to educational performance standards, whether these standards are…
Aaron Weiskittel; Jereme Frank; David Walker; Phil Radtke; David Macfarlane; James Westfall
2015-01-01
Prediction of forest biomass and carbon is becoming an important issue in the United States. However, estimating forest biomass and carbon is difficult and relies on empirically derived regression equations. Based on recent findings from a national gap analysis and comprehensive assessment of the USDA Forest Service Forest Inventory and Analysis (USFS-FIA) component...
Bayesian Hypothesis Testing for Psychologists: A Tutorial on the Savage-Dickey Method
ERIC Educational Resources Information Center
Wagenmakers, Eric-Jan; Lodewyckx, Tom; Kuriyal, Himanshu; Grasman, Raoul
2010-01-01
In the field of cognitive psychology, the "p"-value hypothesis test has established a stranglehold on statistical reporting. This is unfortunate, as the "p"-value provides at best a rough estimate of the evidence that the data provide for the presence of an experimental effect. An alternative and arguably more appropriate measure of evidence is…
Direct seeding of shortleaf pine
Corinne S. Mann; David Gwaze
2007-01-01
Direct seeding is a potentially viable method for regenerating shortleaf pine, but it has not been used extensively. In Missouri, an estimated 10,000 acres have been direct-seeded with shortleaf pine, half of which are at Mark Twain National Forest. Direct seeding offers a flexible and efficient alternative to planting as a way to restore shortleaf pine in the Ozarks....
ERIC Educational Resources Information Center
Warkentien, Siri; Silver, David
2016-01-01
Public schools with impressive records of serving lower-performing students are often overlooked because their average test scores, even when students are growing quickly, are lower than scores in schools that serve higher-performing students. Schools may appear to be doing poorly either because baseline achievement is not easily accounted for or…
Length and Rate of Individual Participation in Various Activities on Recreation Sites and Areas
Gary L. Tyre; George A. James
1971-01-01
While statistically reliable methods exist for estimating recreation use on large areas, they often prove prohibitively expensive. Inexpensive alternatives involving the length and rate of individual participation in specific activities are presented, together with data and statistics on the recreational use of three large areas on the National Forests. This...
Chapman, Cole G; Brooks, John M
2016-12-01
To examine the settings of simulation evidence supporting the use of nonlinear two-stage residual inclusion (2SRI) instrumental variable (IV) methods for estimating average treatment effects (ATE) using observational data and investigate potential bias of 2SRI across alternative scenarios of essential heterogeneity and uniqueness of marginal patients. Potential bias of linear and nonlinear IV methods for ATE and local average treatment effects (LATE) is assessed using simulation models with a binary outcome and binary endogenous treatment across settings varying by the relationship between treatment effectiveness and treatment choice. Results show that nonlinear 2SRI models produce estimates of ATE and LATE that are substantially biased when the relationships between treatment and outcome for marginal patients are distinct from the relationships for the full population. Bias of linear IV estimates for LATE was low across all scenarios. Researchers are increasingly opting for nonlinear 2SRI to estimate treatment effects in models with binary and otherwise inherently nonlinear dependent variables, believing that it produces generally unbiased and consistent estimates. This research shows that the positive properties of nonlinear 2SRI rely on assumptions about the relationships between treatment effect heterogeneity and choice. © Health Research and Educational Trust.
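A minimal sketch of the nonlinear 2SRI estimator discussed above: a first-stage model of treatment choice on the instrument, whose residual is then included alongside treatment in a nonlinear second-stage outcome model. The simulated data, instrument strength, and logit/probit choices are assumptions for illustration, not the simulation design of the paper.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
z = rng.binomial(1, 0.5, n)                 # instrument
u = rng.normal(0, 1, n)                     # unobserved confounder
treat = (0.8 * z + 0.5 * u + rng.normal(0, 1, n) > 0.5).astype(float)
y = (0.7 * treat + 0.5 * u + rng.normal(0, 1, n) > 0.8).astype(float)

# Stage 1: treatment choice model, keep the residual
X1 = sm.add_constant(z)
stage1 = sm.Logit(treat, X1).fit(disp=0)
resid = treat - stage1.predict(X1)

# Stage 2: outcome model with the endogenous treatment plus the residual
X2 = sm.add_constant(np.column_stack([treat, resid]))
stage2 = sm.Probit(y, X2).fit(disp=0)
print(stage2.params)                        # [const, treatment, residual]
```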
Population trends from the North American Breeding Bird Survey
Peterjohn, B.G.; Sauer, J.R.; Robbins, C.S.; Martin, Thomas E.; Finch, Deborah M.
1995-01-01
INTRODUCTION: Most Neotropical migrant birds are difficult to count accurately and are moderately common over large breeding distributions. Consequently, little historical information exists on their large-scale population changes, and most of this information is anecdotal. Surveys begun in this century such as Breeding Bird Censuses and Christmas Bird Counts have the potential to provide this information, but only the North American Breeding Bird Survey (BBS) achieves the extensive continental coverage necessary to document population changes for most Neotropical migrant birds. Conservationists and ecologists have begun to use BBS data to estimate population trends, but there is still widespread confusion over exactly what these data show regarding population changes. In this chapter, we review the current state of knowledge regarding population changes in Neotropical migrant birds and the methods used to analyze these changes. The primary emphasis is on the BBS (Robbins et al. 1986) because this survey provides the best available data for estimating trends of Neotropical migrants on a continental scale. To address questions about methods of analyzing survey data, we review and compare some alternative methods of analyzing BBS data. We also discuss the effectiveness of the BBS in sampling Neotropical migrant species, and review possibilities for use of alternative data sets to verify trends from the BBS.
Nested Sampling for Bayesian Model Comparison in the Context of Salmonella Disease Dynamics
Dybowski, Richard; McKinley, Trevelyan J.; Mastroeni, Pietro; Restif, Olivier
2013-01-01
Understanding the mechanisms underlying the observed dynamics of complex biological systems requires the statistical assessment and comparison of multiple alternative models. Although this has traditionally been done using maximum likelihood-based methods such as Akaike's Information Criterion (AIC), Bayesian methods have gained in popularity because they provide more informative output in the form of posterior probability distributions. However, comparison between multiple models in a Bayesian framework is made difficult by the computational cost of numerical integration over large parameter spaces. A new, efficient method for the computation of posterior probabilities has recently been proposed and applied to complex problems from the physical sciences. Here we demonstrate how nested sampling can be used for inference and model comparison in biological sciences. We present a reanalysis of data from experimental infection of mice with Salmonella enterica showing the distribution of bacteria in liver cells. In addition to confirming the main finding of the original analysis, which relied on AIC, our approach provides: (a) integration across the parameter space, (b) estimation of the posterior parameter distributions (with visualisations of parameter correlations), and (c) estimation of the posterior predictive distributions for goodness-of-fit assessments of the models. The goodness-of-fit results suggest that alternative mechanistic models and a relaxation of the quasi-stationary assumption should be considered. PMID:24376528
Gowda, Charitha; Dong, Shiming; Potter, Rachel C; Dombkowski, Kevin J; Stokley, Shannon; Dempsey, Amanda F
2013-01-01
Immunization information systems (IISs) are valuable surveillance tools; however, population relocation may introduce bias when determining immunization coverage. We explored alternative methods for estimating the vaccine-eligible population when calculating adolescent immunization levels using a statewide IIS. We performed a retrospective analysis of the Michigan State Care Improvement Registry (MCIR) for all adolescents aged 11-18 years registered in the MCIR as of October 2010. We explored four methods for determining denominators: (1) including all adolescents with MCIR records, (2) excluding adolescents with out-of-state residence, (3) further excluding those without MCIR activity ≥ 10 years prior to the evaluation date, and (4) using a denominator based on U.S. Census data. We estimated state- and county-specific coverage levels for four adolescent vaccines. We found a 20% difference in estimated vaccination coverage between the most inclusive and restrictive denominator populations. Although there was some variability among the four methods in vaccination at the state level (2%-11%), greater variation occurred at the county level (up to 21%). This variation was substantial enough to potentially impact public health assessments of immunization programs. Generally, vaccines with higher coverage levels had greater absolute variation, as did counties with smaller populations. At the county level, using the four denominator calculation methods resulted in substantial differences in estimated adolescent immunization rates that were less apparent when aggregated at the state level. Further research is needed to ascertain the most appropriate method for estimating vaccine coverage levels using IIS data.
An extensible framework for capturing solvent effects in computer generated kinetic models.
Jalan, Amrit; West, Richard H; Green, William H
2013-03-14
Detailed kinetic models provide useful mechanistic insight into a chemical system. Manual construction of such models is laborious and error-prone, which has led to the development of automated methods for exploring chemical pathways. These methods rely on fast, high-throughput estimation of species thermochemistry and kinetic parameters. In this paper, we present a methodology for extending automatic mechanism generation to solution phase systems which requires estimation of solvent effects on reaction rates and equilibria. The linear solvation energy relationship (LSER) method of Abraham and co-workers is combined with Mintz correlations to estimate ΔG(solv)°(T) in over 30 solvents using solute descriptors estimated from group additivity. Simple corrections are found to be adequate for the treatment of radical sites, as suggested by comparison with known experimental data. The performance of scaled particle theory expressions for enthalpic-entropic decomposition of ΔG(solv)°(T) is also presented along with the associated computational issues. Similar high-throughput methods for solvent effects on free-radical kinetics are only available for a handful of reactions due to lack of reliable experimental data, and continuum dielectric calculations offer an alternative method for their estimation. For illustration, we model liquid phase oxidation of tetralin in different solvents computing the solvent dependence for ROO• + ROO• and ROO• + solvent reactions using polarizable continuum quantum chemistry methods. The resulting kinetic models show an increase in oxidation rate with solvent polarity, consistent with experiment. Further work needed to make this approach more generally useful is outlined.
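A minimal sketch of an Abraham-type LSER evaluation: the solvation free energy follows from a linear combination of solute descriptors (E, S, A, B, L) and solvent coefficients (c, e, s, a, b, l). The descriptor and coefficient values below are placeholders, not the fitted Abraham or Mintz parameters used in the paper.

```python
import numpy as np

R = 8.314  # J mol^-1 K^-1

def delta_g_solv(solute, solvent_coeffs, T=298.15):
    """Abraham-type LSER estimate of the solvation free energy (kJ/mol).

    log K = c + e*E + s*S + a*A + b*B + l*L, with (E, S, A, B, L) the solute
    descriptors and (c, e, s, a, b, l) the solvent coefficients, and
    dG_solv = -R*T*ln(10)*log K for the gas-to-solution process.
    Descriptor and coefficient values are placeholders, not fitted data.
    """
    c, e, s, a, b, l = solvent_coeffs
    E, S, A, B, L = solute
    log_k = c + e * E + s * S + a * A + b * B + l * L
    return -R * T * np.log(10.0) * log_k / 1000.0

toluene_like_solute = (0.60, 0.52, 0.0, 0.14, 3.3)       # illustrative E, S, A, B, L
some_solvent = (0.01, -0.20, 0.80, 3.6, 1.4, 0.85)       # illustrative c, e, s, a, b, l
print(delta_g_solv(toluene_like_solute, some_solvent))
```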
Grandchamp, Romain; Delorme, Arnaud
2011-01-01
In electroencephalography, the classical event-related potential model often proves to be a limited method to study complex brain dynamics. For this reason, spectral techniques adapted from signal processing such as event-related spectral perturbation (ERSP), and its variants event-related synchronization and event-related desynchronization, have been used over the past 20 years. They represent average spectral changes in response to a stimulus. There is no strong consensus on how these spectral methods should compare pre- and post-stimulus activity. When computing ERSP, pre-stimulus baseline removal is usually performed after averaging the spectral estimates of multiple trials. Correcting the baseline of each single trial prior to averaging spectral estimates is an alternative baseline correction method. However, we show that this method leads to positively skewed post-stimulus ERSP values. We eventually present new single-trial-based ERSP baseline correction methods that perform trial normalization or centering prior to applying classical baseline correction methods. We show that single-trial correction methods minimize the contribution of artifactual data trials with high-amplitude spectral estimates and are robust to outliers when performing statistical inference testing. We then characterize these methods in terms of their time–frequency responses and behavior compared to classical ERSP methods. PMID:21994498
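A minimal sketch contrasting the two baseline strategies described above: dividing the trial-averaged spectrogram by the average pre-stimulus baseline (classical correction) versus dividing each trial's spectrogram by its own baseline before averaging (single-trial correction). The windowing parameters and the helper name are assumptions; the paper's additional trial normalization and centering steps are not reproduced here.

```python
import numpy as np
from scipy.signal import spectrogram

def ersp(trials, fs, baseline_end, single_trial=True):
    """ERSP in dB for an array of epoched trials (n_trials x n_samples).

    baseline_end : number of samples belonging to the pre-stimulus baseline.
    single_trial=True  -> divide each trial's spectrogram by its own mean
                          baseline power before averaging
    single_trial=False -> average spectrograms first, then divide by the
                          average baseline (classical correction)
    """
    specs = []
    for trial in trials:
        f, t, p = spectrogram(trial, fs=fs, nperseg=64, noverlap=48)
        specs.append(p)
    specs = np.stack(specs)                              # trials x freqs x times
    n_base = np.searchsorted(t, baseline_end / fs)
    if single_trial:
        base = specs[:, :, :n_base].mean(axis=2, keepdims=True)
        out = (specs / base).mean(axis=0)
    else:
        mean_spec = specs.mean(axis=0)
        base = mean_spec[:, :n_base].mean(axis=1, keepdims=True)
        out = mean_spec / base
    return f, t, 10.0 * np.log10(out)

# usage (hypothetical data): f, t, ersp_db = ersp(epoched_eeg, fs=256, baseline_end=256)
```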
Estimating plant available water content from remotely sensed evapotranspiration
NASA Astrophysics Data System (ADS)
van Dijk, A. I. J. M.; Warren, G.; Doody, T.
2012-04-01
Plant available water content (PAWC) is an emergent soil property that is a critical variable in hydrological modelling. PAWC determines the active soil water storage and, in water-limited environments, is the main cause of different ecohydrological behaviour between (deep-rooted) perennial vegetation and (shallow-rooted) seasonal vegetation. Conventionally, PAWC is estimated for a combination of soil and vegetation from three variables: maximum rooting depth and the volumetric water content at field capacity and permanent wilting point, respectively. Without elaborate local field observation, large uncertainties in PAWC occur due to the assumptions associated with each of the three variables. We developed an alternative, observation-based method to estimate PAWC from precipitation observations and CSIRO MODIS Reflectance-based Evapotranspiration (CMRSET) estimates. Processing steps include (1) removing residual systematic bias in the CMRSET estimates, (2) making spatially appropriate assumptions about local water inputs and surface runoff losses, (3) using mean seasonal patterns in precipitation and CMRSET to estimate the seasonal pattern in soil water storage changes, (4) from these, calculating the mean seasonal storage range, which can be treated as an estimate of PAWC. We evaluate the resulting PAWC estimates against those determined in field experiments for 180 sites across Australia. We show that the method produces better estimates of PAWC than conventional techniques. In addition, the method provides detailed information with full continental coverage at moderate resolution (250 m) scale. The resulting maps can be used to identify likely groundwater dependent ecosystems and to derive PAWC distributions for each combination of soil and vegetation type.
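A minimal sketch of steps (2)-(4) under simplified assumptions: scale precipitation by an assumed runoff fraction, difference it against the ET climatology, and take the range of the cumulative seasonal storage cycle as the PAWC estimate. The monthly values and the runoff fraction are illustrative placeholders, not CMRSET data.

```python
import numpy as np

def pawc_from_climatology(precip_mm, et_mm, runoff_fraction=0.1):
    """Estimate plant available water capacity (mm) from mean monthly
    precipitation and ET climatologies (12 values each).

    Water input = (1 - runoff_fraction) * P; the cumulative input-minus-ET
    series traces the mean seasonal storage cycle, and its range is taken
    as the PAWC estimate. runoff_fraction is a placeholder assumption.
    """
    precip_mm = np.asarray(precip_mm, dtype=float)
    et_mm = np.asarray(et_mm, dtype=float)
    net = (1.0 - runoff_fraction) * precip_mm - et_mm
    net = net - net.mean()               # remove any long-term imbalance
    storage = np.cumsum(net)             # mean seasonal storage anomaly
    return storage.max() - storage.min() # seasonal storage range ~ PAWC

monthly_p = [10, 15, 30, 60, 90, 120, 110, 80, 50, 30, 15, 10]   # illustrative
monthly_et = [40, 45, 55, 60, 65, 60, 55, 50, 45, 45, 40, 40]    # illustrative
print(pawc_from_climatology(monthly_p, monthly_et))
```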
Analysis of Synchronization Phenomena in Broadband Signals with Nonlinear Excitable Media
NASA Astrophysics Data System (ADS)
Chernihovskyi, Anton; Elger, Christian E.; Lehnertz, Klaus
2009-12-01
We apply the method of frequency-selective excitation waves in excitable media to characterize synchronization phenomena in interacting complex dynamical systems by measuring coincidence rates of induced excitations. We relax the frequency-selectivity of excitable media and demonstrate two applications of the method to signals with broadband spectra. Findings obtained from analyzing time series of coupled chaotic oscillators as well as electroencephalographic (EEG) recordings from an epilepsy patient indicate that this method can provide an alternative and complementary way to estimate the degree of phase synchronization in noisy signals.
Fast function-on-scalar regression with penalized basis expansions.
Reiss, Philip T; Huang, Lei; Mennes, Maarten
2010-01-01
Regression models for functional responses and scalar predictors are often fitted by means of basis functions, with quadratic roughness penalties applied to avoid overfitting. The fitting approach described by Ramsay and Silverman in the 1990s amounts to a penalized ordinary least squares (P-OLS) estimator of the coefficient functions. We recast this estimator as a generalized ridge regression estimator, and present a penalized generalized least squares (P-GLS) alternative. We describe algorithms by which both estimators can be implemented, with automatic selection of optimal smoothing parameters, in a more computationally efficient manner than has heretofore been available. We discuss pointwise confidence intervals for the coefficient functions, simultaneous inference by permutation tests, and model selection, including a novel notion of pointwise model selection. P-OLS and P-GLS are compared in a simulation study. Our methods are illustrated with an analysis of age effects in a functional magnetic resonance imaging data set, as well as a reanalysis of a now-classic Canadian weather data set. An R package implementing the methods is publicly available.
Robust w-Estimators for Cryo-EM Class Means
Huang, Chenxi; Tagare, Hemant D.
2016-01-01
A critical step in cryogenic electron microscopy (cryo-EM) image analysis is to calculate the average of all images aligned to a projection direction. This average, called the “class mean”, improves the signal-to-noise ratio in single particle reconstruction (SPR). The averaging step is often compromised because of outlier images of ice, contaminants, and particle fragments. Outlier detection and rejection in the majority of current cryo-EM methods is done using cross-correlation with a manually determined threshold. Empirical assessment shows that the performance of these methods is very sensitive to the threshold. This paper proposes an alternative: a “w-estimator” of the average image, which is robust to outliers and which does not use a threshold. Various properties of the estimator, such as consistency and influence function are investigated. An extension of the estimator to images with different contrast transfer functions (CTFs) is also provided. Experiments with simulated and real cryo-EM images show that the proposed estimator performs quite well in the presence of outliers. PMID:26841397
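The sketch below illustrates the general w-estimator idea of weighting rather than thresholding, using an iteratively reweighted mean with Tukey biweights on the per-image residual norms. It is a generic robust-averaging example under assumed weights and scale estimates, not the specific estimator or its CTF extension derived in the paper.

```python
import numpy as np

def robust_class_mean(images, n_iter=10, c=4.685):
    """Iteratively reweighted robust mean of aligned images (n x h x w).

    Each image gets a Tukey-biweight weight based on its distance from the
    current mean, so outliers are down-weighted instead of being rejected
    with a hard cross-correlation threshold. This is a generic w-estimator
    style sketch, not the estimator derived in the paper.
    """
    images = np.asarray(images, dtype=float)
    mean = images.mean(axis=0)
    for _ in range(n_iter):
        d = np.sqrt(((images - mean) ** 2).sum(axis=(1, 2)))   # per-image residual norm
        med = np.median(d)
        scale = 1.4826 * np.median(np.abs(d - med)) + 1e-12     # robust (MAD) scale
        u = np.clip((d - med) / (c * scale), 0.0, None)         # only unusually distant
        w = np.where(u < 1.0, (1.0 - u**2) ** 2, 0.0)           # Tukey biweight
        mean = (w[:, None, None] * images).sum(axis=0) / (w.sum() + 1e-12)
    return mean

# usage (hypothetical data): class_mean = robust_class_mean(aligned_particle_stack)
```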
Dommert, M; Reginatto, M; Zboril, M; Fiedler, F; Helmbrecht, S; Enghardt, W; Lutz, B
2017-11-28
Bonner sphere measurements are typically analyzed using unfolding codes. It is well known that it is difficult to get reliable estimates of uncertainties for standard unfolding procedures. An alternative approach is to analyze the data using Bayesian parameter estimation. This method provides reliable estimates of the uncertainties of neutron spectra, leading to rigorous estimates of the uncertainties of the dose. We extend previous Bayesian approaches and apply the method to stray neutrons in proton therapy environments by introducing a new parameterized model which describes the main features of the expected neutron spectra. The parameterization is based on information that is available from measurements and detailed Monte Carlo simulations. The approach has been validated with the results of an experiment using Bonner spheres carried out at the experimental hall of the OncoRay proton therapy facility in Dresden. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
SEE rate estimation based on diffusion approximation of charge collection
NASA Astrophysics Data System (ADS)
Sogoyan, Armen V.; Chumakov, Alexander I.; Smolin, Anatoly A.
2018-03-01
The integral rectangular parallelepiped (IRPP) method remains the main approach to single event rate (SER) prediction for aerospace systems, despite the growing number of issues impairing the method's validity when applied to scaled technology nodes. One such issue is uncertainty in parameter extraction in the IRPP method, which can lead to a spread of several orders of magnitude in the subsequently calculated SER. The paper presents an alternative approach to SER estimation based on diffusion approximation of the charge collection by an IC element and a geometrical interpretation of the SEE cross-section. In contrast to the IRPP method, the proposed model includes only two parameters, which are uniquely determined from the experimental data for normal incidence irradiation at an ion accelerator. This approach eliminates the necessity of arbitrary decisions during parameter extraction and thus greatly simplifies the calculation procedure and increases the robustness of the forecast.
NASA Technical Reports Server (NTRS)
Nielsen, Jack N; Kaattari, George E; Drake, William C
1952-01-01
A simple method is presented for estimating lift, pitching-moment, and hinge-moment characteristics of all-movable wings in the presence of a body as well as the characteristics of wing-body combinations employing such wings. In general, good agreement between the method and experiment was obtained for the lift and pitching moment of the entire wing-body combination and for the lift of the wing in the presence of the body. The method is valid for moderate angles of attack, wing deflection angles, and width of gap between wing and body. The method of estimating hinge moment was not considered sufficiently accurate for triangular all-movable wings. An alternate procedure is proposed based on the experimental moment characteristics of the wing alone. Further theoretical and experimental work is required to substantiate fully the proposed procedure.
Methods for characterizing subsurface volatile contaminants using in-situ sensors
Ho, Clifford K [Albuquerque, NM
2006-02-21
An inverse analysis method for characterizing diffusion of vapor from an underground source of volatile contaminant using data taken by an in-situ sensor. The method uses one-dimensional solutions to the diffusion equation in Cartesian, cylindrical, or spherical coordinates for isotropic and homogenous media. If the effective vapor diffusion coefficient is known, then the distance from the source to the in-situ sensor can be estimated by comparing the shape of the predicted time-dependent vapor concentration response curve to the measured response curve. Alternatively, if the source distance is known, then the effective vapor diffusion coefficient can be estimated using the same inverse analysis method. A triangulation technique can be used with multiple sensors to locate the source in two or three dimensions. The in-situ sensor can contain one or more chemiresistor elements housed in a waterproof enclosure with a gas permeable membrane.
The surface renewal method for better spatial resolution of evapotranspiration measurements
NASA Astrophysics Data System (ADS)
Suvocarev, K.; Fischer, M.; Massey, J. H.; Reba, M. L.; Runkle, B.
2017-12-01
Evaluating feasible irrigation strategies when water is scarce requires measurements or estimations of evapotranspiration (ET). Direct observations of ET from agricultural fields are preferred, and micrometeorological methods such as eddy covariance (EC) provide a high-quality, continuous time series of ET. However, when replicates of the measurements are needed to compare irrigation strategies, the cost of such experiments is often prohibitive and limits experimental scope. An alternative micrometeorological approach to ET, the surface renewal (SR) method, may be reduced to a thermocouple and a propeller anemometer (Castellvi and Snyder, 2009). In this case, net radiation, soil and sensible heat flux (H) are measured and latent heat flux (an energy equivalent for ET) is estimated as the residual of the surface energy-balance equation. In our experiment, thermocouples (Type E Fine-Wire Thermocouple, FW3) were deployed next to the EC system and combined with mean horizontal wind speed measurements to obtain H using the SR method for three weeks. After compensating the temperature signal for non-ideal frequency response in the wavelet half-plane and correcting the sonic anemometer for the flow distortion (Horst et al., 2015), the SR H fluxes compared well to those measured by EC (r2 = 0.9, slope = 0.92). This result encouraged us to install thermocouples over 16 rice fields under different irrigation treatments (continuous cascade flood, continuous multiple inlet rice irrigation, alternate wetting and drying, and furrow irrigation). EC systems with a net radiometer and soil heat flux plates are deployed at three of these fields to provide a direct comparison. The measurement campaign will finish soon and the data will be processed to evaluate the SR approach for ET estimation. The results will be used to demonstrate better spatial resolution of ET measurements to support irrigation decisions in agricultural crops.
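A minimal sketch of the residual step described above: with net radiation, soil heat flux, and the SR-derived sensible heat flux in hand, latent heat flux is the energy-balance residual, and dividing by the latent heat of vaporization converts it to an ET rate. The numbers in the example call are illustrative.

```python
def et_from_energy_balance(rn, g, h, air_temp_c=25.0):
    """Latent heat flux as the energy-balance residual and the equivalent ET.

    rn, g, h : net radiation, soil heat flux and (surface-renewal) sensible
               heat flux, all in W m^-2.
    Returns (LE in W m^-2, ET in mm per hour). The latent heat of
    vaporization is approximated as a weak function of air temperature.
    """
    le = rn - g - h                                   # W m^-2
    lam = (2.501 - 0.002361 * air_temp_c) * 1e6       # J kg^-1
    et_mm_per_hr = le / lam * 3600.0                  # kg m^-2 h^-1 == mm h^-1
    return le, et_mm_per_hr

print(et_from_energy_balance(rn=500.0, g=50.0, h=150.0))   # midday example
```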
NASA Astrophysics Data System (ADS)
Flores, Andrés; Wiff, Rodrigo; Díaz, Eduardo; Carvajal, Bernardita
2017-08-01
Fecundity is a key aspect of the reproductive biology of fish species because it relates directly to total egg production. Yet, despite such importance, fecundity estimates are lacking or scarce for several fish species. The gravimetric method is the most widely used one to estimate fecundity, essentially scaling up the oocyte density to the ovary weight. It is a relatively simple and precise technique, but it is also time consuming because it requires counting all oocytes in an ovary subsample. The auto-diametric method, on the other hand, is relatively new for estimating fecundity and represents a rapid alternative, because it requires only an estimation of mean oocyte density from mean oocyte diameter. Using the extensive database available from commercial fishery and design surveys for southern blue whiting Micromesistius australis australis in the Southeast Pacific, we compared estimates of fecundity using both gravimetric and auto-diametric methods. Temporal variations in potential fecundity from the auto-diametric method were evaluated using generalised linear models considering predictors from maternal characteristics such as female size, condition factor, oocyte size, and gonadosomatic index. A global and time-invariant auto-diametric equation was evaluated using a simulation procedure based on a non-parametric bootstrap. Results indicated that there were no significant differences in fecundity estimates between the gravimetric and auto-diametric methods (p > 0.05). Simulation showed that the application of a global equation is unbiased and sufficiently precise to estimate time-invariant fecundity of this species. Temporal variations in fecundity were explained by maternal characteristics, revealing signals of fecundity down-regulation. We discuss how oocyte size and nutritional condition (measured as condition factor) are among the important factors determining fecundity. We also highlight the relevance of choosing the appropriate sampling period to conduct maturity studies and ensure precise estimates of fecundity of this species.
Establishment of a center of excellence for applied mathematical and statistical research
NASA Technical Reports Server (NTRS)
Woodward, W. A.; Gray, H. L.
1983-01-01
The state of the art was assessed with regard to efforts in support of the crop production estimation problem, and alternative generic proportion estimation techniques were investigated. Topics covered include modeling the greenness profile (Badhwar model), parameter estimation using mixture models such as CLASSY, and minimum distance estimation as an alternative to maximum likelihood estimation. Approaches to the problem of obtaining proportion estimates when the underlying distributions are asymmetric are examined, including the properties of the Weibull distribution.
NURD: an implementation of a new method to estimate isoform expression from non-uniform RNA-seq data
2013-01-01
Background: RNA-Seq technology has been used widely in transcriptome studies, and one of its most important applications is to estimate the expression level of genes and their alternative splicing isoforms. Several algorithms have been published to estimate expression based on different models. Recently, Wu et al. published a method that can accurately estimate isoform-level expression by considering position-related sequencing biases using nonparametric models. The method has advantages in handling different read distributions, but there has not been an efficient program implementing this algorithm. Results: We developed an efficient implementation of the algorithm in the program NURD. It uses a binary interval search algorithm. The program can correct both the global tendency of sequencing bias in the data and local sequencing bias specific to each gene. The correction makes the isoform expression estimation more reliable under various read distributions. The implementation is computationally efficient in both memory cost and running time and can be readily scaled up for huge datasets. Conclusion: NURD is an efficient and reliable tool for estimating isoform expression levels. Given the read mapping result and a gene annotation file, NURD outputs the expression estimates. The package is freely available for academic use at http://bioinfo.au.tsinghua.edu.cn/software/NURD/. PMID:23837734
Comparison of three techniques for estimating the forage intake of lactating dairy cows on pasture.
Macoon, B; Sollenberger, L E; Moore, J E; Staples, C R; Fike, J H; Portier, K M
2003-09-01
Quantifying DMI is necessary for estimation of nutrient consumption by ruminants, but it is inherently difficult on grazed pastures and even more so when supplements are fed. Our objectives were to compare three methods of estimating forage DMI (inference from animal performance, evaluation from fecal output using a pulse-dose marker, and estimation from herbage disappearance methods) and to identify the most useful approach or combination of approaches for estimating pasture intake by lactating dairy cows. During three continuous 28-d periods in the winter season, Holstein cows (Bos taurus; n = 32) grazed a cool-season grass or a cool-season grass-clover mixture at two stocking rates (SR; 5 vs. 2.5 cows/ha) and were fed two rates of concentrate supplementation (CS; 1 kg of concentrate [as-fed] per 2.5 or 3.5 kg of milk produced). Animal response data used in computations for the animal performance method were obtained from the latter 14 d of each period. For the pulse-dose marker method, chromium-mordanted fiber was used. Pasture sampling to determine herbage disappearance was done weekly throughout the study. Forage DMI estimated by the animal performance method was different among periods (P < 0.001; 6.5, 6.4, and 9.6 kg/d for Periods 1, 2, and 3, respectively), between SR (P < 0.001; 8.7 [low SR] vs. 6.3 kg/d [high SR]) and between CS (P < 0.01; 8.4 [low CS] vs. 6.6 kg/d [high CS]). The period and SR effects seemed to be related to forage mass. The pulse-dose marker method generally provided greater estimates of forage DMI (as much as 11.0 kg/d more than the animal performance method) and was not correlated with the other methods. Estimates of forage DMI by the herbage disappearance method were correlated with the animal performance method. The differences between estimates from these two methods, ranging from -4.7 to 5.4 kg/d, were much smaller than their differences from the pulse-dose marker estimates. The results of this study suggest that, when appropriate for the research objectives, the animal performance or herbage disappearance methods may be useful and less costly alternatives to using the pulse-dose method.
Supplying the energy and fiber needs of dairy cows from alternate feed sources.
Coppock, C E
1987-05-01
Alternate feeds are a major resource of the dairy industry. The major issue involving them is how to predict nutritive value accurately from laboratory analyses. Variation in nutrient content of most alternate feeds is greater than in feed grains. Another issue is which depression factors to use in adjusting values for TDN from maintenance to production intakes. The NRC uses an average depression of 8% for all feeds; others think each feedstuff should be depressed individually, and discount factors have been proposed. For some alternate feeds, large differences in net energy estimates occur. Neutral detergent fiber has been proposed as an indicator of productive energy, but it has several deficiencies with alternate feeds high in fat, molasses, or ash. A summative equation based on fat, ash, protein, NDF, and lignin has wider application for predicting NEL for all feeds. A roughage value index reflects a feed's capacity to stimulate chewing and rumination. Its use has special relevance for alternate feeds with small particle sizes, which may induce little chewing. Supplemental fat may increase the metabolizable energy converted to milk, but respiration experiments are needed.
Estimating Model Probabilities using Thermodynamic Markov Chain Monte Carlo Methods
NASA Astrophysics Data System (ADS)
Ye, M.; Liu, P.; Beerli, P.; Lu, D.; Hill, M. C.
2014-12-01
Markov chain Monte Carlo (MCMC) methods are widely used to evaluate model probability for quantifying model uncertainty. In a general procedure, MCMC simulations are first conducted for each individual model, and MCMC parameter samples are then used to approximate the marginal likelihood of the model by calculating the geometric mean of the joint likelihood of the model and its parameters. It has been found that the geometric mean method suffers from the numerical problem of a low convergence rate. A simple test case shows that even millions of MCMC samples are insufficient to yield an accurate estimate of the marginal likelihood. To resolve this problem, a thermodynamic method is used in which multiple MCMC runs are performed with different values of a heating coefficient between zero and one. When the heating coefficient is zero, the MCMC run is equivalent to a random walk MC in the prior parameter space; when the heating coefficient is one, the MCMC run is the conventional one. For a simple case with an analytical form of the marginal likelihood, the thermodynamic method yields a more accurate estimate than the geometric mean method. This is also demonstrated for a case of groundwater modeling with consideration of four alternative models postulated based on different conceptualizations of a confining layer. This groundwater example shows that model probabilities estimated using the thermodynamic method are more reasonable than those obtained using the geometric method. The thermodynamic method is general, and can be used for a wide range of environmental problems for model uncertainty quantification.
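A minimal sketch of the thermodynamic (power-posterior) idea on a toy Gaussian model: run a Metropolis sampler at several heating coefficients beta between zero and one, record the mean log-likelihood at each beta, and integrate over beta to obtain the log marginal likelihood. The model, prior, proposal step, and beta schedule are assumptions for illustration, not the groundwater application.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(1.0, 1.0, size=50)          # toy data, unit-variance Gaussian

def log_like(mu):
    return -0.5 * np.sum((data - mu) ** 2) - 0.5 * len(data) * np.log(2 * np.pi)

def log_prior(mu):                            # N(0, 10^2) prior on the mean
    return -0.5 * (mu / 10.0) ** 2 - np.log(10.0 * np.sqrt(2 * np.pi))

def sample_power_posterior(beta, n_samples=5000, step=0.3):
    """Metropolis sampler targeting prior * likelihood**beta."""
    mu, cur = 0.0, log_prior(0.0) + beta * log_like(0.0)
    lls = np.empty(n_samples)
    for i in range(n_samples):
        prop = mu + step * rng.standard_normal()
        cand = log_prior(prop) + beta * log_like(prop)
        if np.log(rng.random()) < cand - cur:
            mu, cur = prop, cand
        lls[i] = log_like(mu)
    return lls

betas = np.linspace(0.0, 1.0, 11) ** 3        # concentrate runs near beta = 0
mean_ll = [sample_power_posterior(b)[1000:].mean() for b in betas]

# thermodynamic integration: log Z = integral over beta of E_beta[log L]
log_marginal = 0.0
for i in range(len(betas) - 1):
    log_marginal += 0.5 * (mean_ll[i] + mean_ll[i + 1]) * (betas[i + 1] - betas[i])
print(log_marginal)
```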
Zee, Jarcy; Xie, Sharon X.
2015-01-01
When a true survival endpoint cannot be assessed for some subjects, an alternative endpoint that measures the true endpoint with error may be collected, which often occurs when obtaining the true endpoint is too invasive or costly. We develop an estimated likelihood function for the situation where we have both uncertain endpoints for all participants and true endpoints for only a subset of participants. We propose a nonparametric maximum estimated likelihood estimator of the discrete survival function of time to the true endpoint. We show that the proposed estimator is consistent and asymptotically normal. We demonstrate through extensive simulations that the proposed estimator has little bias compared to the naïve Kaplan-Meier survival function estimator, which uses only uncertain endpoints, and is more efficient under moderate missingness than the complete-case Kaplan-Meier survival function estimator, which uses only available true endpoints. Finally, we apply the proposed method to a dataset for estimating the risk of developing Alzheimer's disease from the Alzheimer's Disease Neuroimaging Initiative. PMID:25916510
An ROC-type measure of diagnostic accuracy when the gold standard is continuous-scale.
Obuchowski, Nancy A
2006-02-15
ROC curves and summary measures of accuracy derived from them, such as the area under the ROC curve, have become the standard for describing and comparing the accuracy of diagnostic tests. Methods for estimating ROC curves rely on the existence of a gold standard which dichotomizes patients into disease present or absent. There are, however, many examples of diagnostic tests whose gold standards are not binary-scale, but rather continuous-scale. Unnatural dichotomization of these gold standards leads to bias and inconsistency in estimates of diagnostic accuracy. In this paper, we propose a non-parametric estimator of diagnostic test accuracy which does not require dichotomization of the gold standard. This estimator has an interpretation analogous to the area under the ROC curve. We propose a confidence interval for test accuracy and a statistical test for comparing accuracies of tests from paired designs. We compare the performance (i.e. CI coverage, type I error rate, power) of the proposed methods with several alternatives. An example is presented where the accuracies of two quick blood tests for measuring serum iron concentrations are estimated and compared.
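A minimal sketch of the ROC-analogous idea: for every pair of subjects whose continuous gold-standard values differ, score whether the diagnostic test orders the pair the same way (ties count one half) and average over pairs. The helper and data are illustrative; the paper's variance estimator and paired-comparison test are not reproduced here.

```python
import numpy as np
from itertools import combinations

def concordance_with_continuous_gold(test, gold):
    """Nonparametric concordance-type accuracy estimate.

    For every pair of subjects, score 1 if the diagnostic test orders the
    pair the same way as the continuous gold standard, 0.5 for ties, else 0;
    the average over all informative pairs is an AUC-like accuracy index.
    """
    test, gold = np.asarray(test, float), np.asarray(gold, float)
    scores = []
    for i, j in combinations(range(len(test)), 2):
        if gold[i] == gold[j]:
            continue                         # pair carries no ordering information
        t_diff = test[i] - test[j]
        g_diff = gold[i] - gold[j]
        if t_diff == 0:
            scores.append(0.5)
        else:
            scores.append(1.0 if np.sign(t_diff) == np.sign(g_diff) else 0.0)
    return float(np.mean(scores))

# illustrative data: a quick serum-iron test versus the reference assay
gold = [10, 22, 35, 47, 60, 75]
test = [12, 20, 40, 45, 58, 70]
print(concordance_with_continuous_gold(test, gold))
```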
Comparison of machine-learning methods for above-ground biomass estimation based on Landsat imagery
NASA Astrophysics Data System (ADS)
Wu, Chaofan; Shen, Huanhuan; Shen, Aihua; Deng, Jinsong; Gan, Muye; Zhu, Jinxia; Xu, Hongwei; Wang, Ke
2016-07-01
Biomass is a significant biophysical parameter of a forest ecosystem, and accurate biomass estimation on the regional scale provides important information for carbon-cycle investigation and sustainable forest management. In this study, Landsat satellite imagery data combined with field-based measurements were integrated through comparisons of five regression approaches [stepwise linear regression, K-nearest neighbor, support vector regression, random forest (RF), and stochastic gradient boosting] with two different candidate variable strategies to implement the optimal spatial above-ground biomass (AGB) estimation. The results suggested that the RF algorithm exhibited the best performance under 10-fold cross-validation with respect to R2 (0.63) and root-mean-square error (26.44 ton/ha). Consequently, the map of estimated AGB was generated with a mean value of 89.34 ton/ha in northwestern Zhejiang Province, China, with a similar pattern to the distribution mode of local forest species. This research indicates that machine-learning approaches associated with Landsat imagery provide an economical way to estimate biomass. Moreover, ensemble methods using all candidate variables, especially for Landsat images, provide an alternative for regional biomass simulation.
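A minimal sketch of the best-performing configuration reported above: a random forest regressor evaluated with 10-fold cross-validation and summarized by R2 and RMSE. The synthetic predictors stand in for the Landsat-derived variables; the hyperparameters are arbitrary assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict, KFold
from sklearn.metrics import r2_score, mean_squared_error

# X: per-plot predictors derived from Landsat (band reflectances, vegetation
# indices, texture metrics); y: field-measured AGB in ton/ha. Synthetic here.
rng = np.random.default_rng(42)
X = rng.normal(size=(300, 8))
y = 90 + 25 * X[:, 0] - 15 * X[:, 3] + rng.normal(0, 20, size=300)

model = RandomForestRegressor(n_estimators=500, random_state=0)
pred = cross_val_predict(model, X, y, cv=KFold(n_splits=10, shuffle=True, random_state=0))

print("R2  :", round(r2_score(y, pred), 2))
print("RMSE:", round(mean_squared_error(y, pred) ** 0.5, 1), "ton/ha")
```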
A comparison of approaches for estimating bottom-sediment mass in large reservoirs
Juracek, Kyle E.
2006-01-01
Estimates of sediment and sediment-associated constituent loads and yields from drainage basins are necessary for the management of reservoir-basin systems to address important issues such as reservoir sedimentation and eutrophication. One method for the estimation of loads and yields requires a determination of the total mass of sediment deposited in a reservoir. This method involves a sediment volume-to-mass conversion using bulk-density information. A comparison of four computational approaches (partition, mean, midpoint, strategic) for using bulk-density information to estimate total bottom-sediment mass in four large reservoirs indicated that the differences among the approaches were not statistically significant. However, the lack of statistical significance may be a result of the small sample size. Compared to the partition approach, which was presumed to provide the most accurate estimates of bottom-sediment mass, the results achieved using the strategic, mean, and midpoint approaches differed by as much as ±4, ±20, and ±44 percent, respectively. It was concluded that the strategic approach may merit further investigation as a less time-consuming and less costly alternative to the partition approach.
Likelihood-Based Random-Effect Meta-Analysis of Binary Events.
Amatya, Anup; Bhaumik, Dulal K; Normand, Sharon-Lise; Greenhouse, Joel; Kaizar, Eloise; Neelon, Brian; Gibbons, Robert D
2015-01-01
Meta-analysis has been used extensively for evaluation of efficacy and safety of medical interventions. Its advantages and utilities are well known. However, recent studies have raised questions about the accuracy of the commonly used moment-based meta-analytic methods in general and for rare binary outcomes in particular. The issue is further complicated for studies with heterogeneous effect sizes. Likelihood-based mixed-effects modeling provides an alternative to moment-based methods such as inverse-variance weighted fixed- and random-effects estimators. In this article, we compare and contrast different mixed-effect modeling strategies in the context of meta-analysis. Their performance in estimation and testing of overall effect and heterogeneity are evaluated when combining results from studies with a binary outcome. Models that allow heterogeneity in both baseline rate and treatment effect across studies have low type I and type II error rates, and their estimates are the least biased among the models considered.
The Estimation of Tree Posterior Probabilities Using Conditional Clade Probability Distributions
Larget, Bret
2013-01-01
In this article I introduce the idea of conditional independence of separated subtrees as a principle by which to estimate the posterior probability of trees using conditional clade probability distributions rather than simple sample relative frequencies. I describe an algorithm for these calculations and software which implements these ideas. I show that these alternative calculations are very similar to simple sample relative frequencies for high probability trees but are substantially more accurate for relatively low probability trees. The method allows the posterior probability of unsampled trees to be calculated when these trees contain only clades that are in other sampled trees. Furthermore, the method can be used to estimate the total probability of the set of sampled trees which provides a measure of the thoroughness of a posterior sample. [Bayesian phylogenetics; conditional clade distributions; improved accuracy; posterior probabilities of trees.] PMID:23479066
Streamflow characteristics related to channel geometry of streams in western United States
Hedman, E.R.; Osterkamp, W.R.
1982-01-01
Assessment of surface-mining and reclamation activities generally requires extensive hydrologic data. Adequate streamflow data from instrumented gaging stations rarely are available, and estimates of surface-water discharge based on rainfall-runoff models, drainage area, and basin characteristics sometimes have proven unreliable. Channel-geometry measurements offer an alternative method of quickly and inexpensively estimating streamflow characteristics for ungaged streams. The method uses the empirical development of equations to yield a discharge value from channel-geometry and channel-material data. The equations are developed by collecting data at numerous streamflow-gaging sites and statistically relating those data to selected discharge characteristics. Mean annual runoff and flood discharges with selected recurrence intervals can be estimated for perennial, intermittent, and ephemeral streams. The equations were developed from data collected in the western one-half of the conterminous United States. The effects of the channel-material and runoff characteristics are accounted for by the equations.
An efficient quantum algorithm for spectral estimation
NASA Astrophysics Data System (ADS)
Steffens, Adrian; Rebentrost, Patrick; Marvian, Iman; Eisert, Jens; Lloyd, Seth
2017-03-01
We develop an efficient quantum implementation of an important signal processing algorithm for line spectral estimation: the matrix pencil method, which determines the frequencies and damping factors of signals consisting of finite sums of exponentially damped sinusoids. Our algorithm provides a quantum speedup in a natural regime where the sampling rate is much higher than the number of sinusoid components. Along the way, we develop techniques that are expected to be useful for other quantum algorithms as well—consecutive phase estimations to efficiently make products of asymmetric low rank matrices classically accessible and an alternative method to efficiently exponentiate non-Hermitian matrices. Our algorithm features an efficient quantum-classical division of labor: the time-critical steps are implemented in quantum superposition, while an interjacent step, requiring much fewer parameters, can operate classically. We show that frequencies and damping factors can be obtained in time logarithmic in the number of sampling points, exponentially faster than known classical algorithms.
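For readers unfamiliar with the classical (non-quantum) building block, the sketch below implements a basic matrix pencil estimate: form a Hankel matrix from the samples, build the shifted pair (Y1, Y2), and read the damped-sinusoid poles off the dominant eigenvalues of pinv(Y1) @ Y2. The pencil parameter and model-order handling are simplified assumptions, not the quantum algorithm described in the abstract.

```python
import numpy as np

def matrix_pencil(y, dt, n_components, pencil=None):
    """Classical matrix pencil estimate of exponentially damped sinusoids.

    y            : uniformly sampled signal
    dt           : sampling interval
    n_components : number of complex exponentials to recover
    Returns damping factors (1/s) and frequencies (Hz).
    """
    n = len(y)
    L = pencil if pencil is not None else n // 3          # pencil parameter
    # Hankel data matrix, then the shifted pair (Y1, Y2)
    Y = np.array([y[i:i + L + 1] for i in range(n - L)])
    Y1, Y2 = Y[:, :-1], Y[:, 1:]
    # poles z_k are eigenvalues of pinv(Y1) @ Y2; keep the dominant ones
    z = np.linalg.eigvals(np.linalg.pinv(Y1) @ Y2)
    z = z[np.argsort(-np.abs(z))][:n_components]
    s = np.log(z) / dt
    return s.real, s.imag / (2 * np.pi)

# check on one damped sinusoid: 25 Hz, damping -3 s^-1 (conjugate pole pair)
dt = 0.001
t = np.arange(0, 1, dt)
sig = np.exp(-3 * t) * np.cos(2 * np.pi * 25 * t)
print(matrix_pencil(sig, dt, n_components=2))   # ~(-3, -3) and (+25, -25)
```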
NASA Astrophysics Data System (ADS)
Van Zeebroeck, M.; Tijskens, E.; Van Liedekerke, P.; Deli, V.; De Baerdemaeker, J.; Ramon, H.
2003-09-01
A pendulum device has been developed to measure contact force, displacement and displacement rate of an impactor during its impact on the sample. Displacement, classically measured by double integration of an accelerometer, was determined in an alternative way using a more accurate incremental optical encoder. The parameters of the Kuwabara-Kono contact force model for impact of spheres have been estimated using an optimization method, taking the experimentally measured displacement, displacement rate and contact force into account. The accuracy of the method was verified using a rubber ball. Contact force parameters for the Kuwabara-Kono model have been estimated with success for three biological materials, i.e., apples, tomatoes and potatoes. The variability in the parameter estimations for the biological materials was quite high and can be explained by geometric differences (radius of curvature) and by biological variation of mechanical tissue properties.
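The estimation step can be illustrated with the usual form of the Kuwabara-Kono model, F = k·δ^(3/2) + λ·δ^(1/2)·δ̇, fitted to measured displacement, displacement rate, and force by nonlinear least squares. A sketch with synthetic data standing in for the pendulum measurements (all parameter values are invented):

```python
import numpy as np
from scipy.optimize import least_squares

def kuwabara_kono(params, delta, delta_dot):
    """Kuwabara-Kono contact force: elastic Hertz term plus a
    displacement-dependent viscous term."""
    k, lam = params
    return k * delta**1.5 + lam * np.sqrt(delta) * delta_dot

# Hypothetical impact measurements: displacement (m), rate (m/s), force (N).
delta = np.linspace(1e-5, 2e-3, 50)
delta_dot = 0.5 * np.cos(np.linspace(0, np.pi / 2, 50))
f_meas = kuwabara_kono([2.0e6, 8.0e2], delta, delta_dot)
f_meas += np.random.default_rng(0).normal(0, 0.02, 50)  # measurement noise

res = least_squares(
    lambda p: kuwabara_kono(p, delta, delta_dot) - f_meas,
    x0=[1.0e6, 1.0e2],                 # initial guess for stiffness, damping
    bounds=([0, 0], [np.inf, np.inf]),
)
print("estimated k, lambda:", res.x)   # ~[2e6, 8e2]
```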
DENSITY: software for analysing capture-recapture data from passive detector arrays
Efford, M.G.; Dawson, D.K.; Robbins, C.S.
2004-01-01
A general computer-intensive method is described for fitting spatial detection functions to capture-recapture data from arrays of passive detectors such as live traps and mist nets. The method is used to estimate the population density of 10 species of breeding birds sampled by mist-netting in deciduous forest at Patuxent Research Refuge, Laurel, Maryland, U.S.A., from 1961 to 1972. Total density (9.9 ± 0.6 ha⁻¹, mean ± SE) appeared to decline over time (slope −0.41 ± 0.15 ha⁻¹ y⁻¹). The mean precision of annual estimates for all 10 species pooled was acceptable (CV(D) = 14%). Spatial analysis of closed-population capture-recapture data highlighted deficiencies in non-spatial methodologies. For example, effective trapping area cannot be assumed constant when detection probability is variable. Simulation may be used to evaluate alternative designs for mist net arrays where density estimation is a study goal.
Maximum Likelihood Reconstruction for Magnetic Resonance Fingerprinting
Zhao, Bo; Setsompop, Kawin; Ye, Huihui; Cauley, Stephen; Wald, Lawrence L.
2017-01-01
This paper introduces a statistical estimation framework for magnetic resonance (MR) fingerprinting, a recently proposed quantitative imaging paradigm. Within this framework, we present a maximum likelihood (ML) formalism to estimate multiple parameter maps directly from highly undersampled, noisy k-space data. A novel algorithm, based on variable splitting, the alternating direction method of multipliers, and the variable projection method, is developed to solve the resulting optimization problem. Representative results from both simulations and in vivo experiments demonstrate that the proposed approach yields significantly improved accuracy in parameter estimation, compared to the conventional MR fingerprinting reconstruction. Moreover, the proposed framework provides new theoretical insights into the conventional approach. We show analytically that the conventional approach is an approximation to the ML reconstruction; more precisely, it is exactly equivalent to the first iteration of the proposed algorithm for the ML reconstruction, provided that a gridding reconstruction is used as an initialization. PMID:26915119
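The variable projection idea at the heart of the algorithm can be conveyed on a toy separable model: the signal is linear in proton density but nonlinear in the tissue parameter, so the linear coefficient is eliminated in closed form and only the nonlinear parameter is searched, which is also essentially what conventional MRF dictionary matching does. A sketch under a toy exponential model, not the paper's k-space formulation:

```python
import numpy as np

def varpro_match(y, thetas, signal_model):
    """Variable projection over a 1-D nonlinear parameter: for each candidate
    theta the optimal linear scale rho has a closed form, so only theta is
    searched.  This mirrors dictionary matching in MR fingerprinting."""
    best = (np.inf, None, None)
    for theta in thetas:
        phi = signal_model(theta)
        rho = np.vdot(phi, y) / np.vdot(phi, phi)   # closed-form linear fit
        resid = np.linalg.norm(y - rho * phi)
        if resid < best[0]:
            best = (resid, theta, rho)
    return best[1], best[2]

# Toy exponential-decay "fingerprint": y[n] = rho * exp(-t[n] / T2).
t = np.linspace(0, 0.3, 64)
model = lambda T2: np.exp(-t / T2)
rng = np.random.default_rng(1)
y = 0.8 * model(0.07) + 0.01 * rng.normal(size=t.size)

T2_est, rho_est = varpro_match(y, np.linspace(0.01, 0.2, 200), model)
print(T2_est, rho_est)   # ~0.07 and ~0.8
```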
OSA severity assessment based on sleep breathing analysis using ambient microphone.
Dafna, E; Tarasiuk, A; Zigel, Y
2013-01-01
In this paper, an audio-based system for severity estimation of obstructive sleep apnea (OSA) is proposed. The system estimates the apnea-hypopnea index (AHI), which is the average number of apneic events per hour of sleep. This system is based on a Gaussian mixture regression algorithm that was trained and validated on full-night audio recordings. A feature selection process using a genetic algorithm was applied to select the best features extracted from the time and spectral domains. A total of 155 subjects, referred for in-laboratory polysomnography (PSG), were recruited. Using the PSG-derived AHI score as a gold standard, the performance of the proposed system was evaluated using Pearson correlation, AHI error, and diagnostic agreement. A correlation of R = 0.89, an AHI error of 7.35 events/hr, and a diagnostic agreement of 77.3% were achieved, demonstrating encouraging performance and a reliable non-contact alternative method for OSA severity estimation.
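Gaussian mixture regression itself is compact: fit a GMM to the joint feature-target vectors, then predict with the responsibility-weighted mixture of component-wise conditional means. A sketch with an invented one-dimensional feature standing in for the audio features; the actual feature set and trained model are not reproduced:

```python
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture

def gmm_regression(gmm, X, dx):
    """Conditional mean E[y | x] under a GMM fitted to joint [x, y] vectors."""
    X = np.asarray(X, dtype=float)
    preds = np.zeros(len(X))
    for i, x in enumerate(X):
        w, m = [], []
        for pi_k, mu, cov in zip(gmm.weights_, gmm.means_, gmm.covariances_):
            mu_x, mu_y = mu[:dx], mu[dx:]
            Sxx, Sxy = cov[:dx, :dx], cov[:dx, dx:]
            w.append(pi_k * multivariate_normal.pdf(x, mu_x, Sxx))
            m.append(mu_y + Sxy.T @ np.linalg.solve(Sxx, x - mu_x))
        w = np.array(w) / np.sum(w)          # responsibilities given x
        preds[i] = w @ np.array(m).ravel()   # mixture of conditional means
    return preds

# Hypothetical 1-D acoustic feature with a nonlinear relation to AHI.
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, (400, 1))
y = 40 * x[:, 0] ** 2 + rng.normal(0, 2, 400)
gmm = GaussianMixture(n_components=4, random_state=0).fit(
    np.hstack([x, y[:, None]]))
print(gmm_regression(gmm, [[0.2], [0.8]], dx=1))   # roughly [1.6, 25.6]
```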
Goicoechea, Héctor C; Olivieri, Alejandro C; Tauler, Romà
2010-03-01
Correlation constrained multivariate curve resolution-alternating least-squares is shown to be a feasible method for processing first-order instrumental data and achieving analyte quantitation in the presence of unexpected interferences. Both for simulated and experimental data sets, the proposed method could correctly retrieve the analyte and interference spectral profiles and perform accurate estimations of analyte concentrations in test samples. Since no information concerning the interferences was present in the calibration samples, the proposed multivariate calibration approach including the correlation constraint facilitates the achievement of the so-called second-order advantage for the analyte of interest, which is known to be present for more complex, higher-order, richer instrumental data. The proposed method is tested using a simulated data set and two experimental data systems, one for the determination of ascorbic acid in powder juices using UV-visible absorption spectral data, and another for the determination of tetracycline in serum samples using fluorescence emission spectroscopy.
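The alternating least-squares core of MCR-ALS is a short loop. A bare-bones sketch on synthetic two-component spectra follows; non-negativity is enforced crudely by clipping, the initialization cheats by starting near the true spectra for brevity, and the paper's correlation constraint is indicated only in a comment:

```python
import numpy as np

def mcr_als(D, S0, n_iter=100):
    """Bare-bones MCR-ALS: alternately solve D = C @ S.T for C and S with
    non-negativity enforced by clipping.  (The correlation constraint of the
    paper would additionally regress the analyte column of C against known
    calibration concentrations at every iteration.)"""
    S = S0.copy()
    for _ in range(n_iter):
        C = np.clip(D @ np.linalg.pinv(S.T), 0, None)    # concentration profiles
        S = np.clip((np.linalg.pinv(C) @ D).T, 0, None)  # spectral profiles
    return C, S

# Hypothetical two-component system: analyte plus unexpected interferent.
rng = np.random.default_rng(2)
wl = np.linspace(0, 1, 80)
spectra = np.vstack([np.exp(-((wl - 0.4) / 0.08) ** 2),
                     np.exp(-((wl - 0.7) / 0.12) ** 2)]).T   # 80 x 2
conc = rng.uniform(0.1, 1.0, (12, 2))                        # 12 samples
D = conc @ spectra.T + 0.005 * rng.normal(size=(12, 80))

C, S = mcr_als(D, S0=spectra + 0.1 * rng.normal(size=spectra.shape))
print(np.corrcoef(C[:, 0], conc[:, 0])[0, 1])   # analyte profile recovery
```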
Brooks, John M; Chapman, Cole G; Schroeder, Mary C
2018-06-01
Patient-centred care requires evidence of treatment effects across many outcomes. Outcomes can be beneficial (e.g. increased survival or cure rates) or detrimental (e.g. adverse events, pain associated with treatment, treatment costs, time required for treatment). Treatment effects may also be heterogeneous across outcomes and across patients. Randomized controlled trials are usually insufficient to supply evidence across outcomes. Observational data analysis is an alternative, with the caveat that the treatments observed are choices. Real-world treatment choice often involves complex assessment of expected effects across the array of outcomes. Failure to account for this complexity when interpreting treatment effect estimates could lead to clinical and policy mistakes. Our objective was to assess the properties of treatment effect estimates based on choice when treatments have heterogeneous effects on both beneficial and detrimental outcomes across patients. Simulation methods were used to highlight the sensitivity of treatment effect estimates to the distributions of treatment effects across patients across outcomes. Scenarios with alternative correlations between benefit and detriment treatment effects across patients were used. Regression and instrumental variable estimators were applied to the simulated data for both outcomes. True treatment effect parameters are sensitive to the relationships of treatment effectiveness across outcomes in each study population. In each simulation scenario, treatment effect estimate interpretations for each outcome are aligned with results shown previously in single outcome models, but these estimates vary across simulated populations with the correlations of treatment effects across patients across outcomes. If estimator assumptions are valid, estimates across outcomes can be used to assess the optimality of treatment rates in a study population. However, because true treatment effect parameters are sensitive to correlations of treatment effects across outcomes, decision makers should be cautious about generalizing estimates to other populations.
LeDell, Erin; Petersen, Maya; van der Laan, Mark
2015-01-01
In binary classification problems, the area under the ROC curve (AUC) is commonly used to evaluate the performance of a prediction model. Often, it is combined with cross-validation in order to assess how the results will generalize to an independent data set. In order to evaluate the quality of an estimate for cross-validated AUC, we obtain an estimate of its variance. For massive data sets, the process of generating a single performance estimate can be computationally expensive. Additionally, when using a complex prediction method, the process of cross-validating a predictive model on even a relatively small data set can still require a large amount of computation time. Thus, in many practical settings, the bootstrap is a computationally intractable approach to variance estimation. As an alternative to the bootstrap, we demonstrate a computationally efficient influence curve based approach to obtaining a variance estimate for cross-validated AUC. PMID:26279737
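For a single (non-cross-validated) AUC, the influence-curve variance reduces to the familiar DeLong-type estimate built from placement values, which conveys the flavor of the approach at a fraction of the bootstrap's cost. A sketch; the paper's estimator additionally aggregates influence curves across cross-validation folds:

```python
import numpy as np

def auc_with_ic_variance(scores, labels):
    """AUC plus an influence-curve (DeLong-type) variance estimate: each
    observation's contribution to the AUC is computed exactly once, so no
    resampling is needed."""
    s1, s0 = scores[labels == 1], scores[labels == 0]
    n1, n0 = len(s1), len(s0)
    # Placement values: h1[i] = fraction of controls scored below case i,
    # h0[j] = fraction of cases scored above control j (ties count 1/2).
    h1 = np.array([(np.sum(s > s0) + 0.5 * np.sum(s == s0)) / n0 for s in s1])
    h0 = np.array([(np.sum(s1 > s) + 0.5 * np.sum(s1 == s)) / n1 for s in s0])
    auc = h1.mean()
    var = h1.var(ddof=1) / n1 + h0.var(ddof=1) / n0
    return auc, var

rng = np.random.default_rng(3)
labels = rng.integers(0, 2, 500)
scores = labels + rng.normal(0, 1, 500)          # informative but noisy
auc, var = auc_with_ic_variance(scores, labels)
print(f"AUC = {auc:.3f} +/- {np.sqrt(var):.3f}")
```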
Tarone, Aaron M; Foran, David R
2011-01-01
Forensic entomologists use size and developmental stage to estimate blow fly age, and from those, a postmortem interval. Since such estimates are generally accurate but often lack precision, particularly in the older developmental stages, alternative aging methods would be advantageous. Presented here is a means of incorporating developmentally regulated gene expression levels into traditional stage and size data, with a goal of more precisely estimating developmental age of immature Lucilia sericata. Generalized additive models of development showed improved statistical support compared to models that did not include gene expression data, resulting in an increase in estimate precision, especially for postfeeding third instars and pupae. The models were then used to make blind estimates of development for 86 immature L. sericata raised on rat carcasses. Overall, inclusion of gene expression data resulted in increased precision in aging blow flies. © 2010 American Academy of Forensic Sciences.
Le Vu, Stéphane; Ratmann, Oliver; Delpech, Valerie; Brown, Alison E; Gill, O Noel; Tostevin, Anna; Fraser, Christophe; Volz, Erik M
2018-06-01
Phylogenetic clustering of HIV sequences from a random sample of patients can reveal epidemiological transmission patterns, but interpretation is hampered by limited theoretical support, and the statistical properties of clustering analysis remain poorly understood. Alternatively, source attribution methods allow fitting of HIV transmission models and thereby quantify aspects of disease transmission. A simulation study was conducted to assess error rates of clustering methods for detecting transmission risk factors. We modeled HIV epidemics among men who have sex with men and generated phylogenies comparable to those that can be obtained from HIV surveillance data in the UK. Clustering and source attribution approaches were applied to evaluate their ability to identify patient attributes as transmission risk factors. We find that commonly used methods show a misleading association between cluster size or odds of clustering and covariates that are correlated with time since infection, regardless of their influence on transmission. Clustering methods usually have higher error rates and lower sensitivity than the source attribution method for identifying transmission risk factors, but neither approach provides robust estimates of transmission risk ratios. The source attribution method can alleviate drawbacks of phylogenetic clustering, but formal population genetic modeling may be required to estimate quantitative transmission risk factors. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Adaptive Importance Sampling for Control and Inference
NASA Astrophysics Data System (ADS)
Kappen, H. J.; Ruiz, H. C.
2016-03-01
Path integral (PI) control problems are a restricted class of non-linear control problems that can be solved formally as a Feynman-Kac PI and can be estimated using Monte Carlo sampling. In this contribution we review PI control theory in the finite horizon case. We subsequently focus on the problem of how to compute and represent control solutions. We review the most commonly used methods in robotics and control. Within PI theory, the question of how to compute a solution becomes a question of importance sampling. Efficient importance samplers are state feedback controllers, and the use of these requires an efficient representation. Learning and representing effective state-feedback controllers for non-linear stochastic control problems is a very challenging, and largely unsolved, problem. We show how to learn and represent such controllers using ideas from the cross entropy method. We derive a gradient descent method that allows feedback controllers to be learned using an arbitrary parametrisation. We refer to this method as the path integral cross entropy method or PICE. We illustrate this method for some simple examples. The PI control methods can be used to estimate the posterior distribution in latent state models. In neuroscience these problems arise when estimating connectivity from neural recording data using EM. We demonstrate the PI control method as an accurate alternative to particle filtering.
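The cross-entropy machinery that PICE builds on can be conveyed with the generic cross-entropy method: sample controller parameters from a Gaussian, keep the lowest-cost elites, and refit. A toy sketch on a scalar stochastic regulator; this is the generic CEM, not the PICE update itself, and the toy dynamics and costs are invented:

```python
import numpy as np

def cross_entropy_method(cost, dim, n_samples=64, n_elite=8, iters=40, seed=0):
    """Generic cross-entropy method: sample parameters from a Gaussian,
    keep the lowest-cost elite fraction, refit mean and std, repeat."""
    rng = np.random.default_rng(seed)
    mu, sigma = np.zeros(dim), np.ones(dim)
    for _ in range(iters):
        theta = mu + sigma * rng.normal(size=(n_samples, dim))
        elite = theta[np.argsort([cost(th) for th in theta])[:n_elite]]
        mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    return mu

# Toy stochastic control: linear state feedback u = -k*x on noisy scalar
# dynamics x' = x + u, quadratic cost accumulated over a short horizon.
def rollout_cost(k, horizon=20, n_rollouts=16):
    rng = np.random.default_rng(0)   # common random numbers across candidates
    total = 0.0
    for _ in range(n_rollouts):
        x = 1.0
        for _ in range(horizon):
            u = -k[0] * x
            total += x**2 + 0.1 * u**2
            x = x + u + 0.1 * rng.normal()
    return total / n_rollouts

print("learned feedback gain:", cross_entropy_method(rollout_cost, dim=1))
```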
Harwell, Glenn R.
2012-01-01
Organizations responsible for the management of water resources, such as the U.S. Army Corps of Engineers (USACE), are tasked with estimation of evaporation for water-budgeting and planning purposes. The USACE has historically used Class A pan evaporation data (pan data) to estimate evaporation from reservoirs, but many USACE Districts have been experimenting with other techniques as an alternative to collecting pan data. The energy-budget method generally is considered the preferred method for accurate estimation of open-water evaporation from lakes and reservoirs. Complex equations to estimate evaporation, such as the Penman, DeBruin-Keijman, and Priestley-Taylor, perform well when compared with energy-budget method estimates when all of the important energy terms are included in the equations and ideal data are collected. However, sometimes nonideal data are collected and energy terms, such as the change in the amount of stored energy and advected energy, are not included in the equations. When this is done, the corresponding errors in evaporation estimates are not quantifiable. Much simpler methods, such as the Hamon method and a method developed by the U.S. Weather Bureau (USWB) (renamed the National Weather Service in 1970), have been shown to provide reasonable estimates of evaporation when compared to energy-budget method estimates. Data requirements for the Hamon and USWB methods are minimal, and the methods sometimes perform well with remotely collected data. The Hamon method requires average daily air temperature, and the USWB method requires daily averages of air temperature, relative humidity, wind speed, and solar radiation. Estimates of annual lake evaporation from pan data are frequently within 20 percent of energy-budget method estimates. Results of evaporation estimates from the Hamon method and the USWB method were compared against historical pan data at five selected reservoirs in Texas (Benbrook Lake, Canyon Lake, Granger Lake, Hords Creek Lake, and Sam Rayburn Lake) to evaluate their performance and to develop coefficients to minimize bias for the purpose of estimating reservoir evaporation with accuracies similar to estimates of evaporation obtained from pan data. The modified Hamon method estimates of reservoir evaporation were similar to estimates of reservoir evaporation from pan data for daily, monthly, and annual time periods. The modified Hamon method estimates of annual reservoir evaporation were always within 20 percent of annual reservoir evaporation from pan data. Unmodified and modified USWB method estimates of annual reservoir evaporation were within 20 percent of annual reservoir evaporation from pan data for about 91 percent of the years compared. Average daily differences between modified USWB method estimates and estimates from pan data as a percentage of the average amount of daily evaporation from pan data were within 20 percent for 98 percent of the months. Without any modification to the USWB method, average daily differences as a percentage of the average amount of daily evaporation from pan data were within 20 percent for 73 percent of the months. Use of the unmodified USWB method is appealing because it means estimates of average daily reservoir evaporation can be made from air temperature, relative humidity, wind speed, and solar radiation data collected from remote weather stations without the need to develop site-specific coefficients from historical pan data. Site-specific coefficients would need to be developed for the modified version of the Hamon method.
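One widely cited form of the Hamon method computes potential evaporation from daylight hours and the saturated vapour density at the mean daily air temperature. Published coefficient values vary across sources, so the constants below are an assumption, and the site-specific correction factor fitted in the report is left as a free parameter:

```python
import numpy as np

def hamon_evaporation(temp_c, daylight_hours, coeff=1.0):
    """Hamon potential evaporation (mm/day) in one widely cited form:
    E = 0.1651 * (daylight/12) * rho_sat * coeff, where rho_sat is the
    saturated vapour density (g/m^3) at the mean daily air temperature.
    `coeff` stands in for the site-specific bias-correction factor the
    report fits against pan data (its values are not reproduced here)."""
    e_sat = 6.108 * np.exp(17.27 * temp_c / (temp_c + 237.3))   # mb
    rho_sat = 216.7 * e_sat / (temp_c + 273.3)                  # g/m^3
    return 0.1651 * (daylight_hours / 12.0) * rho_sat * coeff

# A warm Texas summer day: 29 C mean temperature, 13.8 h of daylight.
print(f"{hamon_evaporation(29.0, 13.8):.2f} mm/day")   # about 5.5 mm/day
```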
Multivariate survivorship analysis using two cross-sectional samples.
Hill, M E
1999-11-01
As an alternative to survival analysis with longitudinal data, I introduce a method that can be applied when one observes the same cohort in two cross-sectional samples collected at different points in time. The method allows for the estimation of log-probability survivorship models that estimate the influence of multiple time-invariant factors on survival over a time interval separating two samples. This approach can be used whenever the survival process can be adequately conceptualized as an irreversible single-decrement process (e.g., mortality, the transition to first marriage among a cohort of never-married individuals). Using data from the Integrated Public Use Microdata Series (Ruggles and Sobek 1997), I illustrate the multivariate method through an investigation of the effects of race, parity, and educational attainment on the survival of older women in the United States.
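The mechanics can be sketched with invented cell counts: within covariate cells, survivorship over the interval is the ratio of the cohort's counts in the two cross-sections, and a log-probability model is fitted to those ratios. The paper fits such models by maximum likelihood on microdata; ordinary least squares on cell-level ratios is used here purely for illustration:

```python
import numpy as np

# Hypothetical cell counts for the same birth cohort of women in two
# cross-sectional samples taken ten years apart, tabulated by education.
cells = ["<HS", "HS", "college"]
n_t1 = np.array([5200, 4100, 1700])    # first cross-section
n_t2 = np.array([3900, 3280, 1445])    # second cross-section, same cohort

surv = n_t2 / n_t1                     # interval survivorship per cell
# Log-probability survivorship model: log S = b0 + b1 * education level.
X = np.column_stack([np.ones(3), np.arange(3)])
beta, *_ = np.linalg.lstsq(X, np.log(surv), rcond=None)
print(dict(zip(cells, surv.round(3))))
print("education effect on log-survival:", beta[1].round(4))
```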
Heinesen, Eskil; Kolodziejczyk, Christophe
2013-12-01
We estimate causal effects of breast and colorectal cancer on labour market outcomes 1-3 years after the diagnosis. Based on Danish administrative data, we estimate average treatment effects on the treated by propensity score weighting methods, using persons with no cancer diagnosis as the control group. We conduct robustness checks using matching, difference-in-differences methods and an alternative control group of later cancer patients. The different methods give approximately the same results. Cancer increases the risks of leaving the labour force and receiving disability pension, and the effects are larger for the less educated. Effects on income are small and mostly insignificant. We investigate some of the mechanisms which may be important in explaining the educational gradient in effects of cancer on labour market attachment. Copyright © 2013 Elsevier B.V. All rights reserved.
Rapid Estimation of TPH Reduction in Oil-Contaminated Soils Using the MED Method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Edenborn, H.M.; Zenone, V.A.
2007-09-01
Oil-contaminated soil and sludge generated during federal well plugging activities in northwestern Pennsylvania are currently remediated on small landfarm sites in lieu of more expensive landfill disposal. Bioremediation success at these sites in the past has been gauged by the decrease in total petroleum hydrocarbon (TPH) concentrations to less than 10,000 mg/kg measured using EPA Method 418.1. We tested the “molarity of ethanol droplet” (MED) water repellency test as a rapid indicator of TPH concentration in soil at one landfarm near Bradford, PA. MED was estimated by determining the minimum ethanol concentration (0–6 M) required to penetrate air-dried and sieved soil samples within 10 sec. TPH in soil was analyzed by rapid fluorometric analysis of methanol soil extracts, which correlated well with EPA Method 1664. Uncontaminated landfarm site soil amended with increasing concentrations of waste oil sludge showed a high correlation between MED and TPH. MED values exceeded the upper limit of 6 M as TPH estimates exceeded ca. 25,000 mg/kg. MED and TPH at the landfarm were sampled monthly during summer months over two years in a grid pattern that allowed spatial comparisons of site remediation effectiveness. MED and TPH decreased at a constant rate over time and remained highly correlated. Inexpensive alternatives to reagent-grade ethanol gave comparable results. The simple MED approach served as an inexpensive alternative to the routine laboratory analysis of TPH during the monitoring of oily waste bioremediation at this landfarm site.
Cross-validation and Peeling Strategies for Survival Bump Hunting using Recursive Peeling Methods
Dazard, Jean-Eudes; Choe, Michael; LeBlanc, Michael; Rao, J. Sunil
2015-01-01
We introduce a framework to build a survival/risk bump hunting model with a censored time-to-event response. Our Survival Bump Hunting (SBH) method is based on a recursive peeling procedure that uses a specific survival peeling criterion derived from non/semi-parametric statistics such as the hazard ratio, the log-rank test or the Nelson-Aalen estimator. To optimize the tuning parameter of the model and validate it, we introduce an objective function based on survival or prediction-error statistics, such as the log-rank test and the concordance error rate. We also describe two alternative cross-validation techniques adapted to the joint task of decision-rule making by recursive peeling and survival estimation. Numerical analyses show the importance of replicated cross-validation and the differences between criteria and techniques in both low- and high-dimensional settings. Although several non-parametric survival models exist, none addresses the problem of directly identifying local extrema. We show how SBH efficiently estimates extreme survival/risk subgroups unlike other models. This provides an insight into the behavior of commonly used models and suggests alternatives to be adopted in practice. Finally, our SBH framework was applied to a clinical dataset. In it, we identified subsets of patients characterized by clinical and demographic covariates with a distinct extreme survival outcome, for which tailored medical interventions could be made. An R package PRIMsrc (Patient Rule Induction Method in Survival, Regression and Classification settings) is available on CRAN (Comprehensive R Archive Network) and GitHub. PMID:27034730
Face pose tracking using the four-point algorithm
NASA Astrophysics Data System (ADS)
Fung, Ho Yin; Wong, Kin Hong; Yu, Ying Kin; Tsui, Kwan Pang; Kam, Ho Chuen
2017-06-01
In this paper, we have developed an algorithm to track the pose of a human face robustly and efficiently. Face pose estimation is very useful in many applications such as building virtual reality systems and creating an alternative input method for the disabled. Firstly, we have modified a face detection toolbox called DLib for the detection of a face in front of a camera. The detected face features are passed to a pose estimation method, known as the four-point algorithm, for pose computation. The theory applied and the technical problems encountered during system development are discussed in the paper. It is demonstrated that the system is able to track the pose of a face in real time using a consumer grade laptop computer.
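As a stand-in for the paper's four-point algorithm, the same pose-from-four-correspondences problem can be posed to OpenCV's P3P solver, which takes exactly four points. The landmark coordinates, camera intrinsics, and ground-truth pose below are invented for illustration:

```python
import numpy as np
import cv2

# Four 3-D facial landmarks in model coordinates (metres); values are
# illustrative stand-ins for the landmarks produced by the DLib detector.
object_pts = np.array([[0.0, 0.0, 0.0],        # nose tip
                       [-0.03, 0.04, -0.02],   # left eye corner
                       [0.03, 0.04, -0.02],    # right eye corner
                       [0.0, -0.05, -0.01]])   # chin

K = np.array([[800.0, 0.0, 320.0],             # assumed pinhole intrinsics
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

# Synthesize the 2-D detections by projecting through a known ground-truth
# pose, then recover that pose with a four-point PnP solve.
rvec_true = np.array([0.1, -0.2, 0.05])
tvec_true = np.array([0.01, 0.02, 0.5])
image_pts, _ = cv2.projectPoints(object_pts, rvec_true, tvec_true, K, None)

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None,
                              flags=cv2.SOLVEPNP_P3P)
R, _ = cv2.Rodrigues(rvec)                     # rotation matrix = head pose
print(ok, rvec.ravel(), tvec.ravel())          # should match the true pose
```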
A non-linear regression method for CT brain perfusion analysis
NASA Astrophysics Data System (ADS)
Bennink, E.; Oosterbroek, J.; Viergever, M. A.; Velthuis, B. K.; de Jong, H. W. A. M.
2015-03-01
CT perfusion (CTP) imaging allows for rapid diagnosis of ischemic stroke. Generation of perfusion maps from CTP data usually involves deconvolution algorithms providing estimates for the impulse response function in the tissue. We propose the use of a fast non-linear regression (NLR) method that we postulate has similar performance to the current academic state-of-the-art method (bSVD), but that has some important advantages, including the estimation of vascular permeability, improved robustness to tracer delay, and very few tuning parameters, all of which are important in stroke assessment. The aim of this study is to evaluate the fast NLR method against bSVD and a commercial clinical state-of-the-art method. The three methods were tested against a published digital perfusion phantom earlier used to illustrate the superiority of bSVD. In addition, the NLR and clinical methods were also tested against bSVD on 20 clinical scans. Pearson correlation coefficients were calculated for each of the tested methods. All three methods showed high correlation coefficients (>0.9) with the ground truth in the phantom. With respect to the clinical scans, the NLR perfusion maps showed higher correlation with bSVD than the perfusion maps from the clinical method. Furthermore, the perfusion maps showed that the fast NLR estimates are robust to tracer delay. In conclusion, the proposed fast NLR method provides a simple and flexible way of estimating perfusion parameters from CT perfusion scans, with high correlation coefficients. This suggests that it could be a better alternative to the current clinical and academic state-of-the-art methods.
Estimation of under-reporting in epidemics using approximations.
Gamado, Kokouvi; Streftaris, George; Zachary, Stan
2017-06-01
Under-reporting in epidemics, when it is ignored, leads to under-estimation of the infection rate and therefore of the reproduction number. In the case of stochastic models with temporal data, a usual approach for dealing with such issues is to apply data augmentation techniques through Bayesian methodology. Departing from earlier literature approaches implemented using reversible jump Markov chain Monte Carlo (RJMCMC) techniques, we make use of approximations to obtain faster estimation with simple MCMC. Comparisons among the methods developed here, and with the RJMCMC approach, are carried out and highlight that approximation-based methodology offers useful alternative inference tools for large epidemics, with a good trade-off between time cost and accuracy.
Estimating vegetative biomass from LANDSAT-1 imagery for range management
NASA Technical Reports Server (NTRS)
Seevers, P. M.; Drew, J. V.; Carlson, M. P.
1975-01-01
Evaluation of LANDSAT-1 band 5 data for use in estimating vegetative biomass for range management decisions was carried out for five selected range sites in the Sandhills region of Nebraska. Analysis of sets of optical density-vegetative biomass data indicated that comparisons of biomass estimation could be made within one frame but not between frames without correction factors. There was high correlation among sites within sets of radiance value-vegetative biomass data and also between sets, indicating comparisons of biomass could be made within and between frames. LANDSAT-1 data are shown to be a viable alternative to currently used methods of determining vegetative biomass production and stocking rate recommendations for Sandhills rangeland.
Parameter and uncertainty estimation for mechanistic, spatially explicit epidemiological models
NASA Astrophysics Data System (ADS)
Finger, Flavio; Schaefli, Bettina; Bertuzzo, Enrico; Mari, Lorenzo; Rinaldo, Andrea
2014-05-01
Epidemiological models can be a crucially important tool for decision-making during disease outbreaks. The range of possible applications spans from real-time forecasting and allocation of health-care resources to testing alternative intervention mechanisms such as vaccines, antibiotics or the improvement of sanitary conditions. Our spatially explicit, mechanistic models for cholera epidemics have been successfully applied to several epidemics, including the one that struck Haiti in late 2010 and is still ongoing. Calibration and parameter estimation of such models represents a major challenge because of properties unusual in traditional geoscientific domains such as hydrology. Firstly, the epidemiological data available might be subject to high uncertainties due to error-prone diagnosis as well as manual (and possibly incomplete) data collection. Secondly, long-term time-series of epidemiological data are often unavailable. Finally, the spatially explicit character of the models requires the comparison of several time-series of model outputs with their real-world counterparts, which calls for an appropriate weighting scheme. It follows that the usual assumption of a homoscedastic Gaussian error distribution, used in combination with classical calibration techniques based on Markov chain Monte Carlo algorithms, is likely to be violated, whereas the construction of an appropriate formal likelihood function seems close to impossible. Alternative calibration methods, which allow for accurate estimation of total model uncertainty, particularly regarding the envisaged use of the models for decision-making, are thus needed. Here we present the most recent developments regarding methods for parameter and uncertainty estimation to be used with our mechanistic, spatially explicit models for cholera epidemics, based on informal measures of goodness of fit.
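One such informal-likelihood route is approximate Bayesian computation: draw parameters from the prior, simulate the epidemic model, and keep draws whose simulated output lands close to the observations under some summary distance. A toy sketch with a small stochastic SIR simulator; the summaries, tolerances, and parameter values are illustrative choices, not the authors':

```python
import numpy as np

def simulate_outbreak(beta, gamma, n_days=60, n_pop=10000, i0=5, rng=None):
    """Minimal stochastic SIR simulator returning daily new-case counts."""
    rng = rng or np.random.default_rng()
    S, I = n_pop - i0, i0
    cases = []
    for _ in range(n_days):
        new_inf = rng.binomial(S, 1 - np.exp(-beta * I / n_pop))
        new_rec = rng.binomial(I, 1 - np.exp(-gamma))
        S, I = S - new_inf, I + new_inf - new_rec
        cases.append(new_inf)
    return np.array(cases)

# "Observed" epidemic with known parameters, standing in for field data.
observed = simulate_outbreak(0.45, 0.20, rng=np.random.default_rng(1))

# ABC rejection: keep prior draws whose simulated epidemics match the data
# on two crude summaries (final size and peak day); no likelihood needed.
rng = np.random.default_rng(2)
accepted = []
for _ in range(5000):
    beta, gamma = rng.uniform(0.1, 1.0), rng.uniform(0.05, 0.5)
    sim = simulate_outbreak(beta, gamma, rng=rng)
    if (abs(sim.sum() - observed.sum()) < 0.05 * observed.sum()
            and abs(int(sim.argmax()) - int(observed.argmax())) <= 3):
        accepted.append((beta, gamma))
accepted = np.array(accepted)
print(len(accepted), "accepted; posterior mean (beta, gamma):",
      accepted.mean(axis=0) if len(accepted) else "none")
```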
Gowda, Charitha; Dong, Shiming; Potter, Rachel C.; Dombkowski, Kevin J.; Stokley, Shannon
2013-01-01
Objective Immunization information systems (IISs) are valuable surveillance tools; however, population relocation may introduce bias when determining immunization coverage. We explored alternative methods for estimating the vaccine-eligible population when calculating adolescent immunization levels using a statewide IIS. Methods We performed a retrospective analysis of the Michigan State Care Improvement Registry (MCIR) for all adolescents aged 11–18 years registered in the MCIR as of October 2010. We explored four methods for determining denominators: (1) including all adolescents with MCIR records, (2) excluding adolescents with out-of-state residence, (3) further excluding those without MCIR activity ≥10 years prior to the evaluation date, and (4) using a denominator based on U.S. Census data. We estimated state- and county-specific coverage levels for four adolescent vaccines. Results We found a 20% difference in estimated vaccination coverage between the most inclusive and restrictive denominator populations. Although there was some variability among the four methods in vaccination at the state level (2%–11%), greater variation occurred at the county level (up to 21%). This variation was substantial enough to potentially impact public health assessments of immunization programs. Generally, vaccines with higher coverage levels had greater absolute variation, as did counties with smaller populations. Conclusion At the county level, using the four denominator calculation methods resulted in substantial differences in estimated adolescent immunization rates that were less apparent when aggregated at the state level. Further research is needed to ascertain the most appropriate method for estimating vaccine coverage levels using IIS data. PMID:24179260
Teramoto, Reiji; Saito, Chiaki; Funahashi, Shin-ichi
2014-06-30
Knockdown or overexpression of genes is widely used to identify genes that play important roles in many aspects of cellular functions and phenotypes. Because next-generation sequencing generates high-throughput data that allow us to detect genes, it is important to identify genes that drive functional and phenotypic changes of cells. However, conventional methods rely heavily on the assumption of normality and they often give incorrect results when the assumption is not true. To relax the Gaussian assumption in causal inference, we introduce the non-paranormal method to test conditional independence in the PC-algorithm. Then, we present the non-paranormal intervention-calculus when the directed acyclic graph (DAG) is absent (NPN-IDA), which incorporates the cumulative nature of effects through a cascaded pathway via causal inference for ranking causal genes against a phenotype with the non-paranormal method for estimating DAGs. We demonstrate that causal inference with the non-paranormal method significantly improves the performance in estimating DAGs on synthetic data in comparison with the original PC-algorithm. Moreover, we show that NPN-IDA outperforms the conventional methods in exploring regulators of the flowering time in Arabidopsis thaliana and regulators that control the browning of white adipocytes in mice. Our results show that performance improvement in estimating DAGs contributes to an accurate estimation of causal effects. Although the simplest alternative procedure was used, our proposed method enables us to design efficient intervention experiments and can be applied to a wide range of research purposes, including drug discovery, because of its generality.
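The non-paranormal trick itself is brief: push each variable through its empirical ranks onto Gaussian quantiles, then run the usual partial-correlation (Fisher-z) conditional-independence test that the PC-algorithm queries. A sketch on a toy chain with a non-Gaussian marginal:

```python
import numpy as np
from scipy.stats import norm, rankdata

def npn_transform(X):
    """Non-paranormal (Gaussian copula) transform: map each column through
    its empirical ranks onto standard normal quantiles."""
    n = X.shape[0]
    return norm.ppf(rankdata(X, axis=0) / (n + 1))

def ci_test(X, i, j, cond, alpha=0.05):
    """Fisher-z test of X_i independent of X_j given X_cond on transformed
    data, the conditional-independence oracle inside the PC-algorithm.
    Returns True when independence is NOT rejected."""
    Z = npn_transform(X)
    idx = [i, j] + list(cond)
    P = np.linalg.inv(np.corrcoef(Z[:, idx], rowvar=False))
    r = -P[0, 1] / np.sqrt(P[0, 0] * P[1, 1])        # partial correlation
    z = 0.5 * np.log((1 + r) / (1 - r)) * np.sqrt(len(X) - len(cond) - 3)
    return 2 * (1 - norm.cdf(abs(z))) > alpha

# Chain x -> y -> w with a monotone non-Gaussian marginal on y.
rng = np.random.default_rng(4)
x = rng.normal(size=2000)
y = np.exp(x + 0.5 * rng.normal(size=2000))          # non-Gaussian
w = np.log(y) + 0.5 * rng.normal(size=2000)
X = np.column_stack([x, y, w])
print(ci_test(X, 0, 2, cond=[1]))   # True: x independent of w given y
print(ci_test(X, 0, 2, cond=[]))    # False: x and w are dependent
```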
Analysing malaria drug trials on a per-individual or per-clone basis: a comparison of methods.
Jaki, Thomas; Parry, Alice; Winter, Katherine; Hastings, Ian
2013-07-30
There are a variety of methods used to estimate the effectiveness of antimalarial drugs in clinical trials, invariably on a per-person basis. A person, however, may have more than one malaria infection present at the time of treatment. We evaluate currently used methods for analysing malaria trials on a per-individual basis and introduce a novel method to estimate the cure rate on a per-infection (clone) basis. We used simulated and real data to highlight the differences among the various methods. We give special attention to classifying outcomes as cured, recrudescent (infections that never fully cleared) or ambiguous on the basis of genetic markers at three loci. To estimate cure rates on a per-clone basis, we used the genetic information within an individual before treatment to determine the number of clones present. We used the genetic information obtained at the time of treatment failure to classify clones as recrudescences or new infections. On the per-individual level, we find that the most accurate methods of classification label an individual as newly infected if all alleles are different at the beginning and at the time of failure, and as a recrudescence if all or some alleles were the same. The most appropriate analysis method is survival analysis, or alternatively, for complete data/per-protocol analysis, a proportion estimate that treats new infections as successes. We show that the analysis of drug effectiveness on a per-clone basis estimates the cure rate accurately and allows more detailed evaluation of the performance of the treatment. Copyright © 2012 John Wiley & Sons, Ltd.
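The per-individual classification rule described above is easy to state in code: compare the allele sets at each locus before treatment and at failure. A simplified sketch; real analyses must also handle missing loci and genotyping error:

```python
def classify_failure(pre, post):
    """Classify a treatment failure from allele sets observed at each locus
    before treatment and at failure: new infection if all alleles differ at
    every locus, recrudescence if some or all alleles persist at every
    locus, ambiguous otherwise."""
    per_locus = []
    for before, after in zip(pre, post):
        shared = set(before) & set(after)
        per_locus.append("recrudescence" if shared else "new")
    if all(c == "new" for c in per_locus):
        return "new infection"
    if all(c == "recrudescence" for c in per_locus):
        return "recrudescence"
    return "ambiguous"

# Alleles at three loci (e.g. msp1, msp2, glurp) before/after treatment.
pre  = [{"A", "B"}, {"K"}, {"X", "Y"}]
post = [{"B"},      {"K"}, {"Y"}]
print(classify_failure(pre, post))                    # recrudescence
print(classify_failure(pre, [{"C"}, {"L"}, {"Z"}]))   # new infection
```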
Global civil aviation black carbon emissions.
Stettler, Marc E J; Boies, Adam M; Petzold, Andreas; Barrett, Steven R H
2013-09-17
Aircraft black carbon (BC) emissions contribute to climate forcing, but few estimates of BC emitted by aircraft at cruise exist. For the majority of aircraft engines the only BC-related measurement available is smoke number (SN), a filter-based optical method designed to measure near-ground plume visibility, not mass. While the first order approximation (FOA3) technique has been developed to estimate BC mass emissions normalized by fuel burn [EI(BC)] from SN, it is shown that it underestimates EI(BC) by >90% in 35% of directly measured cases (R² = −0.10). As there are no plans to measure BC emissions from all existing certified engines, which will be in service for several decades, it is necessary to estimate EI(BC) for existing aircraft on the ground and at cruise. An alternative method, called FOX, that is independent of the SN is developed to estimate BC emissions. Estimates of EI(BC) at ground level are significantly improved (R² = 0.68), whereas estimates at cruise are within 30% of measurements. Implementing this approach for global civil aviation, estimated aircraft BC emissions are revised upward by a factor of ~3. Direct radiative forcing (RF) due to aviation BC emissions is estimated to be ~9.5 mW/m², equivalent to ~1/3 of the current RF due to aviation CO2 emissions.
Dealing with gene expression missing data.
Brás, L P; Menezes, J C
2006-05-01
A comparative evaluation of different methods is presented for estimating missing values in microarray data: weighted K-nearest neighbours imputation (KNNimpute), regression-based methods such as local least squares imputation (LLSimpute) and partial least squares imputation (PLSimpute), and Bayesian principal component analysis (BPCA). The influence on prediction accuracy of several factors, such as the methods' parameters, the type of data relationships used in the estimation process (i.e. row-wise, column-wise or both), the missing rate and pattern, and the type of experiment [time series (TS), non-time series (NTS) or mixed (MIX) experiments], is elucidated. Improvements based on the iterative use of data (iterative LLS and PLS imputation, ILLSimpute and IPLSimpute), the need to perform initial imputations (modified PLS and Helland PLS imputation, MPLSimpute and HPLSimpute) and the type of relationships employed (KNNarray, LLSarray, HPLSarray and alternating PLS, APLSimpute) are proposed. Overall, it is shown that data set properties (type of experiment, missing rate and pattern) affect the data similarity structure, therefore influencing the methods' performance. LLSimpute and ILLSimpute are preferable in the presence of data with a stronger similarity structure (TS and MIX experiments), whereas PLS-based methods (MPLSimpute, IPLSimpute and APLSimpute) are preferable when estimating NTS missing data.
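For concreteness, the row-wise KNN imputation that KNNimpute popularized can be sketched directly: for each row with missing entries, find the k most similar rows on the mutually observed columns and average their values at the missing positions. This is a simplified, unweighted variant:

```python
import numpy as np

def knn_impute(X, k=5):
    """Row-wise KNN imputation in the spirit of KNNimpute: fill each missing
    entry with the mean of that column over the k most similar rows, where
    similarity is squared distance on the mutually observed columns."""
    X = X.copy()
    for i in range(len(X)):
        miss = np.isnan(X[i])
        if not miss.any():
            continue
        dists = []
        for j in range(len(X)):
            shared = ~np.isnan(X[i]) & ~np.isnan(X[j])
            if j != i and shared.any():
                d = np.mean((X[i, shared] - X[j, shared]) ** 2)
                dists.append((d, j))
        neighbours = [j for _, j in sorted(dists)[:k]]
        for c in np.where(miss)[0]:
            X[i, c] = np.nanmean(X[neighbours, c])
    return X

# Toy expression matrix (genes x arrays) with a couple of values knocked out.
rng = np.random.default_rng(5)
profile = rng.normal(size=(1, 8))
data = profile + 0.3 * rng.normal(size=(40, 8))   # 40 correlated gene rows
data[3, 2] = data[10, 5] = np.nan
print(knn_impute(data)[3, 2], "vs true approx.", profile[0, 2])
```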
NASA Astrophysics Data System (ADS)
Smoczek, Jaroslaw
2015-10-01
The paper deals with the problem of reducing residual vibration and limiting transient oscillations of a flexible and underactuated system with respect to the variation of operating conditions. A comparative study of generalized predictive control (GPC) and a fuzzy scheduling scheme developed based on the P1-TS fuzzy theory, a local pole placement method and interval analysis of closed-loop system polynomial coefficients is addressed to the problem of flexible crane control. Two alternative GPC-based methods are proposed that enable this technique to be realized either with or without a payload deflection sensor. The first control technique is based on the recursive least squares (RLS) method applied to estimate on-line the parameters of a linear parameter varying (LPV) model of the crane dynamic system. The second GPC-based approach relies on payload deflection feedback estimated using a pendulum model with parameters interpolated by the P1-TS fuzzy system. The feasibility and applicability of the developed methods were confirmed through experimental verification performed on a laboratory-scale overhead crane.
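The RLS estimator used in the first variant has a standard three-line update. A sketch identifying a toy second-order ARX model on-line; the crane's actual LPV structure and data are not reproduced:

```python
import numpy as np

def rls_step(theta, P, phi, y, lam=0.98):
    """One recursive-least-squares update with forgetting factor `lam`,
    of the kind used to track the parameters of an LPV model on-line."""
    k = P @ phi / (lam + phi @ P @ phi)       # gain vector
    theta = theta + k * (y - phi @ theta)     # parameter update
    P = (P - np.outer(k, phi @ P)) / lam      # covariance update
    return theta, P

# Identify a toy ARX model y[t] = a1*y[t-1] + a2*y[t-2] + b*u[t-1].
rng = np.random.default_rng(6)
true = np.array([1.5, -0.7, 0.5])
y, u = [0.0, 0.0], rng.normal(size=300)
theta, P = np.zeros(3), 1000.0 * np.eye(3)
for t in range(2, 300):
    phi = np.array([y[t - 1], y[t - 2], u[t - 1]])
    y.append(true @ phi + 0.01 * rng.normal())
    theta, P = rls_step(theta, P, phi, y[t])
print(theta)   # approaches [1.5, -0.7, 0.5]
```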
Shear wave elastography using Wigner-Ville distribution: a simulated multilayer media study.
Bidari, Pooya Sobhe; Alirezaie, Javad; Tavakkoli, Jahan
2016-08-01
Shear Wave Elastography (SWE) is a quantitative ultrasound-based imaging modality for distinguishing normal and abnormal tissue types by estimating the local viscoelastic properties of the tissue. These properties have been estimated in many studies by propagating an ultrasound shear wave within the tissue and estimating parameters such as the speed of the wave. The vast majority of the proposed techniques are based on the cross-correlation of consecutive ultrasound images. In this study, we propose a new method of wave detection based on time-frequency (TF) analysis of the ultrasound signal. The proposed method is a modified version of the Wigner-Ville Distribution (WVD) technique. The TF components of the wave are detected in a propagating ultrasound wave within a simulated multilayer tissue, and the local properties are estimated based on the detected waves. Image processing techniques such as Alternative Sequential Filters (ASF) and the Circular Hough Transform (CHT) have been utilized to improve the estimation of TF components. This method has been applied to simulated data from the Wave3000™ software (CyberLogic Inc., New York, NY). The data simulate the propagation of an acoustic radiation force impulse within a two-layer tissue with slightly different viscoelastic properties between the layers. By analyzing the local TF components of the wave, we estimate the longitudinal and shear elasticities and viscosities of the media. This work shows that our proposed method is capable of distinguishing between different layers of a tissue.
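The discrete Wigner-Ville distribution underlying the method is itself short: for every time instant, Fourier-transform the instantaneous autocorrelation of the analytic signal. A sketch on a linear chirp; note the factor-of-two frequency scaling inherent to the discrete WVD, and the paper's ASF and CHT post-processing are omitted:

```python
import numpy as np
from scipy.signal import hilbert

def wigner_ville(x):
    """Discrete pseudo Wigner-Ville distribution of a real signal: for each
    time instant, FFT the instantaneous autocorrelation z[n+m]z*[n-m]."""
    z = hilbert(x)                  # analytic signal suppresses cross-terms
    N = len(z)
    W = np.zeros((N, N))
    for n in range(N):
        m_max = min(n, N - 1 - n)   # largest symmetric lag at this instant
        r = np.zeros(N, dtype=complex)
        for m in range(-m_max, m_max + 1):
            r[m % N] = z[n + m] * np.conj(z[n - m])
        W[:, n] = np.real(np.fft.fft(r))
    return W                        # rows: frequency bins, cols: time

# Linear chirp: the WVD ridge tracks the instantaneous frequency.
fs = 1000
t = np.arange(0, 0.25, 1 / fs)
x = np.cos(2 * np.pi * (50 * t + 200 * t**2))
W = wigner_ville(x)
ridge_bins = W.argmax(axis=0)[20:-20]    # skip edges with short windows
print(ridge_bins[:5], ridge_bins[-5:])   # bin index rises with time
```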
Single neuron modeling and data assimilation in BNST neurons
NASA Astrophysics Data System (ADS)
Farsian, Reza
Neurons, although tiny in size, are vastly complicated systems, which are responsible for the most basic yet essential functions of any nervous system. Even the simplest models of single neurons are usually high dimensional, nonlinear, and contain many parameters and states which are unobservable in a typical neurophysiological experiment. One of the most fundamental problems in experimental neurophysiology is the estimation of these parameters and states, since knowing their values is essential in identification, model construction, and forward prediction of biological neurons. Common methods of parameter and state estimation do not perform well for neural models due to their high dimensionality and nonlinearity. In this dissertation, two alternative approaches for parameter and state estimation of biological neurons have been demonstrated: dynamical parameter estimation (DPE) and a Markov Chain Monte Carlo (MCMC) method. The first method uses elements of chaos control and synchronization theory for parameter and state estimation. MCMC is a statistical approach which uses a path integral formulation to evaluate a mean and an error bound for these unobserved parameters and states. Both methods have been applied to biological neurons in the bed nucleus of the stria terminalis (BNST) of rats. The states and parameters of the neurons were estimated, and their values were used to recreate a realistic model and successfully predict the behavior of the neurons. The knowledge of biological parameters can ultimately provide a better understanding of the internal dynamics of a neuron in order to build robust models of neuron networks.
Tang, Ming; Liao, Huchang; Li, Zongmin; Xu, Zeshui
2018-04-13
Because the natural disaster system is a very comprehensive and large system, disaster reduction schemes must rely on risk analysis. Experts' knowledge and experience play a critical role in disaster risk assessment. The hesitant fuzzy linguistic preference relation is an effective tool to express experts' preference information when comparing pairwise alternatives. Owing to a lack of knowledge or a heavy workload, information may be missing in the hesitant fuzzy linguistic preference relation, resulting in an incomplete hesitant fuzzy linguistic preference relation. In this paper, we first discuss some properties of the additive consistent hesitant fuzzy linguistic preference relation. Next, the incomplete hesitant fuzzy linguistic preference relation, the normalized hesitant fuzzy linguistic preference relation, and the acceptable hesitant fuzzy linguistic preference relation are defined. Afterwards, three procedures to estimate the missing information are proposed. The first deals with the situation in which there are only n-1 known judgments involving all the alternatives; the second is used to estimate the missing information of the hesitant fuzzy linguistic preference relation with more known judgments; and the third is used to deal with ignorance situations in which there is at least one alternative with totally missing information. Furthermore, an algorithm for group decision making with incomplete hesitant fuzzy linguistic preference relations is given. Finally, we illustrate our model with a case study about flood disaster risk evaluation. A comparative analysis is presented to testify to the advantage of our method.
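The additive-consistency estimation step can be illustrated in the simpler numeric (non-hesitant, non-linguistic) setting, where consistency means p_ik = p_ij + p_jk - 0.5 and a missing entry is estimated by averaging over all intermediate alternatives with known judgments. The entries below are invented, and already-filled values are reused as the loop proceeds, which a careful implementation would control:

```python
import numpy as np

def complete_preference_relation(P):
    """Fill missing entries (NaN) of a numeric fuzzy preference relation
    using additive consistency: p_ik is estimated as the average of
    p_ij + p_jk - 0.5 over all intermediates j with both judgments known."""
    P = P.copy()
    n = P.shape[0]
    for i in range(n):
        for k in range(n):
            if np.isnan(P[i, k]):
                ests = [P[i, j] + P[j, k] - 0.5 for j in range(n)
                        if not (np.isnan(P[i, j]) or np.isnan(P[j, k]))]
                if ests:
                    P[i, k] = np.clip(np.mean(ests), 0.0, 1.0)
    return P

# Four alternatives; p_ij is the degree to which i is preferred to j.
nan = np.nan
P = np.array([[0.5, 0.7, nan, 0.8],
              [0.3, 0.5, 0.6, nan],
              [nan, 0.4, 0.5, 0.7],
              [0.2, nan, 0.3, 0.5]])
print(complete_preference_relation(P).round(2))
```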
Cortés, Camilo; de Los Reyes-Guzmán, Ana; Scorza, Davide; Bertelsen, Álvaro; Carrasco, Eduardo; Gil-Agudo, Ángel; Ruiz-Salguero, Oscar; Flórez, Julián
2016-01-01
Robot-Assisted Rehabilitation (RAR) is relevant for treating patients affected by nervous system injuries (e.g., stroke and spinal cord injury). The accurate estimation of the joint angles of the patient limbs in RAR is critical to assess patient improvement. The economical, prevalent method to estimate patient posture in Exoskeleton-based RAR is to approximate the limb joint angles with those of the Exoskeleton. This approximation is rough, since their kinematic structures differ. Motion capture systems (MOCAPs) can improve the estimations, at the expense of considerably complicating the therapy setup. Alternatively, the Extended Inverse Kinematics Posture Estimation (EIKPE) computational method models the limb and Exoskeleton as differing parallel kinematic chains. EIKPE has been tested with single-DOF movements of the wrist and elbow joints. This paper presents the assessment of EIKPE with elbow-shoulder compound movements (i.e., object prehension). Ground truth for the assessment is obtained from an optical MOCAP (not intended for the treatment stage). The assessment shows EIKPE rendering a good numerical approximation of the actual posture during compound movement execution, especially for the shoulder joint angles. This work opens the horizon for clinical studies with patient groups, Exoskeleton models, and movement types.
Line-Constrained Camera Location Estimation in Multi-Image Stereomatching.
Donné, Simon; Goossens, Bart; Philips, Wilfried
2017-08-23
Stereomatching is an effective way of acquiring dense depth information from a scene when active measurements are not possible. So-called lightfield methods take a snapshot from many camera locations along a defined trajectory (usually uniformly linear or on a regular grid; we will assume a linear trajectory) and use this information to compute accurate depth estimates. However, they require the locations of each of the snapshots to be known: the disparity of an object between images is related both to the distance of the camera to the object and to the distance between the camera positions for the two images. Existing solutions use sparse feature matching for camera location estimation. In this paper, we propose a novel method that uses dense correspondences to do the same, leveraging an existing depth estimation framework to also yield the camera locations along the line. We illustrate the effectiveness of the proposed technique for camera location estimation both visually, for the rectification of epipolar plane images, and quantitatively, with its effect on the resulting depth estimation. Our proposed approach yields a valid alternative for sparse techniques, while still being executed in a reasonable time on a graphics card due to its highly parallelizable nature.