Warren B. Cohen; Hans-Erik Andersen; Sean P. Healey; Gretchen G. Moisen; Todd A. Schroeder; Christopher W. Woodall; Grant M. Domke; Zhiqiang Yang; Robert E. Kennedy; Stephen V. Stehman; Curtis Woodcock; Jim Vogelmann; Zhe Zhu; Chengquan Huang
2015-01-01
We are developing a system that provides temporally consistent biomass estimates for national greenhouse gas inventory reporting to the United Nations Framework Convention on Climate Change. Our model-assisted estimation framework relies on remote sensing to scale from plot measurements to lidar strip samples, to Landsat time series-based maps. As a demonstration, new...
Multilevel Modeling with Correlated Effects
ERIC Educational Resources Information Center
Kim, Jee-Seon; Frees, Edward W.
2007-01-01
When there exist omitted effects, measurement error, and/or simultaneity in multilevel models, explanatory variables may be correlated with random components, and standard estimation methods do not provide consistent estimates of model parameters. This paper introduces estimators that are consistent under such conditions. By employing generalized…
A Sparse Matrix Approach for Simultaneous Quantification of Nystagmus and Saccade
NASA Technical Reports Server (NTRS)
Kukreja, Sunil L.; Stone, Lee; Boyle, Richard D.
2012-01-01
The vestibulo-ocular reflex (VOR) consists of two intermingled non-linear subsystems; namely, nystagmus and saccade. Typically, nystagmus is analysed using a single sufficiently long signal or a concatenation of them. Saccade information is not analysed and discarded due to insufficient data length to provide consistent and minimum variance estimates. This paper presents a novel sparse matrix approach to system identification of the VOR. It allows for the simultaneous estimation of both nystagmus and saccade signals. We show via simulation of the VOR that our technique provides consistent and unbiased estimates in the presence of output additive noise.
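As a hedged illustration of the simultaneous-estimation idea above (not the authors' code), the sketch below stacks segment-specific FIR regressors into one sparse design matrix and solves a single least-squares problem; the segmentation, model orders, and all names are assumptions for the demo.

```python
# Sketch: simultaneous least-squares identification of two interleaved
# subsystems (e.g., slow-phase "nystagmus" vs. fast-phase "saccade" segments)
# by stacking segment-specific regressors into one sparse design matrix.
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(0)
n, p = 2000, 5                      # samples, FIR order per subsystem
u = rng.standard_normal(n)          # input (e.g., head velocity)
is_saccade = rng.random(n) < 0.2    # hypothetical segment labels

# Sparse design: each row uses the regressor block of its own subsystem only.
X = lil_matrix((n, 2 * p))
for t in range(p, n):
    lag = u[t - p:t][::-1]
    block = slice(p, 2 * p) if is_saccade[t] else slice(0, p)
    X[t, block] = lag

theta_true = np.concatenate([np.linspace(1.0, 0.2, p), np.linspace(-0.5, -0.1, p)])
y = X @ theta_true + 0.05 * rng.standard_normal(n)  # output + additive noise

theta_hat = lsqr(X.tocsr(), y)[0]   # one joint solve for both subsystems
print(np.round(theta_hat - theta_true, 3))          # small residual errors
```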
Chen, Xiaohong; Fan, Yanqin; Pouzo, Demian; Ying, Zhiliang
2013-01-01
We study estimation and model selection of semiparametric models of multivariate survival functions for censored data, which are characterized by possibly misspecified parametric copulas and nonparametric marginal survivals. We obtain the consistency and root-n asymptotic normality of a two-step copula estimator to the pseudo-true copula parameter value according to KLIC, and provide a simple consistent estimator of its asymptotic variance, allowing for a first-step nonparametric estimation of the marginal survivals. We establish the asymptotic distribution of the penalized pseudo-likelihood ratio statistic for comparing multiple semiparametric multivariate survival functions subject to copula misspecification and general censorship. An empirical application is provided. PMID:24790286
Effects of control inputs on the estimation of stability and control parameters of a light airplane
NASA Technical Reports Server (NTRS)
Cannaday, R. L.; Suit, W. T.
1977-01-01
The maximum likelihood parameter estimation technique was used to determine the values of stability and control derivatives from flight test data for a low-wing, single-engine, light airplane. Several input forms were used during the tests to investigate the consistency of parameter estimates as it relates to inputs. These consistencies were compared by using the ensemble variance and estimated Cramer-Rao lower bound. In addition, the relationship between inputs and parameter correlations was investigated. Results from the stabilator inputs are inconclusive, but the sequence of rudder input followed by aileron input, or aileron followed by rudder, gave more consistent estimates than did rudder or ailerons individually. Also, square-wave inputs appeared to provide slightly improved consistency in the parameter estimates when compared to sine-wave inputs.
The Benefits of Air and Water Pollution Control: A Review and Synthesis of Recent Estimates (1979)
Report provides a survey and critical review of the existing literature (as of the late 1970s) giving estimates of national benefits or damages, adopting a common framework to provide consistent estimates of air and water pollution benefits.
High dimensional linear regression models under long memory dependence and measurement error
NASA Astrophysics Data System (ADS)
Kaul, Abhishek
This dissertation consists of three chapters. The first chapter introduces the models under consideration and motivates the problems of interest; a brief literature review is also provided. The second chapter investigates the properties of Lasso under long range dependent model errors. Lasso is a computationally efficient approach to model selection and estimation, and its properties are well studied when the regression errors are independent and identically distributed. We study the case where the regression errors form a long memory moving average process. We establish a finite sample oracle inequality for the Lasso solution, and then show asymptotic sign consistency in this setup. These results are established in the high dimensional setup (p > n), where p can grow exponentially with n. Finally, we show the n^(1/2-d)-consistency of Lasso, along with the oracle property of adaptive Lasso, in the case where p is fixed. Here d is the memory parameter of the stationary error sequence. The performance of Lasso is also analysed in the present setup with a simulation study. The third chapter proposes and investigates the properties of a penalized quantile based estimator for measurement error models. Standard formulations of prediction problems in high dimension regression models assume the availability of fully observed covariates and sub-Gaussian and homogeneous model errors. This makes these methods inapplicable to measurement error models, where covariates are unobservable and observations are possibly non-sub-Gaussian and heterogeneous. We propose weighted penalized corrected quantile estimators for the regression parameter vector in linear regression models with additive measurement errors, where the unobservable covariates are nonrandom. The proposed estimators forgo the need for the above mentioned model assumptions. We study these estimators in both the fixed dimension and high dimensional sparse setups; in the latter setup, the dimensionality can grow exponentially with the sample size. In the fixed dimensional setting we provide the oracle properties associated with the proposed estimators. In the high dimensional setting, we provide bounds for the statistical error associated with the estimation that hold with asymptotic probability 1, thereby establishing the ℓ1-consistency of the proposed estimator. We also establish model selection consistency in terms of the correctly estimated zero components of the parameter vector. A simulation study that investigates the finite sample accuracy of the proposed estimator is also included in this chapter.
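A minimal simulation sketch of the second chapter's setting, assuming a truncated MA(∞) construction for the long-memory errors and an arbitrary penalty level; none of this is the dissertation's code.

```python
# Illustrative check (not the paper's proofs): Lasso sign recovery when the
# regression errors are a long-memory moving average. The MA coefficients
# c_j ~ (j+1)^(d-1) and all sizes are assumptions for the demo.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n, p, d = 500, 50, 0.3
beta = np.zeros(p); beta[:3] = [2.0, -1.5, 1.0]     # sparse truth

# Long-memory MA(infinity) errors, truncated: e_t = sum_j c_j z_{t-j}
J = 2000
c = np.arange(1, J + 1) ** (d - 1.0)
z = rng.standard_normal(n + J)
e = np.array([c @ z[t:t + J][::-1] for t in range(n)])
e *= 0.5 / e.std()

X = rng.standard_normal((n, p))
y = X @ beta + e

fit = Lasso(alpha=0.1).fit(X, y)
print(np.sign(fit.coef_[:5]), "nonzeros:", np.sum(fit.coef_ != 0))
```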
Probability machines: consistent probability estimation using nonparametric learning machines.
Malley, J D; Kruppa, J; Dasgupta, A; Malley, K G; Ziegler, A
2012-01-01
Most machine learning approaches only provide a classification for binary responses. However, probabilities are required for risk estimation using individual patient characteristics. It has been shown recently that every statistical learning machine known to be consistent for a nonparametric regression problem is a probability machine that is provably consistent for this estimation problem. The aim of this paper is to show how random forests and nearest neighbors can be used for consistent estimation of individual probabilities. Two random forest algorithms and two nearest neighbor algorithms are described in detail for estimation of individual probabilities. We discuss the consistency of random forests, nearest neighbors and other learning machines in detail. We conduct a simulation study to illustrate the validity of the methods. We exemplify the algorithms by analyzing two well-known data sets on the diagnosis of appendicitis and the diagnosis of diabetes in Pima Indians. Simulations demonstrate the validity of the method. With the real data application, we show the accuracy and practicality of this approach. We provide sample code from R packages in which the probability estimation is already available. This means that all calculations can be performed using existing software. Random forest algorithms as well as nearest neighbor approaches are valid machine learning methods for estimating individual probabilities for binary responses. Freely available implementations are available in R and may be used for applications.
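The paper supplies R sample code; the sketch below is a hedged Python analogue of the same idea: regress the 0/1 response with any consistent nonparametric learner and read the fitted values as probability estimates.

```python
# Python analogue of the probability-machine idea: treat the binary response
# as a 0/1 regression target, so a consistent nonparametric regression machine
# estimates P(Y=1 | x) directly.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestRegressor
from sklearn.neighbors import KNeighborsRegressor

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)
knn = KNeighborsRegressor(n_neighbors=25).fit(X, y)

p_rf, p_knn = rf.predict(X[:5]), knn.predict(X[:5])
print(np.round(p_rf, 2), np.round(p_knn, 2))  # individual probability estimates
```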
Posterior consistency in conditional distribution estimation
Pati, Debdeep; Dunson, David B.; Tokdar, Surya T.
2014-01-01
A wide variety of priors have been proposed for nonparametric Bayesian estimation of conditional distributions, and there is a clear need for theorems providing conditions on the prior for large support, as well as posterior consistency. Estimation of an uncountable collection of conditional distributions across different regions of the predictor space is a challenging problem, which differs in some important ways from density and mean regression estimation problems. Defining various topologies on the space of conditional distributions, we provide sufficient conditions for posterior consistency focusing on a broad class of priors formulated as predictor-dependent mixtures of Gaussian kernels. This theory is illustrated by showing that the conditions are satisfied for a class of generalized stick-breaking process mixtures in which the stick-breaking lengths are monotone, differentiable functions of a continuous stochastic process. We also provide a set of sufficient conditions for the case where stick-breaking lengths are predictor independent, such as those arising from a fixed Dirichlet process prior. PMID:25067858
Robust versus consistent variance estimators in marginal structural Cox models.
Enders, Dirk; Engel, Susanne; Linder, Roland; Pigeot, Iris
2018-06-11
In survival analyses, inverse-probability-of-treatment (IPT) and inverse-probability-of-censoring (IPC) weighted estimators of parameters in marginal structural Cox models are often used to estimate treatment effects in the presence of time-dependent confounding and censoring. In most applications, a robust variance estimator of the IPT and IPC weighted estimator is calculated leading to conservative confidence intervals. This estimator assumes that the weights are known rather than estimated from the data. Although a consistent estimator of the asymptotic variance of the IPT and IPC weighted estimator is generally available, applications and thus information on the performance of the consistent estimator are lacking. Reasons might be a cumbersome implementation in statistical software, which is further complicated by missing details on the variance formula. In this paper, we therefore provide a detailed derivation of the variance of the asymptotic distribution of the IPT and IPC weighted estimator and explicitly state the necessary terms to calculate a consistent estimator of this variance. We compare the performance of the robust and consistent variance estimators in an application based on routine health care data and in a simulation study. The simulation reveals no substantial differences between the 2 estimators in medium and large data sets with no unmeasured confounding, but the consistent variance estimator performs poorly in small samples or under unmeasured confounding, if the number of confounders is large. We thus conclude that the robust estimator is more appropriate for all practical purposes. Copyright © 2018 John Wiley & Sons, Ltd.
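For orientation, a hedged sketch of the common practice the paper examines: an IPT/IPC-weighted Cox fit reporting the robust ("weights known") sandwich variance. Column names and data are hypothetical, and lifelines does not implement the paper's consistent variance estimator.

```python
# Sketch of the usual approach: weighted Cox model with robust sandwich SEs,
# which treats the estimated weights as known (typically conservative).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(2)
n = 500
df = pd.DataFrame({
    "A": rng.integers(0, 2, n),      # treatment indicator
    "T": rng.exponential(5, n),      # follow-up time
    "E": rng.integers(0, 2, n),      # event indicator
    "w": rng.uniform(0.5, 2.0, n),   # estimated IPT*IPC weights (hypothetical)
})

cph = CoxPHFitter()
cph.fit(df, duration_col="T", event_col="E", weights_col="w", robust=True)
cph.print_summary()   # robust SEs -> typically conservative CIs
```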
Relative-Error-Covariance Algorithms
NASA Technical Reports Server (NTRS)
Bierman, Gerald J.; Wolff, Peter J.
1991-01-01
Two algorithms compute error covariance of difference between optimal estimates, based on data acquired during overlapping or disjoint intervals, of state of discrete linear system. Provides quantitative measure of mutual consistency or inconsistency of estimates of states. Relative-error-covariance concept applied to determine degree of correlation between trajectories calculated from two overlapping sets of measurements and to construct real-time test of consistency of state estimates based upon recently acquired data.
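In standard notation (assumed here, not copied from the report), the relative error covariance of two estimates of the same state is the covariance of their difference:

```latex
% For two estimates \hat{x}_1, \hat{x}_2 of the state x, with errors
% \tilde{x}_i = \hat{x}_i - x:
\[
P_{\mathrm{rel}}
  = \operatorname{E}\!\big[(\tilde{x}_1-\tilde{x}_2)(\tilde{x}_1-\tilde{x}_2)^{\top}\big]
  = P_1 + P_2 - P_{12} - P_{12}^{\top},
\qquad
P_{12} = \operatorname{E}\big[\tilde{x}_1 \tilde{x}_2^{\top}\big].
\]
```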
Practical Issues in Estimating Classification Accuracy and Consistency with R Package cacIRT
ERIC Educational Resources Information Center
Lathrop, Quinn N.
2015-01-01
There are two main lines of research in estimating classification accuracy (CA) and classification consistency (CC) under Item Response Theory (IRT). The R package cacIRT provides computer implementations of both approaches in an accessible and unified framework. Even with available implementations, there remain decisions a researcher faces when…
NASA Technical Reports Server (NTRS)
Kidd, Chris; Matsui, Toshi; Chern, Jiundar; Mohr, Karen; Kummerow, Christian; Randel, Dave
2015-01-01
The estimation of precipitation across the globe from satellite sensors provides a key resource in the observation and understanding of our climate system. Estimates from all pertinent satellite observations are critical in providing the necessary temporal sampling. However, consistency in these estimates from instruments with different frequencies and resolutions is critical. This paper details the physically based retrieval scheme to estimate precipitation from cross-track (XT) passive microwave (PM) sensors on board the constellation satellites of the Global Precipitation Measurement (GPM) mission. Here the Goddard profiling algorithm (GPROF), a physically based Bayesian scheme developed for conically scanning (CS) sensors, is adapted for use with XT PM sensors. The present XT GPROF scheme utilizes a model-generated database to overcome issues encountered with an observational database as used by the CS scheme. The model database ensures greater consistency across meteorological regimes and surface types by providing a more comprehensive set of precipitation profiles. The database is corrected for bias against the CS database to ensure consistency in the final product. Statistical comparisons over western Europe and the United States show that the XT GPROF estimates are comparable with those from the CS scheme. Indeed, the XT estimates have higher correlations against surface radar data, while maintaining similar root-mean-square errors. Latitudinal profiles of precipitation show the XT estimates are generally comparable with the CS estimates, although in the southern midlatitudes the peak precipitation is shifted equatorward while over the Arctic large differences are seen between the XT and the CS retrievals.
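A toy sketch in the spirit of a GPROF-style Bayesian database retrieval (not the operational algorithm): the retrieved rain rate is a posterior-weighted mean over database profiles, with weights set by the Mahalanobis distance between observed and simulated brightness temperatures; all sizes and distributions are assumptions.

```python
# Toy Bayesian database retrieval: posterior-mean rain rate as a
# Gaussian-weighted average of database entries.
import numpy as np

rng = np.random.default_rng(3)
m, k = 5000, 4                            # database size, number of channels
tb_db = rng.normal(250, 15, (m, k))       # simulated brightness temperatures
rain_db = rng.gamma(1.0, 2.0, m)          # associated precipitation rates
S = np.cov(tb_db, rowvar=False) * 0.05    # obs+model error covariance (assumed)
S_inv = np.linalg.inv(S)

def retrieve(tb_obs):
    d = tb_db - tb_obs
    w = np.exp(-0.5 * np.einsum("ij,jk,ik->i", d, S_inv, d))
    return (w @ rain_db) / w.sum()        # posterior-mean rain rate

print(retrieve(tb_db[0] + rng.normal(0, 1, k)))
```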
Strain Rate Tensor Estimation in Cine Cardiac MRI Based on Elastic Image Registration
NASA Astrophysics Data System (ADS)
Sánchez-Ferrero, Gonzalo Vegas; Vega, Antonio Tristán; Grande, Lucilio Cordero; de La Higuera, Pablo Casaseca; Fernández, Santiago Aja; Fernández, Marcos Martín; López, Carlos Alberola
In this work we propose an alternative method to estimate and visualize the Strain Rate Tensor (SRT) in Magnetic Resonance Images (MRI) when Phase Contrast MRI (PCMRI) and Tagged MRI (TMRI) are not available. This alternative is based on image processing techniques. Concretely, image registration algorithms are used to estimate the movement of the myocardium at each point. Additionally, a consistency checking method is presented to validate the accuracy of the estimates when no golden standard is available. Results prove that the consistency checking method provides an upper bound of the mean squared error of the estimate. Our experiments with real data show that the registration algorithm provides a useful deformation field to estimate the SRT fields. A classification between regional normal and dysfunctional contraction patterns, as compared with experts diagnosis, points out that the parameters extracted from the estimated SRT can represent these patterns. Additionally, a scheme for visualizing and analyzing the local behavior of the SRT field is presented.
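A minimal sketch of the tensor computation implied above, assuming the registration step has already produced a velocity field on a regular grid; the field and spacing are illustrative.

```python
# Given a velocity field (vx, vy) estimated by registration between frames,
# the strain rate tensor is the symmetric part of the velocity gradient.
import numpy as np

vy, vx = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))  # toy field

dvx_dy, dvx_dx = np.gradient(vx)   # axis 0 (rows/y) derivative first, then x
dvy_dy, dvy_dx = np.gradient(vy)

# SRT_ij = (dv_i/dx_j + dv_j/dx_i) / 2 at every pixel
srt_xx = dvx_dx
srt_yy = dvy_dy
srt_xy = 0.5 * (dvx_dy + dvy_dx)
print(srt_xx.mean(), srt_yy.mean(), srt_xy.mean())
```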
Reconciling medical expenditure estimates from the MEPS and NHEA, 2007.
Bernard, Didem; Cowan, Cathy; Selden, Thomas; Cai, Liming; Catlin, Aaron; Heffler, Stephen
2012-01-01
Provide a comparison of health care expenditure estimates for 2007 from the Medical Expenditure Panel Survey (MEPS) and the National Health Expenditure Accounts (NHEA). Reconciling these estimates serves two important purposes. First, it is an important quality assurance exercise for improving and ensuring the integrity of each source's estimates. Second, the reconciliation provides a consistent baseline of health expenditure data for policy simulations. Our results assist researchers to adjust MEPS to be consistent with the NHEA so that the projected costs as well as budgetary and tax implications of any policy change are consistent with national health spending estimates. The data sources are the Medical Expenditure Panel Survey, produced by the Agency for Healthcare Research and Quality and the National Center for Health Statistics, and the National Health Expenditure Accounts, produced by the Centers for Medicare & Medicaid Services' Office of the Actuary. In this study, we focus on the personal health care (PHC) sector, which includes the goods and services rendered to treat or prevent a specific disease or condition in an individual. The official 2007 NHEA estimate for PHC spending is $1,915 billion and the MEPS estimate is $1,126 billion. Adjusting the NHEA estimates for differences in underlying populations, covered services, and other measurement concepts reduces the NHEA estimate for 2007 to $1,366 billion. As a result, MEPS is $240 billion, or 17.6 percent, less than the adjusted NHEA total.
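The reconciliation arithmetic, made explicit:

```python
# Quick check of the figures reported above ($ billions, 2007).
nhea_phc, meps, nhea_adjusted = 1915, 1126, 1366
gap = nhea_adjusted - meps
print(gap, round(100 * gap / nhea_adjusted, 1))   # 240 and 17.6 percent
```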
2013-09-01
...model and the BRDF in the SRP model are not consistent with each other, then the resulting estimated albedo-areas and mass are inaccurate and biased... This work studies the use of physically consistent BRDF-SRP models for mass estimation. Simulation studies are used to provide an indication of the... benefits of using these new models. An unscented Kalman filter approach that includes BRDF and mass parameters in the state vector is used. The...
Krix, Alana C.; Sauerland, Melanie; Lorei, Clemens; Rispens, Imke
2015-01-01
In the legal system, inconsistencies in eyewitness accounts are often used to discredit witnesses’ credibility. This is at odds with research findings showing that witnesses frequently report reminiscent details (details previously unrecalled) at an accuracy rate that is nearly as high as for consistently recalled information. The present study sought to put the validity of beliefs about recall consistency to a test by directly comparing them with actual memory performance in two recall attempts. All participants watched a film of a staged theft. Subsequently, the memory group (N = 84) provided one statement immediately after the film (either with the Self-Administered Interview or free recall) and one after a one-week delay. The estimation group (N = 81) consisting of experienced police detectives estimated the recall performance of the memory group. The results showed that actual recall performance was consistently underestimated. Also, a sharp decline of memory performance between recall attempts was assumed by the estimation group whereas actual accuracy remained stable. While reminiscent details were almost as accurate as consistent details, they were estimated to be much less accurate than consistent information and as inaccurate as direct contradictions. The police detectives expressed a great concern that reminiscence was the result of suggestive external influences. In conclusion, it seems that experienced police detectives hold many implicit beliefs about recall consistency that do not correspond with actual recall performance. Recommendations for police trainings are provided. These aim at fostering a differentiated view on eyewitness performance and the inclusion of more comprehensive classes on human memory structure. PMID:25695428
Causal inference with measurement error in outcomes: Bias analysis and estimation methods.
Shu, Di; Yi, Grace Y
2017-01-01
Inverse probability weighting estimation has been popularly used to consistently estimate the average treatment effect. Its validity, however, is challenged by the presence of error-prone variables. In this paper, we explore inverse probability weighting estimation with mismeasured outcome variables. We study the impact of measurement error for both continuous and discrete outcome variables and reveal interesting consequences of the naive analysis which ignores measurement error. When a continuous outcome variable is mismeasured under an additive measurement error model, the naive analysis may still yield a consistent estimator; when the outcome is binary, we derive the asymptotic bias in closed form. Furthermore, we develop consistent estimation procedures for practical scenarios where either validation data or replicates are available. With validation data, we propose an efficient method for estimation of average treatment effect; the efficiency gain is substantial relative to usual methods of using validation data. To provide protection against model misspecification, we further propose a doubly robust estimator which is consistent even when either the treatment model or the outcome model is misspecified. Simulation studies are reported to assess the performance of the proposed methods. An application to a smoking cessation dataset is presented.
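A minimal sketch of the baseline IPW estimator the paper starts from (before any measurement-error correction), on simulated data:

```python
# IPW point estimate of the average treatment effect: fit a propensity model,
# then contrast the weighted outcome means.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 5000
X = rng.standard_normal((n, 3))
ps = 1 / (1 + np.exp(-X @ [0.5, -0.5, 0.25]))
A = rng.binomial(1, ps)
Y = 1.0 * A + X @ [1.0, 1.0, 0.0] + rng.standard_normal(n)  # true ATE = 1

e = LogisticRegression().fit(X, A).predict_proba(X)[:, 1]   # propensity score
ate_ipw = np.mean(A * Y / e) - np.mean((1 - A) * Y / (1 - e))
print(round(ate_ipw, 3))
```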
New estimates of the CMB angular power spectra from the WMAP 5 year low-resolution data
NASA Astrophysics Data System (ADS)
Gruppuso, A.; de Rosa, A.; Cabella, P.; Paci, F.; Finelli, F.; Natoli, P.; de Gasperis, G.; Mandolesi, N.
2009-11-01
A quadratic maximum likelihood (QML) estimator is applied to the Wilkinson Microwave Anisotropy Probe (WMAP) 5 year low-resolution maps to compute the cosmic microwave background angular power spectra (APS) at large scales for both temperature and polarization. Estimates and error bars for the six APS are provided up to l = 32 and compared, when possible, to those obtained by the WMAP team, without finding any inconsistency. The conditional likelihood slices are also computed for the C_l of all six power spectra from l = 2 to 10 through a pixel-based likelihood code. Both codes treat the covariance for (T, Q, U) in a single matrix without employing any approximation. The inputs of both codes (foreground-reduced maps, related covariances and masks) are provided by the WMAP team. The peaks of the likelihood slices are always consistent with the QML estimates within the error bars; however, an excellent agreement occurs when the QML estimates are used as a fiducial power spectrum instead of the best-fitting theoretical power spectrum. By the full computation of the conditional likelihood on the estimated spectra, the value of the temperature quadrupole C^TT_(l=2) is found to be less than 2σ away from the WMAP 5 year Λ cold dark matter best-fitting value. The BB spectrum is found to be well consistent with zero, and upper limits on the B modes are provided. The parity-odd signals TB and EB are found to be consistent with zero.
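For orientation, the standard Tegmark-style QML form is sketched below; the paper's implementation details may differ.

```latex
% With data vector x and covariance C(\{C_\ell\}), the QML estimate is
\[
\hat{C}_\ell = \sum_{\ell'} (F^{-1})_{\ell\ell'}
               \left[ x^{\top} E_{\ell'}\, x - b_{\ell'} \right],
\qquad
E_{\ell} = \tfrac{1}{2}\, C^{-1} \frac{\partial C}{\partial C_\ell}\, C^{-1},
\]
% where F is the Fisher matrix and b_\ell removes the noise bias.
```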
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-26
... provides a consistent time series according to which groundfish resources may be managed more efficiently...: Business or other for-profit organizations. Estimated Number of Respondents: 166. Estimated Time per...
Estimated timber harvest by U.S. region and ownership, 1950-2002.
Darius M. Adams; Richard W. Haynes; Adam J. Daigneault
2006-01-01
This publication provides estimates of total softwood and hardwood harvests by region and owner for the United States from 1950 to 2002. These data are generally not available in a consistent fashion and have to be estimated from state-level data, forest resource inventory statistics, and production of forest products. This publication describes the estimation process...
Nonparametric Estimation of Standard Errors in Covariance Analysis Using the Infinitesimal Jackknife
ERIC Educational Resources Information Center
Jennrich, Robert I.
2008-01-01
The infinitesimal jackknife provides a simple general method for estimating standard errors in covariance structure analysis. Beyond its simplicity and generality what makes the infinitesimal jackknife method attractive is that essentially no assumptions are required to produce consistent standard error estimates, not even the requirement that the…
A visual training tool for the Photoload sampling technique
Violet J. Holley; Robert E. Keane
2010-01-01
This visual training aid is designed to provide Photoload users a tool to increase the accuracy of fuel loading estimations when using the Photoload technique. The Photoload Sampling Technique (RMRS-GTR-190) provides fire managers a sampling method for obtaining consistent, accurate, inexpensive, and quick estimates of fuel loading. It is designed to require only one...
Process for estimating likelihood and confidence in post detonation nuclear forensics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Darby, John L.; Craft, Charles M.
2014-07-01
Technical nuclear forensics (TNF) must provide answers to questions of concern to the broader community, including an estimate of uncertainty. There is significant uncertainty associated with post-detonation TNF. The uncertainty consists of a great deal of epistemic (state of knowledge) as well as aleatory (random) uncertainty, and many of the variables of interest are linguistic (words) and not numeric. We provide a process by which TNF experts can structure their process for answering questions and provide an estimate of uncertainty. The process uses belief and plausibility, fuzzy sets, and approximate reasoning.
Statistical Properties of Maximum Likelihood Estimators of Power Law Spectra Information
NASA Technical Reports Server (NTRS)
Howell, L. W., Jr.
2003-01-01
A simple power law model consisting of a single spectral index, sigma(sub 1), is believed to be an adequate description of the galactic cosmic-ray (GCR) proton flux at energies below 10(exp 13) eV, with a transition at the knee energy, E(sub k), to a steeper spectral index sigma(sub 2) greater than sigma(sub 1) above E(sub k). The maximum likelihood (ML) procedure was developed for estimating the single parameter sigma(sub 1) of a simple power law energy spectrum and generalized to estimate the three spectral parameters of the broken power law energy spectrum from simulated detector responses and real cosmic-ray data. The statistical properties of the ML estimator were investigated and shown to have the three desirable properties: (P1) consistency (asymptotically unbiased), (P2) efficiency (asymptotically attains the Cramer-Rao minimum variance bound), and (P3) asymptotically normally distributed, under a wide range of potential detector response functions. Attainment of these properties necessarily implies that the ML estimation procedure provides the best unbiased estimator possible. While simulation studies can easily determine if a given estimation procedure provides an unbiased estimate of the spectra information, and whether or not the estimator is approximately normally distributed, attainment of the Cramer-Rao bound (CRB) can only be ascertained by calculating the CRB for an assumed energy spectrum-detector response function combination, which can be quite formidable in practice. However, the effort in calculating the CRB is very worthwhile because it provides the necessary means to compare the efficiency of competing estimation techniques and, furthermore, provides a stopping rule in the search for the best unbiased estimator. Consequently, the CRB for both the simple and broken power law energy spectra are derived herein and the conditions under which they are attained in practice are investigated.
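A worked single-index example, assuming a pure power law (Pareto) spectrum above a threshold: the ML estimate is closed-form, and its asymptotic variance attains the CRB of (alpha - 1)^2 / n. The values below are simulated, not from the report.

```python
# Closed-form MLE of a power-law index from events E_i >= E_min, with the CRB
# as the asymptotic standard error.
import numpy as np

rng = np.random.default_rng(5)
alpha_true, e_min, n = 2.7, 1.0, 100000
E = e_min * (1 - rng.random(n)) ** (-1 / (alpha_true - 1))  # Pareto draws

alpha_hat = 1 + n / np.log(E / e_min).sum()
crb_se = np.sqrt((alpha_hat - 1) ** 2 / n)
print(round(alpha_hat, 4), round(crb_se, 4))
```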
Consistent estimate of ocean warming, land ice melt and sea level rise from Observations
NASA Astrophysics Data System (ADS)
Blazquez, Alejandro; Meyssignac, Benoît; Lemoine, Jean Michel
2016-04-01
Based on the sea level budget closure approach, this study investigates the consistency of observed Global Mean Sea Level (GMSL) estimates from satellite altimetry, observed Ocean Thermal Expansion (OTE) estimates from in-situ hydrographic data (based on Argo for depths above 2000 m and oceanic cruises below) and GRACE observations of land water storage and land ice melt for the period January 2004 to December 2014. The consistency between these datasets is a key issue if we want to constrain missing contributions to sea level rise such as the deep ocean contribution. Numerous previous studies have addressed this question by summing up the different contributions to sea level rise and comparing it to satellite altimetry observations (see for example Llovel et al. 2015, Dieng et al. 2015). Here we propose a novel approach which consists in correcting GRACE solutions over the ocean (essentially corrections of stripes and leakage from ice caps) with mass observations deduced from the difference between satellite altimetry GMSL and in-situ hydrographic data OTE estimates. We check that the resulting GRACE corrected solutions are consistent with original GRACE estimates of the geoid spherical harmonic coefficients within error bars and we compare the resulting GRACE estimates of land water storage and land ice melt with independent results from the literature. This method provides a new mass redistribution from GRACE consistent with observations from altimetry and OTE. We test the sensitivity of this method to the deep ocean contribution and the GIA models and propose best estimates.
Use of Internal Consistency Coefficients for Estimating Reliability of Experimental Tasks Scores
Green, Samuel B.; Yang, Yanyun; Alt, Mary; Brinkley, Shara; Gray, Shelley; Hogan, Tiffany; Cowan, Nelson
2017-01-01
Reliabilities of scores for experimental tasks are likely to differ from one study to another to the extent that the task stimuli change, the number of trials varies, the type of individuals taking the task changes, the administration conditions are altered, or the focal task variable differs. Given that reliabilities vary as a function of the design of these tasks and the characteristics of the individuals taking them, making inferences about the reliability of scores in an ongoing study based on reliability estimates from prior studies is precarious. Thus, it would be advantageous to estimate reliability based on data from the ongoing study. We argue that internal consistency estimates of reliability are underutilized for experimental task data and in many applications could provide this information using a single administration of a task. We discuss different methods for computing internal consistency estimates with a generalized coefficient alpha and the conditions under which these estimates are accurate. We illustrate use of these coefficients using data for three different tasks. PMID:26546100
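A minimal sketch of a coefficient alpha computed from a single administration, with rows as participants and columns as trials; the data are simulated, and the paper's generalized coefficient covers more designs than this plain version.

```python
# Cronbach's alpha for an (n_subjects, k_items) score matrix.
import numpy as np

def coefficient_alpha(scores: np.ndarray) -> float:
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)     # variance of total scores
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(6)
true_score = rng.normal(0, 1, (200, 1))
items = true_score + rng.normal(0, 1, (200, 12))   # 12 noisy trials
print(round(coefficient_alpha(items), 3))
```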
FIESTA—An R estimation tool for FIA analysts
Tracey S. Frescino; Paul L. Patterson; Gretchen G. Moisen; Elizabeth A. Freeman
2015-01-01
FIESTA (Forest Inventory ESTimation for Analysis) is a user-friendly R package that was originally developed to support the production of estimates consistent with current tools available for the Forest Inventory and Analysis (FIA) National Program, such as FIDO (Forest Inventory Data Online) and EVALIDator. FIESTA provides an alternative data retrieval and reporting...
Comparison of techniques for estimating annual lake evaporation using climatological data
Andersen, M.E.; Jobson, H.E.
1982-01-01
Mean annual evaporation estimates were determined for 30 lakes by use of a numerical model (Morton, 1979) and by use of an evaporation map prepared by the U.S. Weather Service (Kohler et al., 1959). These estimates were compared to the reported value of evaporation determined from measurements on each lake. Various lengths of observation and methods of measurement were used among the 30 lakes. The evaporation map provides annual evaporation estimates which are more consistent with observations than those determined by use of the numerical model. The map cannot provide monthly estimates, however, and is only available for the contiguous United States. The numerical model can provide monthly estimates for shallow lakes and is based on monthly observations of temperature, humidity, and sunshine duration.
MUSIC algorithm DoA estimation for cooperative node location in mobile ad hoc networks
NASA Astrophysics Data System (ADS)
Warty, Chirag; Yu, Richard Wai; ElMahgoub, Khaled; Spinsante, Susanna
In recent years, technological development has encouraged several applications based on distributed communication networks without any fixed infrastructure. This paper addresses the problem of providing a collaborative early warning system for multiple mobile nodes against a fast moving object. The solution is provided subject to system level constraints: motion of nodes, antenna sensitivity and Doppler effect at 2.4 GHz and 5.8 GHz. This approach consists of three stages. The first stage consists of detecting the incoming object using a highly directive two-element antenna in the 5.0 GHz band. The second stage consists of broadcasting the warning message with a low-directivity broad antenna beam from a 2×2 antenna array, which in the third stage is detected by receiving nodes using direction of arrival (DOA) estimation techniques. The DOA estimation technique is used to estimate the range and bearing of the incoming nodes, and the position of the fast arriving object can be estimated using the MUSIC algorithm for warning beam DOA estimation. This paper is mainly intended to demonstrate the feasibility of early detection and warning using collaborative node-to-node communication links. A simulation is performed to show the behavior of the detecting and broadcasting antennas as well as the performance of the detection algorithm; the idea can be further expanded to implement a commercial-grade detection and warning system.
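A compact numpy sketch of MUSIC DoA estimation for a uniform linear array; the geometry, SNR, and source angles are illustrative assumptions, not the paper's simulation.

```python
# MUSIC pseudospectrum: project array steering vectors onto the noise
# subspace of the sample covariance and locate the peaks.
import numpy as np

rng = np.random.default_rng(7)
m, snapshots, d = 8, 400, 0.5           # sensors, samples, spacing (wavelengths)
angles_true = np.deg2rad([-20.0, 35.0])

def steering(theta):
    # m x len(theta) matrix of ULA steering vectors
    return np.exp(-2j * np.pi * d * np.arange(m)[:, None] * np.sin(theta))

A = steering(angles_true)
S = rng.standard_normal((2, snapshots)) + 1j * rng.standard_normal((2, snapshots))
N = 0.1 * (rng.standard_normal((m, snapshots))
           + 1j * rng.standard_normal((m, snapshots)))
X = A @ S + N                           # array snapshots

R = X @ X.conj().T / snapshots          # sample covariance
_, eigvecs = np.linalg.eigh(R)          # eigenvalues ascending
En = eigvecs[:, :-2]                    # noise subspace (2 sources assumed)

grid = np.deg2rad(np.linspace(-90, 90, 721))
p_music = 1.0 / np.linalg.norm(En.conj().T @ steering(grid), axis=0) ** 2

# pick the two largest local maxima of the pseudospectrum
pk = np.where((p_music[1:-1] > p_music[:-2]) & (p_music[1:-1] > p_music[2:]))[0] + 1
best = pk[np.argsort(p_music[pk])[-2:]]
print(np.sort(np.rad2deg(grid[best])))  # should be near [-20, 35]
```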
iGLASS: An Improvement to the GLASS Method for Estimating Species Trees from Gene Trees
Rosenberg, Noah A.
2012-01-01
Several methods have been designed to infer species trees from gene trees while taking into account gene tree/species tree discordance. Although some of these methods provide consistent species tree topology estimates under a standard model, most either do not estimate branch lengths or are computationally slow. An exception, the GLASS method of Mossel and Roch, is consistent for the species tree topology, estimates branch lengths, and is computationally fast. However, GLASS systematically overestimates divergence times, leading to biased estimates of species tree branch lengths. By assuming a multispecies coalescent model in which multiple lineages are sampled from each of two taxa at L independent loci, we derive the distribution of the waiting time until the first interspecific coalescence occurs between the two taxa, considering all loci and measuring from the divergence time. We then use the mean of this distribution to derive a correction to the GLASS estimator of pairwise divergence times. We show that our improved estimator, which we call iGLASS, consistently estimates the divergence time between a pair of taxa as the number of loci approaches infinity, and that it is an unbiased estimator of divergence times when one lineage is sampled per taxon. We also show that many commonly used clustering methods can be combined with the iGLASS estimator of pairwise divergence times to produce a consistent estimator of the species tree topology. Through simulations, we show that iGLASS can greatly reduce the bias and mean squared error in obtaining estimates of divergence times in a species tree. PMID:22216756
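A heavily simplified numerical illustration of the bias and its correction, assuming one lineage per taxon and coalescent time units; the iGLASS correction in the paper covers general sample sizes.

```python
# With one lineage per taxon, each locus's interspecific coalescence time is
# tau + Exp(1); GLASS takes the minimum over L loci, which overshoots tau by
# the mean of the minimum of L Exp(1) draws, i.e. 1/L. Subtracting that mean
# is the spirit of the iGLASS correction for this special case.
import numpy as np

rng = np.random.default_rng(8)
tau, L, reps = 1.0, 10, 20000
t_coal = tau + rng.exponential(1.0, (reps, L))   # per-locus coalescence times

glass = t_coal.min(axis=1)                # GLASS pairwise divergence estimate
iglass = glass - 1.0 / L                  # bias correction for this case
print(round(glass.mean(), 3), round(iglass.mean(), 3))  # ~1.1 vs ~1.0
```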
Li, Jiahui; Yu, Qiqing
2016-01-01
Dinse (Biometrics, 38:417-431, 1982) provides a special type of right-censored and masked competing risks data and proposes a non-parametric maximum likelihood estimator (NPMLE) and a pseudo MLE of the joint distribution function [Formula: see text] with such data. However, their asymptotic properties have not been studied so far. Under the extension of either the conditional masking probability (CMP) model or the random partition masking (RPM) model (Yu and Li, J Nonparametr Stat 24:753-764, 2012), we show that (1) Dinse's estimators are consistent if [Formula: see text] takes on finitely many values and each point in the support set of [Formula: see text] can be observed; (2) if the failure time is continuous, the NPMLE is not uniquely determined, and the standard approach (which puts weights only on one element in each observed set) leads to an inconsistent NPMLE; (3) in general, Dinse's estimators are not consistent even under the discrete assumption; (4) we construct a consistent NPMLE. The consistency is given under a new model called the dependent masking and right-censoring model. The CMP model and the RPM model are indeed special cases of the new model. We compare our estimator to Dinse's estimators through simulation and real data. The simulation study indicates that the consistent NPMLE is a good approximation to the underlying distribution for moderate sample sizes.
Comparing floral and isotopic paleoelevation estimates: Examples from the western United States
NASA Astrophysics Data System (ADS)
Hyland, E. G.; Huntington, K. W.; Sheldon, N. D.; Smith, S. Y.; Strömberg, C. A. E.
2016-12-01
Describing paleoelevations is crucial to understanding tectonic processes and deconvolving the effects of uplift and climate on environmental change in the past. Decades of work have gone into estimating past elevation from various proxy archives, particularly using modern relationships between elevation and temperature, floral assemblage compositions, or oxygen isotope values. While these methods have been used widely and refined through time, they are rarely applied in tandem; here we provide two examples from the western United States using new multiproxy methods: 1) combining clumped isotopes and macrofloral assemblages to estimate paleoelevations along the Colorado Plateau, and 2) combining oxygen isotopes and phytolith methods to estimate paleoelevations within the greater Yellowstone region. Clumped isotope measurements and refined floral coexistence methods from sites on the northern Colorado Plateau like Florissant and Creede (CO) consistently estimate low (< 2 km) elevations through the Eocene/Oligocene, suggesting slower uplift and a south-north propagation of the plateau. Oxygen isotope measurements and C4 phytolith estimates from sites surrounding the Yellowstone hotspot consistently estimate moderate uplift (0.2-0.7 km) propagating along the hotspot track, suggesting migrating dynamic topography associated with the region. These examples provide support for the emerging practice of using multiproxy methods to estimate paleoelevations for important time periods, and can help integrate environmental and tectonic records of the past.
Empirical evidence for site coefficients in building code provisions
Borcherdt, R.D.
2002-01-01
Site-response coefficients, Fa and Fv, used in U.S. building code provisions are based on empirical data for motions up to 0.1 g. For larger motions they are based on theoretical and laboratory results. The Northridge earthquake of 17 January 1994 provided a significant new set of empirical data up to 0.5 g. These data together with recent site characterizations based on shear-wave velocity measurements provide empirical estimates of the site coefficients at base accelerations up to 0.5 g for Site Classes C and D. These empirical estimates of Fa and Fv, as well as their decrease with increasing base acceleration level, are consistent at the 95 percent confidence level with those in present building code provisions, with the exception of estimates for Fa at levels of 0.1 and 0.2 g, which are less than the lower confidence bound by amounts up to 13 percent. The site-coefficient estimates are consistent at the 95 percent confidence level with those of several other investigators for base accelerations greater than 0.3 g. These consistencies and present code procedures indicate that changes in the site coefficients are not warranted. Empirical results for base accelerations greater than 0.2 g confirm the need for both a short- and a mid- or long-period site coefficient to characterize site response for purposes of estimating site-specific design spectra.
Assisted Perception, Planning and Control for Remote Mobility and Dexterous Manipulation
2017-04-01
...on unmanned aerial vehicles (UAVs). The underlying algorithm is based on an Extended Kalman Filter (EKF) that simultaneously estimates robot state... and sensor biases. The filter developed provided a probabilistic fusion of sensor data from many modalities to produce a single consistent position... estimation for a walking humanoid. Given a prior map, using a Gaussian particle filter, the LIDAR-based system is able to provide a drift-free...
EVALUATING SOIL EROSION PARAMETER ESTIMATES FROM DIFFERENT DATA SOURCES
Topographic factors and soil loss estimates that were derived from three data sources (STATSGO, 30-m DEM, and 3-arc-second DEM) were compared. Slope magnitudes derived from the three data sources were consistently different. Slopes from the DEMs tended to provide a flattened sur...
Oceanic Fluxes of Mass, Heat and Freshwater: A Global Estimate and Perspective
NASA Technical Reports Server (NTRS)
MacDonald, Alison Marguerite
1995-01-01
Data from fifteen globally distributed, modern, high resolution, hydrographic oceanic transects are combined in an inverse calculation using large scale box models. The models provide estimates of the global meridional heat and freshwater budgets and are used to examine the sensitivity of the global circulation, both inter- and intra-basin exchange rates, to a variety of external constraints provided by estimates of Ekman, boundary current and throughflow transports. A solution is found which is consistent with both the model physics and the global data set, despite a twenty-five-year time span and a lack of seasonal consistency among the data. The overall pattern of the global circulation suggested by the models is similar to that proposed in previously published local studies and regional reviews. However, significant qualitative and quantitative differences exist. These differences are due both to the model definition and to the global nature of the data set.
On a Formal Tool for Reasoning About Flight Software Cost Analysis
NASA Technical Reports Server (NTRS)
Spagnuolo, John N., Jr.; Stukes, Sherry A.
2013-01-01
A report focuses on the development of flight software (FSW) cost estimates for 16 Discovery-class missions at JPL. The techniques and procedures developed enabled streamlining of the FSW analysis process, and provided instantaneous confirmation that the data and processes used for these estimates were consistent across all missions. The research provides direction as to how to build a prototype rule-based system for FSW cost estimation that would provide (1) FSW cost estimates, (2) explanation of how the estimates were arrived at, (3) mapping of costs, (4) mathematical trend charts with explanations of why the trends are what they are, (5) tables with ancillary FSW data of interest to analysts, (6) a facility for expert modification/enhancement of the rules, and (7) a basis for conceptually convenient expansion into more complex, useful, and general rule-based systems.
REVIEW OF INDOOR EMISSION SOURCE MODELS: PART 2. PARAMETER ESTIMATION
This review consists of two parts. Part 1 provides an overview of 46 indoor emission source models. Part 2 (this paper) focuses on parameter estimation, a topic that is critical to modelers but has never been systematically discussed. A perfectly valid model may not be a usefu...
Samuel, Michael D.; Storm, Daniel J.; Rolley, Robert E.; Beissel, Thomas; Richards, Bryan J.; Van Deelen, Timothy R.
2014-01-01
The age structure of harvested animals provides the basis for many demographic analyses. Ages of harvested white-tailed deer (Odocoileus virginianus) and other ungulates often are estimated by evaluating replacement and wear patterns of teeth, which is subjective and error-prone. Few previous studies, however, examined age- and sex-specific error rates. Counting cementum annuli of incisors is an alternative, more accurate method of estimating age, but factors that influence consistency of cementum annuli counts are poorly known. We estimated age of 1,261 adult (≥1.5 yr old) white-tailed deer harvested in Wisconsin and Illinois (USA; 2005–2008) using both wear-and-replacement and cementum annuli. We compared cementum annuli with wear-and-replacement estimates to assess misclassification rates by sex and age. Wear-and-replacement for estimating ages of white-tailed deer resulted in substantial misclassification compared with cementum annuli. Age classes of females were consistently underestimated, while those of males were underestimated for younger age classes but overestimated for older age classes. Misclassification resulted in an impression of a younger age-structure than actually was the case. Additionally, we obtained paired age-estimates from cementum annuli for 295 deer. Consistency of paired cementum annuli age-estimates decreased with age, was lower in females than males, and decreased as age estimates became less certain. Our results indicated that errors in the wear-and-replacement techniques are substantial and could impact demographic analyses that use age-structure information.
Roux, C Z
2009-05-01
Short phylogenetic distances between taxa occur, for example, in studies on ribosomal RNA genes with slow substitution rates. For consistently short distances, it is proved that, in the completely singular limit of the covariance matrix, ordinary least squares (OLS) estimates are minimum variance or best linear unbiased (BLU) estimates of phylogenetic tree branch lengths. Although OLS estimates are in this situation equal to generalized least squares (GLS) estimates, the GLS chi-square likelihood ratio test will be inapplicable as it is associated with zero degrees of freedom. Consequently, an OLS normal distribution test or an analogous bootstrap approach will provide optimal branch length tests of significance for consistently short phylogenetic distances. As the asymptotic covariances between branch lengths will be equal to zero, it follows that the product rule can be used in tree evaluation to calculate an approximate simultaneous confidence probability that all interior branches are positive.
Larson, Bruce; Schnippel, Kathryn; Ndibongo, Buyiswa; Long, Lawrence; Fox, Matthew P; Rosen, Sydney
2012-01-01
Integrating POC CD4 testing technologies into HIV counseling and testing (HCT) programs may improve post-HIV testing linkage to care and treatment. As evaluations of these technologies in program settings continue, estimates of the costs of POC CD4 tests to the service provider will be needed and estimates have begun to be reported. Without a consistent and transparent methodology, estimates of the cost per CD4 test using POC technologies are likely to be difficult to compare and may lead to erroneous conclusions about costs and cost-effectiveness. This paper provides a step-by-step approach for estimating the cost per CD4 test from a provider's perspective. As an example, the approach is applied to one specific POC technology, the Pima Analyzer. The costing approach is illustrated with data from a mobile HCT program in Gauteng Province of South Africa. For this program, the cost per test in 2010 was estimated at $23.76 (material costs = $8.70; labor cost per test = $7.33; and equipment, insurance, and daily quality control = $7.72). Labor and equipment costs can vary widely depending on how the program operates and the number of CD4 tests completed over time. Additional costs not included in the above analysis, for on-going training, supervision, and quality control, are likely to increase further the cost per test. The main contribution of this paper is to outline a methodology for estimating the costs of incorporating POC CD4 testing technologies into an HCT program. The details of the program setting matter significantly for the cost estimate, so that such details should be clearly documented to improve the consistency, transparency, and comparability of cost estimates.
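The per-test arithmetic reported above, made explicit (2010 USD; the one-cent difference is rounding in the source):

```python
# Cost per test = materials + labor + (equipment, insurance, daily QC).
materials, labor, equipment_qc = 8.70, 7.33, 7.72
cost_per_test = materials + labor + equipment_qc
print(cost_per_test)   # 23.75, reported as $23.76 after rounding
```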
Statistical Properties of Maximum Likelihood Estimators of Power Law Spectra Information
NASA Technical Reports Server (NTRS)
Howell, L. W.
2002-01-01
A simple power law model consisting of a single spectral index, alpha(sub 1), is believed to be an adequate description of the galactic cosmic-ray (GCR) proton flux at energies below 10(exp 13) eV, with a transition at the knee energy, E(sub k), to a steeper spectral index alpha(sub 2) greater than alpha(sub 1) above E(sub k). The maximum likelihood (ML) procedure was developed for estimating the single parameter alpha(sub 1) of a simple power law energy spectrum and generalized to estimate the three spectral parameters of the broken power law energy spectrum from simulated detector responses and real cosmic-ray data. The statistical properties of the ML estimator were investigated and shown to have the three desirable properties: (P1) consistency (asymptotically unbiased), (P2) efficiency (asymptotically attains the Cramer-Rao minimum variance bound), and (P3) asymptotically normally distributed, under a wide range of potential detector response functions. Attainment of these properties necessarily implies that the ML estimation procedure provides the best unbiased estimator possible. While simulation studies can easily determine if a given estimation procedure provides an unbiased estimate of the spectra information, and whether or not the estimator is approximately normally distributed, attainment of the Cramer-Rao bound (CRB) can only be ascertained by calculating the CRB for an assumed energy spectrum-detector response function combination, which can be quite formidable in practice. However, the effort in calculating the CRB is very worthwhile because it provides the necessary means to compare the efficiency of competing estimation techniques and, furthermore, provides a stopping rule in the search for the best unbiased estimator. Consequently, the CRB for both the simple and broken power law energy spectra are derived herein and the conditions under which they are attained in practice are investigated. The ML technique is then extended to estimate spectra information from an arbitrary number of astrophysics data sets produced by vastly different science instruments. This theory and its successful implementation will facilitate the interpretation of spectral information from multiple astrophysics missions and thereby permit the derivation of superior spectral parameter estimates based on the combination of data sets.
An Unscented Kalman-Particle Hybrid Filter for Space Object Tracking
NASA Astrophysics Data System (ADS)
Raihan A. V, Dilshad; Chakravorty, Suman
2018-03-01
Optimal and consistent estimation of the state of space objects is pivotal to surveillance and tracking applications. However, probabilistic estimation of space objects is made difficult by the non-Gaussianity and nonlinearity associated with orbital mechanics. In this paper, we present an unscented Kalman-particle hybrid filtering framework for recursive Bayesian estimation of space objects. The hybrid filtering scheme is designed to provide accurate and consistent estimates when measurements are sparse without incurring a large computational cost. It employs an unscented Kalman filter (UKF) for estimation when measurements are available. When the target is outside the field of view (FOV) of the sensor, it updates the state probability density function (PDF) via a sequential Monte Carlo method. The hybrid filter addresses the problem of particle depletion through a suitably designed filter transition scheme. To assess the performance of the hybrid filtering approach, we consider two test cases of space objects that are assumed to undergo full three dimensional orbital motion under the effects of J2 and atmospheric drag perturbations. It is demonstrated that the hybrid filters can furnish fast, accurate and consistent estimates outperforming standard UKF and particle filter (PF) implementations.
NASA Technical Reports Server (NTRS)
1979-01-01
The computer program DEKFIS (discrete extended Kalman filter/smoother), formulated for aircraft and helicopter state estimation and data consistency, is described. DEKFIS is set up to pre-process raw test data by removing biases, correcting scale factor errors and providing consistency with the aircraft inertial kinematic equations. The program implements an extended Kalman filter/smoother using the Friedland-Duffy formulation.
Consistent latent position estimation and vertex classification for random dot product graphs.
Sussman, Daniel L; Tang, Minh; Priebe, Carey E
2014-01-01
In this work, we show that using the eigen-decomposition of the adjacency matrix, we can consistently estimate latent positions for random dot product graphs provided the latent positions are i.i.d. from some distribution. If class labels are observed for a number of vertices tending to infinity, then we show that the remaining vertices can be classified with error converging to Bayes optimal using the k-nearest-neighbors classification rule. We evaluate the proposed methods on simulated data and a graph derived from Wikipedia.
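A hedged end-to-end sketch of the pipeline: adjacency spectral embedding, then k-nearest-neighbor classification of the unlabeled vertices. The two-block stochastic block model used to simulate a graph is an assumption for the demo.

```python
# Adjacency spectral embedding: top-d scaled eigenvectors of A estimate the
# latent positions; kNN in the embedded space classifies unlabeled vertices.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(9)
n, dim = 600, 2
labels = rng.integers(0, 2, n)
B = np.array([[0.20, 0.05], [0.05, 0.15]])        # block connection probs
P = B[labels][:, labels]
A = (rng.random((n, n)) < P).astype(float)
A = np.triu(A, 1); A = A + A.T                    # simple undirected graph

vals, vecs = np.linalg.eigh(A)
top = np.argsort(np.abs(vals))[-dim:]             # top-d eigenpairs by |eigenvalue|
Xhat = vecs[:, top] * np.sqrt(np.abs(vals[top]))  # estimated latent positions

train = rng.random(n) < 0.3                       # vertices with observed labels
knn = KNeighborsClassifier(n_neighbors=15).fit(Xhat[train], labels[train])
print((knn.predict(Xhat[~train]) == labels[~train]).mean())   # holdout accuracy
```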
Statistical, economic and other tools for assessing natural aggregate
Bliss, J.D.; Moyle, P.R.; Bolm, K.S.
2003-01-01
Quantitative aggregate resource assessment provides resource estimates useful for explorationists, land managers and those who make decisions about land allocation, which may have long-term implications concerning cost and the availability of aggregate resources. Aggregate assessment needs to be systematic and consistent, yet flexible enough to allow updating without invalidating other parts of the assessment. Evaluators need to use standard or consistent aggregate classifications and statistical distributions or, in other words, models with geological, geotechnical and economic variables or interrelationships between these variables. These models can be used with subjective estimates, if needed, to estimate how much aggregate may be present in a region or country using distributions generated by Monte Carlo computer simulations.
Doubly robust nonparametric inference on the average treatment effect.
Benkeser, D; Carone, M; Laan, M J Van Der; Gilbert, P B
2017-12-01
Doubly robust estimators are widely used to draw inference about the average effect of a treatment. Such estimators are consistent for the effect of interest if either one of two nuisance parameters is consistently estimated. However, if flexible, data-adaptive estimators of these nuisance parameters are used, double robustness does not readily extend to inference. We present a general theoretical study of the behaviour of doubly robust estimators of an average treatment effect when one of the nuisance parameters is inconsistently estimated. We contrast different methods for constructing such estimators and investigate the extent to which they may be modified to also allow doubly robust inference. We find that while targeted minimum loss-based estimation can be used to solve this problem very naturally, common alternative frameworks appear to be inappropriate for this purpose. We provide a theoretical study and a numerical evaluation of the alternatives considered. Our simulations highlight the need for and usefulness of these approaches in practice, while our theoretical developments have broad implications for the construction of estimators that permit doubly robust inference in other problems.
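For intuition, the snippet below implements the classic augmented IPW (AIPW) form of a doubly robust average-treatment-effect estimator with simple parametric nuisance models. It is a sketch of the general idea only, not the targeted minimum loss-based procedure the authors study, and the data and model choices are synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

def aipw_ate(Y, A, X):
    """Augmented IPW estimate of the average treatment effect.
    Consistent if either the outcome model or the propensity model is right."""
    ps = LogisticRegression(max_iter=1000).fit(X, A).predict_proba(X)[:, 1]
    ps = np.clip(ps, 0.01, 0.99)                  # guard against extreme weights
    m1 = LinearRegression().fit(X[A == 1], Y[A == 1]).predict(X)
    m0 = LinearRegression().fit(X[A == 0], Y[A == 0]).predict(X)
    return np.mean(m1 - m0
                   + A * (Y - m1) / ps
                   - (1 - A) * (Y - m0) / (1 - ps))

rng = np.random.default_rng(1)
n = 5000
X = rng.normal(size=(n, 3))
A = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))   # confounded treatment
Y = 2.0 * A + X @ np.array([1.0, 0.5, -0.5]) + rng.normal(size=n)
print("AIPW ATE estimate:", aipw_ate(Y, A, X))    # true effect is 2.0
```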
The Riso-Hudson Enneagram Type Indicator: Estimates of Reliability and Validity
ERIC Educational Resources Information Center
Newgent, Rebecca A.; Parr, Patricia E.; Newman, Isadore; Higgins, Kristin K.
2004-01-01
This investigation was conducted to estimate the reliability and validity of scores on the Riso-Hudson Enneagram Type Indicator (D. R. Riso & R. Hudson, 1999a). Results of 287 participants were analyzed. Alpha suggests an adequate degree of internal consistency. Evidence provides mixed support for construct validity using correlational and…
Procedure for estimating orbital debris risks
NASA Technical Reports Server (NTRS)
Crafts, J. L.; Lindberg, J. P.
1985-01-01
A procedure for estimating the potential orbital debris risk to the world's populace from payloads or spent stages left in orbit on future missions is presented. This approach provides a consistent, but simple, procedure to assess the risk due to random reentry with an adequate accuracy level for making programmatic decisions on planned low Earth orbit missions.
Dai, James Y.; Chan, Kwun Chuen Gary; Hsu, Li
2014-01-01
Instrumental variable regression is one way to overcome unmeasured confounding and estimate causal effects in observational studies. Built on structural mean models, there has been considerable work recently developed for consistent estimation of causal relative risk and causal odds ratio. Such models can sometimes suffer from identification issues for weak instruments. This has hampered the applicability of Mendelian randomization analysis in genetic epidemiology. When there are multiple genetic variants available as instrumental variables, and causal effect is defined in a generalized linear model in the presence of unmeasured confounders, we propose to test concordance between instrumental variable effects on the intermediate exposure and instrumental variable effects on the disease outcome, as a means to test the causal effect. We show that a class of generalized least squares estimators provide valid and consistent tests of causality. For the causal effect of a continuous exposure on a dichotomous outcome in logistic models, the proposed estimators are shown to be asymptotically conservative. When the disease outcome is rare, such estimators are consistent due to the log-linear approximation of the logistic function. Optimality of such estimators relative to the well-known two-stage least squares estimator and the double-logistic structural mean model is further discussed. PMID:24863158
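The well-known two-stage least squares estimator mentioned at the end, the familiar baseline the proposed estimators are compared against, is easy to illustrate. A hedged sketch with synthetic genetic-instrument data (all coefficients invented):

```python
import numpy as np

def two_stage_least_squares(y, x, Z):
    """2SLS estimate of the causal effect of scalar exposure x on outcome y,
    using instrument matrix Z (columns = genetic variants)."""
    Z1 = np.column_stack([np.ones(len(y)), Z])
    xhat = Z1 @ np.linalg.lstsq(Z1, x, rcond=None)[0]   # first stage
    X1 = np.column_stack([np.ones(len(y)), xhat])
    beta = np.linalg.lstsq(X1, y, rcond=None)[0]        # second stage
    return beta[1]

rng = np.random.default_rng(2)
n, k = 10_000, 5
Z = rng.binomial(2, 0.3, size=(n, k)).astype(float)     # variant dosages
U = rng.normal(size=n)                                  # unmeasured confounder
x = Z @ rng.uniform(0.2, 0.5, k) + U + rng.normal(size=n)
y = 0.7 * x + 2.0 * U + rng.normal(size=n)              # true causal effect 0.7
print("2SLS estimate:", two_stage_least_squares(y, x, Z))
```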
Extended Kalman Doppler tracking and model determination for multi-sensor short-range radar
NASA Astrophysics Data System (ADS)
Mittermaier, Thomas J.; Siart, Uwe; Eibert, Thomas F.; Bonerz, Stefan
2016-09-01
A tracking solution for collision avoidance in industrial machine tools based on short-range millimeter-wave radar Doppler observations is presented. At the core of the tracking algorithm there is an Extended Kalman Filter (EKF) that provides dynamic estimation and localization in real-time. The underlying sensor platform consists of several homodyne continuous wave (CW) radar modules. Based on In-phase-Quadrature (IQ) processing and down-conversion, they provide only Doppler shift information about the observed target. Localization with Doppler shift estimates is a nonlinear problem that needs to be linearized before the linear KF can be applied. The accuracy of state estimation depends highly on the introduced linearization errors, the initialization, and the models that represent the true physics as well as the stochastic properties. The important issue of filter consistency is addressed and an initialization procedure based on data fitting and maximum likelihood estimation is suggested. Models for both measurement and process noise are developed. Tracking results from typical three-dimensional courses of movement at short distances in front of a multi-sensor radar platform are presented.
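The linearization step at the heart of such an EKF is the Jacobian of the Doppler (radial velocity) measurement with respect to the state. A minimal sketch of one measurement update, assuming for illustration a planar constant-velocity state rather than the authors' full model:

```python
import numpy as np

def doppler_ekf_update(x, P, z, R, sensor_pos):
    """One EKF measurement update for a Doppler-only (radial velocity) sensor.
    State x = [px, py, vx, vy]; z is the measured radial velocity (m/s)."""
    p = x[:2] - sensor_pos
    v = x[2:]
    r = np.linalg.norm(p)
    h = p @ v / r                                  # predicted radial velocity
    # Analytic Jacobian of h with respect to [px, py, vx, vy]
    H = np.hstack([v / r - (p @ v) * p / r**3, p / r])[None, :]
    S = H @ P @ H.T + R                            # innovation covariance (1x1)
    K = P @ H.T / S                                # Kalman gain (4x1)
    x_new = x + (K * (z - h)).ravel()
    P_new = (np.eye(4) - K @ H) @ P
    return x_new, P_new

x = np.array([10.0, 5.0, -1.0, 0.5])               # illustrative prior mean
P = np.diag([4.0, 4.0, 1.0, 1.0])
x, P = doppler_ekf_update(x, P, z=-0.8, R=np.array([[0.05]]), sensor_pos=np.zeros(2))
print(x)
```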
NETL CO2 Storage prospeCtive Resource Estimation Excel aNalysis (CO2-SCREEN) User's Manual
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sanguinito, Sean M.; Goodman, Angela; Levine, Jonathan
This user's manual guides the use of the National Energy Technology Laboratory's (NETL) CO2 Storage prospeCtive Resource Estimation Excel aNalysis (CO2-SCREEN) tool, which was developed to aid users screening saline formations for prospective CO2 storage resources. CO2-SCREEN applies U.S. Department of Energy (DOE) methods and equations for estimating prospective CO2 storage resources for saline formations. CO2-SCREEN was developed to be substantive and user-friendly. It also provides a consistent method for calculating prospective CO2 storage resources that allows for consistent comparison of results between different research efforts, such as the Regional Carbon Sequestration Partnerships (RCSP). CO2-SCREEN consists of an Excel spreadsheet containing geologic inputs and outputs, linked to a GoldSim Player model that calculates prospective CO2 storage resources via Monte Carlo simulation.
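The underlying DOE volumetric method is essentially a product of formation geometry, porosity, CO2 density, and a storage efficiency factor, sampled by Monte Carlo. A sketch of that style of calculation; all parameter values and distributions below are hypothetical, not taken from CO2-SCREEN.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 50_000

# DOE-style volumetric estimate G = A * h * phi * rho * E, with uncertain
# porosity and storage-efficiency factor; parameter values are illustrative.
A_m2 = 1.5e9                                        # formation area (m^2)
h_m  = 60.0                                         # net thickness (m)
rho  = 700.0                                        # CO2 density at depth (kg/m^3)
phi  = rng.triangular(0.08, 0.12, 0.18, size=n)     # porosity (fraction)
E    = rng.triangular(0.01, 0.02, 0.06, size=n)     # efficiency factor

G_Mt = A_m2 * h_m * phi * rho * E / 1e9             # prospective storage (Mt CO2)
print({f"P{q}": round(float(np.percentile(G_Mt, q)), 1) for q in (10, 50, 90)})
```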
Estimating Effects with Rare Outcomes and High Dimensional Covariates: Knowledge is Power
Ahern, Jennifer; Galea, Sandro; van der Laan, Mark
2016-01-01
Many of the secondary outcomes in observational studies and randomized trials are rare. Methods for estimating causal effects and associations with rare outcomes, however, are limited, and this represents a missed opportunity for investigation. In this article, we construct a new targeted minimum loss-based estimator (TMLE) for the effect or association of an exposure on a rare outcome. We focus on the causal risk difference and statistical models incorporating bounds on the conditional mean of the outcome, given the exposure and measured confounders. By construction, the proposed estimator constrains the predicted outcomes to respect this model knowledge. Theoretically, this bounding provides stability and power to estimate the exposure effect. In finite sample simulations, the proposed estimator performed as well as, if not better than, alternative estimators, including a propensity score matching estimator, inverse probability of treatment weighted (IPTW) estimator, augmented-IPTW and the standard TMLE algorithm. The new estimator yielded consistent estimates if either the conditional mean outcome or the propensity score was consistently estimated. As a substitution estimator, TMLE guaranteed the point estimates were within the parameter range. We applied the estimator to investigate the association between permissive neighborhood drunkenness norms and alcohol use disorder. Our results highlight the potential for double robust, semiparametric efficient estimation with rare events and high dimensional covariates. PMID:28529839
Temporal validation of a Landsat-based volume estimation model
Renaldo J. Arroyo; Emily B. Schultz; Thomas G. Matney; David L. Evans; Zhaofei Fan
2015-01-01
Satellite imagery can potentially reduce the costs and time associated with ground-based forest inventories; however, for satellite imagery to provide reliable forest inventory data, it must produce consistent results from one time period to the next. The objective of this study was to temporally validate a Landsat-based volume estimation model in a four county study...
NASA Technical Reports Server (NTRS)
Tomberlin, T. J.
1985-01-01
Research studies of residents' responses to noise consist of interviews with samples of individuals who are drawn from a number of different compact study areas. The statistical techniques developed provide a basis for those sample design decisions. These techniques are suitable for a wide range of sample survey applications. A sample may consist of a random sample of residents selected from a sample of compact study areas, or in a more complex design, of a sample of residents selected from a sample of larger areas (e.g., cities). The techniques may be applied to estimates of the effects on annoyance of noise level, numbers of noise events, the time-of-day of the events, ambient noise levels, or other factors. Methods are provided for determining, in advance, how accurately these effects can be estimated for different sample sizes and study designs. Using a simple cost function, they also provide for optimum allocation of the sample across the stages of the design for estimating these effects. These techniques are developed via a regression model in which the regression coefficients are assumed to be random, with components of variance associated with the various stages of a multi-stage sample design.
Leach, A W; Mumford, J D
2008-01-01
The Pesticide Environmental Accounting (PEA) tool provides a monetary estimate of environmental and health impacts per hectare-application for any pesticide. The model combines the Environmental Impact Quotient method and a methodology for absolute estimates of external pesticide costs in the UK, USA and Germany. For many countries, resources are not available for intensive assessments of external pesticide costs. The model therefore converts external costs of a pesticide in the UK, USA and Germany to estimates for Mediterranean countries. Economic and policy applications include estimating impacts of pesticide reduction policies or benefits from technologies replacing pesticides, such as the sterile insect technique. The system integrates disparate data and approaches into a single logical method. The assumptions in the system provide transparency and consistency, but at the cost of some specificity and precision, a reasonable trade-off for a method that provides both comparative estimates of pesticide impacts and area-based assessments of absolute impacts.
Too much ado about instrumental variable approach: is the cure worse than the disease?
Baser, Onur
2009-01-01
To review the efficacy of instrumental variable (IV) models in addressing a variety of assumption violations to ensure standard ordinary least squares (OLS) estimates are consistent. IV models gained popularity in outcomes research because of their ability to consistently estimate the average causal effects even in the presence of unmeasured confounding. However, in order for this consistent estimation to be achieved, several conditions must hold. In this article, we provide an overview of the IV approach, examine possible tests to check the prerequisite conditions, and illustrate how weak instruments may produce inconsistent and inefficient results. We use two IVs and apply Shea's partial R-square method, the Anderson canonical correlation, and Cragg-Donald tests to check for weak instruments. Hall-Peixe tests are applied to see if any of these instruments are redundant in the analysis. A total of 14,952 asthma patients from the MarketScan Commercial Claims and Encounters Database were examined in this study. Patient health care was provided under a variety of fee-for-service, fully capitated, and partially capitated health plans, including preferred provider organizations, point of service plans, indemnity plans, and health maintenance organizations. We used the controller-reliever copay ratio and physician practice/prescribing patterns as instruments. We demonstrated that the former was a weak and redundant instrument, producing inconsistent and inefficient estimates of the effect of treatment. The results were worse than the results from standard regression analysis. Despite the obvious benefit of IV models, the method should not be used blindly. Several strong conditions are required for these models to work, and each of them should be tested. Otherwise, bias and precision of the results will be statistically worse than the results achieved by simply using standard OLS.
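The weak-instrument concern can be screened with a first-stage F statistic (the common rule of thumb flags F below about 10). A sketch of that diagnostic on synthetic data; the cutoff and numbers are illustrative, and this is not a substitute for the Shea, Anderson canonical correlation, or Cragg-Donald tests used in the article.

```python
import numpy as np

def first_stage_F(x, Z):
    """First-stage F statistic for instrument strength: regress the exposure
    on the instruments and test their joint significance."""
    n, k = Z.shape
    Z1 = np.column_stack([np.ones(n), Z])
    beta = np.linalg.lstsq(Z1, x, rcond=None)[0]
    rss1 = np.sum((x - Z1 @ beta) ** 2)           # with instruments
    rss0 = np.sum((x - x.mean()) ** 2)            # intercept only
    return ((rss0 - rss1) / k) / (rss1 / (n - k - 1))

rng = np.random.default_rng(3)
n = 5000
z_weak = rng.normal(size=(n, 1))
x = 0.02 * z_weak[:, 0] + rng.normal(size=n)      # deliberately weak instrument
print("first-stage F:", round(first_stage_F(x, z_weak), 2))  # expect F << 10
```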
NASA Astrophysics Data System (ADS)
Oakley, David O. S.; Fisher, Donald M.; Gardner, Thomas W.; Stewart, Mary Kate
2018-01-01
Marine terraces on growing fault-propagation folds provide valuable insight into the relationship between fold kinematics and uplift rates, providing a means to distinguish among otherwise non-unique kinematic model solutions. Here, we investigate this relationship at two locations in North Canterbury, New Zealand: the Kate anticline and Haumuri Bluff, at the northern end of the Hawkswood anticline. At both locations, we calculate uplift rates of previously dated marine terraces, using DGPS surveys to estimate terrace inner edge elevations. We then use Markov chain Monte Carlo methods to fit fault-propagation fold kinematic models to structural geologic data, and we incorporate marine terrace uplift into the models as an additional constraint. At Haumuri Bluff, we find that marine terraces, when restored to originally horizontal surfaces, can help to eliminate certain trishear models that would fit the geologic data alone. At Kate anticline, we compare uplift rates at different structural positions and find that the spatial pattern of uplift rates is more consistent with trishear than with a parallel-fault propagation fold kink-band model. Finally, we use our model results to compute new estimates for fault slip rates (~1-2 m/ka at Kate anticline and ~1-4 m/ka at Haumuri Bluff) and ages of the folds (~1 Ma), which are consistent with previous estimates for the onset of folding in this region. These results provide revised estimates of fault slip rates necessary to understand the seismic hazard posed by these faults, and demonstrate the value of incorporating marine terraces in inverse fold kinematic models as a means to distinguish among non-unique solutions.
P-8A Poseidon Multi Mission Maritime Aircraft (P-8A)
2015-12-01
focus also includes procurement of depot and intermediate level maintenance capabilities, full-scale fatigue testing, and continued integration and... Confidence Level of cost estimate for current APB: 50%. The current APB cost estimate provided sufficient resources to execute the program under normal conditions, encountering average levels of technical, schedule, and programmatic risk and external interference. It was consistent with
Users guide for noble fir bough cruiser.
Roger D. Fight; Keith A. Blatner; Roger C. Chapman; William E. Schlosser
2005-01-01
The bough cruiser spreadsheet was developed to provide a method for cruising noble fir (Abies procera Rehd.) stands to estimate the weight of boughs that might be harvested. No boughs are cut as part of the cruise process. The approach is based on a two-stage sample. The first stage consists of fixed-radius plots that are used to estimate the...
Complex compatible taper and volume estimation systems for red and loblolly pine
John C. Byrne; David D. Reed
1986-01-01
Five equation systems are described which can be used to estimate upper stem diameter, total individual tree cubic-foot volume, and merchantable cubic-foot volumes to any merchantability limit (expressed in terms of diameter or height), both inside and outside bark. The equations provide consistent results since they are mathematically related and are fit using stem...
Cataloging the 1811-1812 New Madrid, central U.S., earthquake sequence
Hough, S.E.
2009-01-01
The three principal New Madrid, central U.S., mainshocks of 1811-1812 were followed by extensive aftershock sequences that included numerous felt events. Although no instrumental data are available for the sequence, historical accounts provide information that can be used to estimate magnitudes and locations for the large aftershocks as well as the mainshocks. Several detailed eyewitness accounts of the sequence provide sufficient information to identify times and rough magnitude estimates for a number of aftershocks that have not been analyzed previously. I also use three extended compilations of felt events to explore the overall sequence productivity. Although one generally cannot estimate magnitudes or locations for individual events, the intensity distributions of recent, instrumentally recorded earthquakes in the region provide a basis for estimation of the magnitude distribution of 1811-1812 aftershocks. The distribution is consistent with a b-value distribution. I estimate Mw 6-6.3 for the three largest identifiable aftershocks, apart from the so-called dawn aftershock on 16 December 1811.
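Magnitude distributions consistent with a b-value (Gutenberg-Richter) law are commonly fit with the Aki-Utsu maximum-likelihood estimator. A minimal sketch, with a synthetic catalog standing in for the historical intensity-derived magnitudes:

```python
import numpy as np

def aki_utsu_b(mags, m_c):
    """Maximum-likelihood b-value (Aki 1965) for magnitudes at or above
    the completeness threshold m_c."""
    m = np.asarray(mags)
    m = m[m >= m_c]
    return np.log10(np.e) / (m.mean() - m_c)

# Illustrative catalog drawn from a Gutenberg-Richter law with b = 1
rng = np.random.default_rng(4)
mags = 4.0 + rng.exponential(scale=np.log10(np.e) / 1.0, size=2000)
print("b-value estimate:", round(aki_utsu_b(mags, m_c=4.0), 2))
```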
Atmospheric Turbulence Estimates from a Pulsed Lidar
NASA Technical Reports Server (NTRS)
Pruis, Matthew J.; Delisi, Donald P.; Ahmad, Nash'at N.; Proctor, Fred H.
2013-01-01
Estimates of the eddy dissipation rate (EDR) were obtained from measurements made by a coherent pulsed lidar and compared with estimates from mesoscale model simulations and measurements from an in situ sonic anemometer at the Denver International Airport and with EDR estimates from the last observation time of the trailing vortex pair. The estimates of EDR from the lidar were obtained using two different methodologies. The two methodologies show consistent estimates of the vertical profiles. Comparison of EDR derived from the Weather Research and Forecast (WRF) mesoscale model with the in situ lidar estimates show good agreement during the daytime convective boundary layer, but the WRF simulations tend to overestimate EDR during the nighttime. The EDR estimates from a sonic anemometer located at 7.3 meters above ground level are approximately one order of magnitude greater than both the WRF and lidar estimates - which are from greater heights - during the daytime convective boundary layer and substantially greater during the nighttime stable boundary layer. The consistency of the EDR estimates from different methods suggests a reasonable ability to predict the temporal evolution of a spatially averaged vertical profile of EDR in an airport terminal area using a mesoscale model during the daytime convective boundary layer. In the stable nighttime boundary layer, there may be added value to EDR estimates provided by in situ lidar measurements.
Mourning dove population trend estimates from Call-Count and North American Breeding Bird Surveys
Sauer, J.R.; Dolton, D.D.; Droege, S.
1994-01-01
The mourning dove (Zenaida macroura) Call-Count Survey and the North American Breeding Bird Survey provide information on population trends of mourning doves throughout the continental United States. Because surveys are an integral part of the development of hunting regulations, a need exists to determine which survey provides precise information. We estimated population trends from 1966 to 1988 by state and dove management unit, and assessed the relative efficiency of each survey. Estimates of population trend differ (P < 0.05) between surveys in 11 of 48 states; 9 of 11 states with divergent results occur in the Eastern Management Unit. Differences were probably a consequence of smaller sample sizes in the Call-Count Survey. The Breeding Bird Survey generally provided trend estimates with smaller variances than did the Call-Count Survey. Although the Call-Count Survey probably provides more within-route accuracy because of survey methods and timing, the Breeding Bird Survey has a larger sample size of survey routes and greater consistency of coverage in the Eastern Unit.
Estimating means and variances: The comparative efficiency of composite and grab samples.
Brumelle, S; Nemetz, P; Casey, D
1984-03-01
This paper compares the efficiencies of two sampling techniques for estimating a population mean and variance. One procedure, called grab sampling, consists of collecting and analyzing one sample per period. The second procedure, called composite sampling, collects n samples per period, which are then pooled and analyzed as a single sample. We review the well known fact that composite sampling provides a superior estimate of the mean. However, it is somewhat surprising that composite sampling does not always generate a more efficient estimate of the variance. For populations with platykurtic distributions, grab sampling gives a more efficient estimate of the variance, whereas composite sampling is better for leptokurtic distributions. These conditions on kurtosis can be related to peakedness and skewness. For example, a necessary condition for composite sampling to provide a more efficient estimate of the variance is that the population density function evaluated at the mean (i.e., f(μ)) be greater than [Formula: see text]. If [Formula: see text], then a grab sample is more efficient. In spite of this result, however, composite sampling does provide a smaller estimate of standard error than does grab sampling in the context of estimating population means.
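A quick simulation illustrates the paper's point that neither scheme dominates for variance estimation: under a platykurtic (uniform) population the grab-sample variance estimate tends to have smaller mean squared error, while under a leptokurtic (Laplace) population compositing tends to win. The distributions and sample sizes below are illustrative choices, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
reps, periods, n = 2000, 50, 4

def var_estimates(draws):
    """Variance estimates from grab vs. composite sampling."""
    grab = draws[:, :, 0]                  # one analyzed sample per period
    comp = draws.mean(axis=2)              # n pooled samples -> one analysis
    # Var(composite) = sigma^2 / n, so multiply by n to estimate sigma^2
    return grab.var(axis=1, ddof=1), n * comp.var(axis=1, ddof=1)

for name, sampler in [("platykurtic (uniform)", lambda s: rng.uniform(-1, 1, s)),
                      ("leptokurtic (Laplace)", lambda s: rng.laplace(0, 1, s))]:
    draws = sampler((reps, periods, n))
    v_grab, v_comp = var_estimates(draws)
    true_var = draws.var()
    print(name,
          " grab MSE:", round(np.mean((v_grab - true_var) ** 2), 4),
          " composite MSE:", round(np.mean((v_comp - true_var) ** 2), 4))
```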
Plate motions and deformations from geologic and geodetic data
NASA Technical Reports Server (NTRS)
Jordan, T. H.
1986-01-01
Research effort on behalf of the Crustal Dynamics Project focused on the development of methodologies suitable for the analysis of space-geodetic data sets for the estimation of crustal motions, in conjunction with results derived from land-based geodetic data, neo-tectonic studies, and other geophysical data. These methodologies were used to provide estimates of both global plate motions and intraplate deformation in the western U.S. Results from the satellite ranging experiment for the rate of change of the baseline length between San Diego and Quincy, California indicated that relative motion between the North American and Pacific plates over the course of the observing period during 1972 to 1982 was consistent with estimates calculated from geologic data averaged over the past few million years. This result, when combined with other kinematic constraints on western U.S. deformation derived from land-based geodesy, neo-tectonic studies, and other geophysical data, places limits on the possible extension of the Basin and Range province, and implies significant deformation is occurring west of the San Andreas fault. A new methodology was developed to analyze vector-position space-geodetic data to provide estimates of relative vector motions of the observing sites. The algorithm is suitable for the reduction of large, inhomogeneous data sets, and takes into account the full position covariances and errors due to poorly resolved Earth orientation parameters and vertical positions, and reduces biases due to inhomogeneous sampling of the data. This methodology was applied to the problem of estimating the rate-scaling parameter of a global plate tectonic model using satellite laser ranging observations over a five-year interval. The results indicate that the mean rate of global plate motions for that interval is consistent with those averaged over several million years, and is not consistent with quiescent or greatly accelerated plate motions. This methodology was also used to provide constraints on deformation in the western U.S. using very long baseline interferometry observations over a two-year period.
Sinner, Jim; Ellis, Joanne; Kandlikar, Milind; Halpern, Benjamin S.; Satterfield, Terre; Chan, Kai
2017-01-01
The elicitation of expert judgment is an important tool for assessment of risks and impacts in environmental management contexts, and especially important as decision-makers face novel challenges where prior empirical research is lacking or insufficient. Evidence-driven elicitation approaches typically involve techniques to derive more accurate probability distributions under fairly specific contexts. Experts are, however, prone to overconfidence in their judgments. Group elicitations with diverse experts can reduce expert overconfidence by allowing cross-examination and reassessment of prior judgments, but groups are also prone to uncritical “groupthink” errors. When the problem context is underspecified, the probability that experts commit groupthink errors may increase. This study addresses how structured workshops affect variability among and certainty within expert responses in a New Zealand case study. We find that experts’ risk estimates before and after a workshop differ, and that group elicitations provided greater consistency of estimates, yet also greater uncertainty among experts, when addressing prominent impacts to four different ecosystem services in coastal New Zealand. After group workshops, experts provided more consistent ranking of risks and more consistent best estimates of impact through increased clarity in terminology and dampening of extreme positions, yet probability distributions for impacts widened. The results from this case study suggest that group elicitations have favorable consequences for the quality and uncertainty of risk judgments within and across experts, making group elicitation techniques invaluable tools in contexts of limited data. PMID:28767694
Spacecraft mass estimation, relationships and engine data: Task 1.1 of the lunar base systems study
NASA Technical Reports Server (NTRS)
1988-01-01
A collection of scaling equations, weight statements, scaling factors, etc., useful for doing conceptual designs of spacecraft are given. Rules of thumb and methods of calculating quantities of interest are provided. Basic relationships for conventional, and several non-conventional, propulsion systems (nuclear, solar electric and solar thermal) are included. The equations and other data were taken from a number of sources and are not at all consistent with each other in level of detail or method, but provide useful references for early estimation purposes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1978-05-01
The Transient Reactor Analysis Code (TRAC) is being developed at the Los Alamos Scientific Laboratory (LASL) to provide an advanced "best estimate" predictive capability for the analysis of postulated accidents in light water reactors (LWRs). TRAC-P1 provides this analysis capability for pressurized water reactors (PWRs) and for a wide variety of thermal-hydraulic experimental facilities. It features a three-dimensional treatment of the pressure vessel and associated internals; two-phase nonequilibrium hydrodynamics models; flow-regime-dependent constitutive equation treatment; reflood tracking capability for both bottom flood and falling film quench fronts; and consistent treatment of entire accident sequences including the generation of consistent initial conditions. The TRAC-P1 User's Manual is composed of two separate volumes. Volume I gives a description of the thermal-hydraulic models and numerical solution methods used in the code. Detailed programming and user information is also provided. Volume II presents the results of the developmental verification calculations.
A 3D simulation look-up library for real-time airborne gamma-ray spectroscopy
NASA Astrophysics Data System (ADS)
Kulisek, Jonathan A.; Wittman, Richard S.; Miller, Erin A.; Kernan, Warnick J.; McCall, Jonathon D.; McConn, Ron J.; Schweppe, John E.; Seifert, Carolyn E.; Stave, Sean C.; Stewart, Trevor N.
2018-01-01
A three-dimensional look-up library consisting of simulated gamma-ray spectra was developed to leverage, in real-time, the abundance of data provided by a helicopter-mounted gamma-ray detection system consisting of 92 CsI-based radiation sensors and exhibiting a highly angular-dependent response. We have demonstrated how this library can be used to help effectively estimate the terrestrial gamma-ray background, develop simulated flight scenarios, and to localize radiological sources. Source localization accuracy was significantly improved, particularly for weak sources, by estimating the entire gamma-ray spectra while accounting for scattering in the air, and especially off the ground.
The forest inventory and analysis database description and users manual version 1.0
Patrick D. Miles; Gary J. Brand; Carol L. Alerich; Larry F. Bednar; Sharon W. Woudenberg; Joseph F. Glover; Edward N. Ezell
2001-01-01
Describes the structure of the Forest Inventory and Analysis Database (FIADB) and provides information on generating estimates of forest statistics from these data. The FIADB structure provides a consistent framework for storing forest inventory data across all ownerships across the entire United States. These data are available to the public.
Doubova, Svetlana V; Ramírez-Sánchez, Claudine; Figueroa-Lara, Alejandro; Pérez-Cuevas, Ricardo
2013-12-01
To estimate the human resources (HR) requirements of two models of care for diabetes patients: conventional and specific, also called DiabetIMSS, which are provided in primary care clinics of the Mexican Institute of Social Security (IMSS). An evaluative study was conducted. An expert group identified the HR activities and time required to provide healthcare consistent with the best clinical practices for diabetic patients. HR requirements were estimated using the evidence-based adjusted service target approach for health workforce planning; then, comparisons between existing and estimated HR were made. To provide healthcare in accordance with the patients' metabolic control, the conventional model required increasing the number of family doctors (1.2 times), nutritionists (4.2 times) and social workers (4.1 times). The DiabetIMSS model requires a greater increase than the conventional model. Increasing HR is required to provide evidence-based healthcare to diabetes patients.
Ponterotto, Joseph G; Ruckdeschel, Daniel E
2007-12-01
The present article addresses issues in reliability assessment that are often neglected in psychological research such as acceptable levels of internal consistency for research purposes, factors affecting the magnitude of coefficient alpha (alpha), and considerations for interpreting alpha within the research context. A new reliability matrix anchored in classical test theory is introduced to help researchers judge adequacy of internal consistency coefficients with research measures. Guidelines and cautions in applying the matrix are provided.
Kruppa, Jochen; Liu, Yufeng; Biau, Gérard; Kohler, Michael; König, Inke R; Malley, James D; Ziegler, Andreas
2014-07-01
Probability estimation for binary and multicategory outcome using logistic and multinomial logistic regression has a long-standing tradition in biostatistics. However, biases may occur if the model is misspecified. In contrast, outcome probabilities for individuals can be estimated consistently with machine learning approaches, including k-nearest neighbors (k-NN), bagged nearest neighbors (b-NN), random forests (RF), and support vector machines (SVM). Because machine learning methods are rarely used by applied biostatisticians, the primary goal of this paper is to explain the concept of probability estimation with these methods and to summarize recent theoretical findings. Probability estimation in k-NN, b-NN, and RF can be embedded into the class of nonparametric regression learning machines; therefore, we start with the construction of nonparametric regression estimates and review results on consistency and rates of convergence. In SVMs, outcome probabilities for individuals are estimated consistently by repeatedly solving classification problems. For SVMs we review the classification problem and then dichotomous probability estimation. Next we extend the algorithms for estimating probabilities using k-NN, b-NN, and RF to multicategory outcomes and discuss approaches for the multicategory probability estimation problem using SVM. In simulation studies for dichotomous and multicategory dependent variables we demonstrate the general validity of the machine learning methods and compare them with logistic regression. However, each method fails in at least one simulation scenario. We conclude with a discussion of the failures and give recommendations for selecting and tuning the methods. Applications to real data and example code are provided in a companion article (doi:10.1002/bimj.201300077).
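The comparison the authors describe is easy to reproduce in outline with scikit-learn, where each classifier's predict_proba plays the role of the probability estimator and the Brier score measures probabilistic accuracy. The dataset and tuning choices below are illustrative, not those of the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import brier_score_loss

X, y = make_classification(n_samples=4000, n_features=10, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.5, random_state=0)

models = {
    "logistic": LogisticRegression(max_iter=1000),
    "k-NN (k=25)": KNeighborsClassifier(n_neighbors=25),
    "random forest": RandomForestClassifier(n_estimators=300, random_state=0),
}
for name, model in models.items():
    p = model.fit(Xtr, ytr).predict_proba(Xte)[:, 1]   # estimated P(Y=1|X)
    print(f"{name:14s} Brier score: {brier_score_loss(yte, p):.4f}")
```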
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berryman, J. G.
While the well-known Voigt and Reuss (VR) bounds, and the Voigt-Reuss-Hill (VRH) elastic constant estimators for random polycrystals are all straightforwardly calculated once the elastic constants of anisotropic crystals are known, the Hashin-Shtrikman (HS) bounds and related self-consistent (SC) estimators for the same constants are, by comparison, more difficult to compute. Recent work has shown how to simplify (to some extent) these harder to compute HS bounds and SC estimators. An overview and analysis of a subsampling of these results is presented here with the main point being to show whether or not this extra work (i.e., in calculating both the HS bounds and the SC estimates) does provide added value since, in particular, the VRH estimators often do not fall within the HS bounds, while the SC estimators (for good reasons) have always been found to do so. The quantitative differences between the SC and the VRH estimators in the eight cases considered are often quite small however, being on the order of ±1%. These quantitative results hold true even though these polycrystal Voigt-Reuss-Hill estimators more typically (but not always) fall outside the Hashin-Shtrikman bounds, while the self-consistent estimators always fall inside (or on the boundaries of) these same bounds.
Aeroservoelastic Uncertainty Model Identification from Flight Data
NASA Technical Reports Server (NTRS)
Brenner, Martin J.
2001-01-01
Uncertainty modeling is a critical element in the estimation of robust stability margins for stability boundary prediction and robust flight control system development. There has been a serious deficiency to date in aeroservoelastic data analysis with attention to uncertainty modeling. Uncertainty can be estimated from flight data using both parametric and nonparametric identification techniques. The model validation problem addressed in this paper is to identify aeroservoelastic models with associated uncertainty structures from a limited amount of controlled excitation inputs over an extensive flight envelope. The challenge to this problem is to update analytical models from flight data estimates while also deriving non-conservative uncertainty descriptions consistent with the flight data. Multisine control surface command inputs and control system feedbacks are used as signals in a wavelet-based modal parameter estimation procedure for model updates. Transfer function estimates are incorporated in a robust minimax estimation scheme to get input-output parameters and error bounds consistent with the data and model structure. Uncertainty estimates derived from the data in this manner provide an appropriate and relevant representation for model development and robust stability analysis. This model-plus-uncertainty identification procedure is applied to aeroservoelastic flight data from the NASA Dryden Flight Research Center F-18 Systems Research Aircraft.
Finite mixture model: A maximum likelihood estimation approach on time series data
NASA Astrophysics Data System (ADS)
Yen, Phoong Seuk; Ismail, Mohd Tahir; Hamzah, Firdaus Mohamad
2014-09-01
Recently, statisticians have emphasized the fitting of finite mixture models by maximum likelihood estimation, as it provides asymptotic properties. In addition, it shows consistency as the sample size increases to infinity, illustrating that maximum likelihood estimation is an asymptotically unbiased estimator. Moreover, the parameter estimates obtained from maximum likelihood estimation have the smallest variance compared with other statistical methods as the sample size increases. Thus, maximum likelihood estimation is adopted in this paper to fit a two-component mixture model in order to explore the relationship between rubber price and exchange rate for Malaysia, Thailand, Philippines and Indonesia. Results show that there is a negative relationship between rubber price and exchange rate for all selected countries.
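For a concrete flavor of maximum likelihood fitting of a finite mixture, here is a minimal EM implementation for a two-component Gaussian mixture on synthetic data; the paper's application uses rubber price and exchange rate series, not this toy, and the initialization heuristic is an assumption of the sketch.

```python
import numpy as np

def em_two_component(x, iters=200):
    """EM maximum-likelihood fit of a two-component Gaussian mixture."""
    w, mu, sd = 0.5, np.array([x.min(), x.max()]), np.array([x.std(), x.std()])
    for _ in range(iters):
        # E-step: posterior probability of component 1 for each observation
        # (normalizing constants cancel in the ratio)
        d0 = np.exp(-0.5 * ((x - mu[0]) / sd[0]) ** 2) / sd[0] * (1 - w)
        d1 = np.exp(-0.5 * ((x - mu[1]) / sd[1]) ** 2) / sd[1] * w
        r = d1 / (d0 + d1)
        # M-step: update weight, means, and standard deviations
        w = r.mean()
        mu = np.array([np.average(x, weights=1 - r), np.average(x, weights=r)])
        sd = np.array([np.sqrt(np.average((x - mu[0]) ** 2, weights=1 - r)),
                       np.sqrt(np.average((x - mu[1]) ** 2, weights=r))])
    return w, mu, sd

rng = np.random.default_rng(6)
x = np.concatenate([rng.normal(0, 1, 700), rng.normal(4, 0.5, 300)])
w, mu, sd = em_two_component(x)
print("weight of component 1:", round(w, 3), " means:", mu.round(2))
```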
Magmatic oxygen fugacity estimated using zircon-melt partitioning of cerium
NASA Astrophysics Data System (ADS)
Smythe, Duane J.; Brenan, James M.
2016-11-01
Using a newly-calibrated relation for cerium redox equilibria in silicate melts (Smythe and Brenan, 2015), and an internally-consistent model for zircon-melt partitioning of Ce, we provide a method to estimate the prevailing redox conditions during crystallization of zircon-saturated magmas. With this approach, oxygen fugacities were calculated for samples from the Bishop tuff (USA), Toba tuff (Indonesia) and the Nain plutonic suite (Canada), which typically agree with independent estimates within one log unit or better. With the success of reproducing the fO2 of well-constrained igneous systems, we have applied our Ce-in-zircon oxygen barometer to estimating the redox state of Earth's earliest magmas. Using the composition of the Jack Hills Hadean zircons, combined with estimates of their parental magma composition, we determined the fO2 during zircon crystallization to be between FMQ -1.0 to +2.5 (where FMQ is the fayalite-magnetite-quartz buffer). Of the parental magmas considered, Archean tonalite-trondhjemite-granodiorite (TTG) compositions yield zircon-melt partitioning most similar to well-constrained modern suites (e.g., Sano et al., 2002). Although broadly consistent with previous redox estimates from the Jack Hills zircons, our results provide a more precise determination of fO2, narrowing the range for Hadean parental magmas by more than 8 orders of magnitude. Results suggest that relatively oxidized magmatic source regions, similar in oxidation state to that of 3.5 Ga komatiite suites, existed by ∼4.4 Ga.
Disease Heritability Inferred from Familial Relationships Reported in Medical Records.
Polubriaginof, Fernanda C G; Vanguri, Rami; Quinnies, Kayla; Belbin, Gillian M; Yahi, Alexandre; Salmasian, Hojjat; Lorberbaum, Tal; Nwankwo, Victor; Li, Li; Shervey, Mark M; Glowe, Patricia; Ionita-Laza, Iuliana; Simmerling, Mary; Hripcsak, George; Bakken, Suzanne; Goldstein, David; Kiryluk, Krzysztof; Kenny, Eimear E; Dudley, Joel; Vawdrey, David K; Tatonetti, Nicholas P
2018-05-15
Heritability is essential for understanding the biological causes of disease but requires laborious patient recruitment and phenotype ascertainment. Electronic health records (EHRs) passively capture a wide range of clinically relevant data and provide a resource for studying the heritability of traits that are not typically accessible. EHRs contain next-of-kin information collected via patient emergency contact forms, but until now, these data have gone unused in research. We mined emergency contact data at three academic medical centers and identified 7.4 million familial relationships while maintaining patient privacy. Identified relationships were consistent with genetically derived relatedness. We used EHR data to compute heritability estimates for 500 disease phenotypes. Overall, estimates were consistent with the literature and between sites. Inconsistencies were indicative of limitations and opportunities unique to EHR research. These analyses provide a validation of the use of EHRs for genetics and disease research.
NASA Technical Reports Server (NTRS)
French, V. (Principal Investigator)
1982-01-01
An evaluation was made of Thompson-type models which use trend terms (as a surrogate for technology), meteorological variables based on monthly average temperature, and total precipitation to forecast and estimate corn yields in Iowa, Illinois, and Indiana. Pooled and unpooled Thompson-type models were compared. Neither was found to be consistently superior to the other. Yield reliability indicators show that the models are of limited use for large area yield estimation. The models are objective and consistent with scientific knowledge. Timely yield forecasts and estimates can be made during the growing season by using normals or long range weather forecasts. The models are not costly to operate and are easy to use and understand. The model standard errors of prediction do not provide a useful current measure of modeled yield reliability.
Multispectral scanner system parameter study and analysis software system description, volume 2
NASA Technical Reports Server (NTRS)
Landgrebe, D. A. (Principal Investigator); Mobasseri, B. G.; Wiersma, D. J.; Wiswell, E. R.; Mcgillem, C. D.; Anuta, P. E.
1978-01-01
The author has identified the following significant results. The integration of the available methods provided the analyst with the unified scanner analysis package (USAP), the flexibility and versatility of which was superior to many previous integrated techniques. The USAP consisted of three main subsystems; (1) a spatial path, (2) a spectral path, and (3) a set of analytic classification accuracy estimators which evaluated the system performance. The spatial path consisted of satellite and/or aircraft data, data correlation analyzer, scanner IFOV, and random noise model. The output of the spatial path was fed into the analytic classification and accuracy predictor. The spectral path consisted of laboratory and/or field spectral data, EXOSYS data retrieval, optimum spectral function calculation, data transformation, and statistics calculation. The output of the spectral path was fed into the stratified posterior performance estimator.
Development of advanced techniques for rotorcraft state estimation and parameter identification
NASA Technical Reports Server (NTRS)
Hall, W. E., Jr.; Bohn, J. G.; Vincent, J. H.
1980-01-01
An integrated methodology for rotorcraft system identification consists of rotorcraft mathematical modeling, three distinct data processing steps, and a technique for designing inputs to improve the identifiability of the data. These elements are as follows: (1) a Kalman filter smoother algorithm which estimates states and sensor errors from error corrupted data. Gust time histories and statistics may also be estimated; (2) a model structure estimation algorithm for isolating a model which adequately explains the data; (3) a maximum likelihood algorithm for estimating the parameters and the variances of these estimates; and (4) an input design algorithm, based on a maximum likelihood approach, which provides inputs to improve the accuracy of parameter estimates. Each step is discussed with examples for both flight and simulated data cases.
Distributed weighted least-squares estimation with fast convergence for large-scale systems.
Marelli, Damián Edgardo; Fu, Minyue
2015-01-01
In this paper we study a distributed weighted least-squares estimation problem for a large-scale system consisting of a network of interconnected sub-systems. Each sub-system is concerned with a subset of the unknown parameters and has a measurement linear in the unknown parameters with additive noise. The distributed estimation task is for each sub-system to compute the globally optimal estimate of its own parameters using its own measurement and information shared with the network through neighborhood communication. We first provide a fully distributed iterative algorithm to asymptotically compute the global optimal estimate. The convergence rate of the algorithm will be maximized using a scaling parameter and a preconditioning method. This algorithm works for a general network. For a network without loops, we also provide a different iterative algorithm to compute the global optimal estimate which converges in a finite number of steps. We include numerical experiments to illustrate the performances of the proposed methods.
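One way to realize such a scheme is a preconditioned iteration on the weighted least-squares normal equations in which each sub-system inverts only its own block of the information matrix. The sketch below is a centralized simulation of that idea, with the scaling (step-size) parameter chosen from the spectrum so the iteration provably converges; it is illustrative of the block-Jacobi flavor only, not the authors' algorithm, and the problem dimensions are invented.

```python
import numpy as np

rng = np.random.default_rng(8)
p_blocks = [2, 2, 2]                       # parameter sub-vectors per sub-system
p = sum(p_blocks)
A = rng.normal(size=(12, p))               # stacked local measurement matrices
W = np.diag(rng.uniform(0.5, 2.0, 12))     # inverse noise variances
x_true = rng.normal(size=p)
y = A @ x_true + rng.normal(size=12) * 0.1

# Block-Jacobi preconditioner: each sub-system inverts only its own block
# of the information matrix A^T W A.
M = A.T @ W @ A
D = np.zeros_like(M)
s = 0
for b in p_blocks:
    D[s:s+b, s:s+b] = M[s:s+b, s:s+b]
    s += b
Dinv = np.linalg.inv(D)

# Scaling parameter: 1 / lambda_max(D^-1 M) guarantees convergence here;
# in a truly distributed setting this would be fixed offline.
alpha = 1.0 / np.max(np.linalg.eigvals(Dinv @ M).real)

x = np.zeros(p)
for _ in range(2000):
    x = x + alpha * Dinv @ (A.T @ W @ (y - A @ x))

x_central = np.linalg.solve(M, A.T @ W @ y)
print("max deviation from centralized WLS:", np.abs(x - x_central).max())
```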
The global prevalence of common mental disorders: a systematic review and meta-analysis 1980–2013
Steel, Zachary; Marnane, Claire; Iranpour, Changiz; Chey, Tien; Jackson, John W; Patel, Vikram; Silove, Derrick
2014-01-01
Background: Since the introduction of specified diagnostic criteria for mental disorders in the 1970s, there has been a rapid expansion in the number of large-scale mental health surveys providing population estimates of the combined prevalence of common mental disorders (most commonly involving mood, anxiety and substance use disorders). In this study we undertake a systematic review and meta-analysis of this literature. Methods: We applied an optimized search strategy across the Medline, PsycINFO, EMBASE and PubMed databases, supplemented by hand searching to identify relevant surveys. We identified 174 surveys across 63 countries providing period prevalence estimates (155 surveys) and lifetime prevalence estimates (85 surveys). Random effects meta-analysis was undertaken on logit-transformed prevalence rates to calculate pooled prevalence estimates, stratified according to methodological and substantive groupings. Results: Pooling across all studies, approximately 1 in 5 respondents (17.6%, 95% confidence interval: 16.3–18.9%) were identified as meeting criteria for a common mental disorder during the 12 months preceding assessment; 29.2% (25.9–32.6%) of respondents were identified as having experienced a common mental disorder at some time during their lifetimes. A consistent gender effect in the prevalence of common mental disorder was evident: women had higher rates of mood (7.3%:4.0%) and anxiety (8.7%:4.3%) disorders during the previous 12 months and men had higher rates of substance use disorders (2.0%:7.5%), with a similar pattern for lifetime prevalence. There was also evidence of consistent regional variation in the prevalence of common mental disorder. Countries within North and South East Asia in particular displayed consistently lower one-year and lifetime prevalence estimates than other regions. One-year prevalence rates were also low in Sub-Saharan Africa, whereas English-speaking countries returned the highest lifetime prevalence estimates. Conclusions: Despite a substantial degree of inter-survey heterogeneity in the meta-analysis, the findings confirm that common mental disorders are highly prevalent globally, affecting people across all regions of the world. This research provides an important resource for modelling population needs based on global regional estimates of mental disorder. The reasons for regional variation in mental disorder require further investigation. PMID:24648481
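The pooling step described (random-effects meta-analysis of logit-transformed prevalence rates) can be sketched with the DerSimonian-Laird estimator. The survey counts below are invented for illustration, not drawn from the 174 surveys.

```python
import numpy as np

def pooled_prevalence(cases, n):
    """DerSimonian-Laird random-effects pooling of logit-transformed prevalences."""
    p = cases / n
    theta = np.log(p / (1 - p))                    # logit prevalence per survey
    v = 1 / cases + 1 / (n - cases)                # approximate logit variance
    w = 1 / v
    theta_fixed = np.sum(w * theta) / np.sum(w)
    Q = np.sum(w * (theta - theta_fixed) ** 2)     # heterogeneity statistic
    k = len(p)
    tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1 / (v + tau2)                          # random-effects weights
    t = np.sum(w_re * theta) / np.sum(w_re)
    return 1 / (1 + np.exp(-t))                    # back-transform to prevalence

# Illustrative survey data (cases of common mental disorder, sample sizes)
cases = np.array([180, 310, 95, 260, 150])
n = np.array([1000, 2100, 480, 1500, 900])
print("pooled 12-month prevalence:", round(pooled_prevalence(cases, n), 3))
```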
Assessment of the Accountability of Night Vision Devices Provided to the Security Forces of Iraq
2009-03-17
The qualitative data consisted of individual interviews, direct observation, and written documents. Quantitative data
NASA Astrophysics Data System (ADS)
Hirai, Kenta; Mita, Akira
2016-04-01
Against a social background of repeated large earthquakes and of cheating in design and construction, structural health monitoring (SHM) systems are attracting strong attention. SHM systems are now in a practical phase: a system consisting of a small number of sensors has been introduced to 6 tall buildings in the Shinjuku area. There are 2 major issues in SHM systems consisting of a small number of sensors, including these. First, the optimal number of sensors and their locations are not well-defined; in practice, sensor placement is determined based on rough prediction and experience. Second, there are uncertainties in the estimation results produced by the SHM systems. Thus, the purpose of this research is to provide useful information for increasing the reliability of SHM systems and to improve estimation results based on uncertainty analysis of the SHM systems. The important damage index used here is the inter-story drift angle. The uncertainties considered here are the number of sensors, earthquake motion characteristics, noise in the data, error between the numerical model and the real building, and nonlinearity of parameters. The influence of each factor on estimation accuracy was then analyzed. The analysis conducted here will help in deciding sensor system designs that balance cost and accuracy. Because of the constraint on the number of sensors, estimation results from the SHM system tend to be smaller than the true values. To overcome this problem, a compensation algorithm was discussed and presented. The usefulness of this compensation method was demonstrated for 40-story S and RC building models with nonlinear response.
The 2002 RPA Plot Summary database users manual
Patrick D. Miles; John S. Vissage; W. Brad Smith
2004-01-01
Describes the structure of the RPA 2002 Plot Summary database and provides information on generating estimates of forest statistics from these data. The RPA 2002 Plot Summary database provides a consistent framework for storing forest inventory data across all ownerships across the entire United States. The data represents the best available data as of October 2001....
Modeling regional-scale wildland fire emissions with the wildland fire emissions information system
Nancy H.F. French; Donald McKenzie; Tyler Erickson; Benjamin Koziol; Michael Billmire; K. Endsley; Naomi K.Y. Scheinerman; Liza Jenkins; Mary E. Miller; Roger Ottmar; Susan Prichard
2014-01-01
As carbon modeling tools become more comprehensive, spatial data are needed to improve quantitative maps of carbon emissions from fire. The Wildland Fire Emissions Information System (WFEIS) provides mapped estimates of carbon emissions from historical forest fires in the United States through a web browser. WFEIS improves access to data and provides a consistent...
Turbulent stresses in the surf-zone: Which way is up?
Haines, John W.; Gelfenbaum, Guy; Edge, B.L
1997-01-01
Velocity observations from a vertical stack of three-component Acoustic Doppler Velocimeters (ADVs) within the energetic surf-zone are presented. Rapid temporal sampling and small sampling volume provide observations suitable for investigation of the role of turbulent fluctuations in surf-zone dynamics. While sensor performance was good, failure to recover reliable measures of tilt from the vertical compromised the data value. We present some cursory observations supporting the ADV performance and examine the sensitivity of stress estimates to uncertainty in the sensor orientation. It is well known that turbulent stress estimates are highly sensitive to orientation relative to vertical when wave motions are dominant. The analyses presented examine the potential to use observed flow-field characteristics to constrain sensor orientation. Results show that such an approach may provide a consistent orientation to a fraction of a degree, but the inherent sensitivity of stress estimates requires a still more restrictive constraint. Regardless, the observations indicate the degree to which stress estimates are dependent on orientation, and provide some indication of the temporal variability in time-averaged stress estimates.
The MSFC Solar Activity Future Estimation (MSAFE) Model
NASA Technical Reports Server (NTRS)
Suggs, Ron
2017-01-01
The Natural Environments Branch of the Engineering Directorate at Marshall Space Flight Center (MSFC) provides solar cycle forecasts for NASA space flight programs and the aerospace community. These forecasts provide future statistical estimates of sunspot number, solar radio 10.7 cm flux (F10.7), and the geomagnetic planetary index, Ap, for input to various space environment models. For example, many thermosphere density computer models used in spacecraft operations, orbital lifetime analysis, and the planning of future spacecraft missions require as inputs the F10.7 and Ap. The solar forecast is updated each month by executing MSAFE using historical and the latest month's observed solar indices to provide estimates for the balance of the current solar cycle. The forecasted solar indices represent the 13-month smoothed values consisting of a best estimate value stated as a 50 percentile value along with approximate +/- 2 sigma values stated as 95 and 5 percentile statistical values. This presentation will give an overview of the MSAFE model and the forecast for the current solar cycle.
State energy data report 1996: Consumption estimates
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
The State Energy Data Report (SEDR) provides annual time series estimates of State-level energy consumption by major economic sectors. The estimates are developed in the Combined State Energy Data System (CSEDS), which is maintained and operated by the Energy Information Administration (EIA). The goal in maintaining CSEDS is to create historical time series of energy consumption by State that are defined as consistently as possible over time and across sectors. CSEDS exists for two principal reasons: (1) to provide State energy consumption estimates to Members of Congress, Federal and State agencies, and the general public and (2) to provide the historical series necessary for EIA's energy models. To the degree possible, energy consumption has been assigned to five sectors: residential, commercial, industrial, transportation, and electric utility sectors. Fuels covered are coal, natural gas, petroleum, nuclear electric power, hydroelectric power, biomass, and other, defined as electric power generated from geothermal, wind, photovoltaic, and solar thermal energy. 322 tabs.
NASA Astrophysics Data System (ADS)
Neher, Christopher; Duffield, John; Patterson, David
2013-09-01
The National Park Service (NPS) currently manages a large and diverse system of park units nationwide which received an estimated 279 million recreational visits in 2011. This article uses park visitor data collected by the NPS Visitor Services Project to estimate a consistent set of count data travel cost models of park visitor willingness to pay (WTP). Models were estimated using 58 different park unit survey datasets. WTP estimates for these 58 park surveys were used within a meta-regression analysis model to predict average and total WTP for NPS recreational visitation system-wide. Estimated WTP per NPS visit in 2011 averaged $102 system-wide, and ranged across park units from $67 to $288. Total 2011 visitor WTP for the NPS system is estimated at $28.5 billion with a 95% confidence interval of $19.7-43.1 billion. The estimation of a meta-regression model using consistently collected data and identical specification of visitor WTP models greatly reduces problems common to meta-regression models, including sample selection bias, primary data heterogeneity, and heteroskedasticity, as well as some aspects of panel effects. The article provides the first estimate of total annual NPS visitor WTP within the literature directly based on NPS visitor survey data.
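In count-data travel cost models of this kind, a Poisson (or negative binomial) trip-demand regression yields consumer surplus per trip as -1 divided by the travel-cost coefficient. A sketch with synthetic data and statsmodels; the coefficients and covariates are illustrative, not those of the NPS models.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(9)
n = 2000
travel_cost = rng.uniform(10, 200, n)              # $ per trip (illustrative)
income = rng.normal(60, 15, n)                     # $1000s (illustrative)
lam = np.exp(1.5 - 0.012 * travel_cost + 0.005 * income)
trips = rng.poisson(lam)                           # annual visits

X = sm.add_constant(np.column_stack([travel_cost, income]))
fit = sm.GLM(trips, X, family=sm.families.Poisson()).fit()
beta_tc = fit.params[1]

# Consumer surplus (WTP) per trip = -1 / beta_travel_cost; true value ~ $83.3
print("WTP per trip: $", round(-1 / beta_tc, 2))
```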
CMB bispectrum, trispectrum, non-Gaussianity, and the Cramer-Rao bound
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kamionkowski, Marc; Smith, Tristan L.; Heavens, Alan
Minimum-variance estimators for the parameter f_NL that quantifies local-model non-Gaussianity can be constructed from the cosmic microwave background (CMB) bispectrum (three-point function) and also from the trispectrum (four-point function). Some have suggested that a comparison between the estimates for the values of f_NL from the bispectrum and trispectrum allows a consistency test for the model. But others argue that the saturation of the Cramer-Rao bound (which gives a lower limit to the variance of an estimator) by the bispectrum estimator implies that no further information on f_NL can be obtained from the trispectrum. Here, we elaborate the nature of the correlation between the bispectrum and trispectrum estimators for f_NL. We show that the two estimators become statistically independent in the limit of a large number of CMB pixels, and thus that the trispectrum estimator does indeed provide additional information on f_NL beyond that obtained from the bispectrum. We explain how this conclusion is consistent with the Cramer-Rao bound. Our discussion of the Cramer-Rao bound may be of interest to those doing Fisher-matrix parameter-estimation forecasts or data analysis in other areas of physics as well.
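For reference, the bound at issue is the standard single-parameter information inequality (generic notation, not taken from the paper):

```latex
\operatorname{Var}\!\left(\hat{f}_{\mathrm{NL}}\right) \;\ge\; \frac{1}{F},
\qquad
F \;=\; -\,\mathbb{E}\!\left[\frac{\partial^{2}\ln\mathcal{L}}{\partial f_{\mathrm{NL}}^{2}}\right]
```

An unbiased estimator that saturates the bound attains variance 1/F; the paper's point is that such saturation by the bispectrum estimator does not by itself preclude statistically independent information in a higher-order statistic such as the trispectrum.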
Statistical analysis of the determinations of the Sun's Galactocentric distance
NASA Astrophysics Data System (ADS)
Malkin, Zinovy
2013-02-01
Based on several tens of R0 measurements made during the past two decades, several studies have been performed to derive the best estimate of R0. Some used just simple averaging to derive a result, whereas others provided comprehensive analyses of possible errors in published results. In either case, detailed statistical analyses of the data used were not performed. However, a computation of the best estimates of the Galactic rotation constants is not only an astronomical but also a metrological task. Here we perform an analysis of 53 R0 measurements (published in the past 20 years) to assess the consistency of the data. Our analysis shows that they are internally consistent. It is also shown that any trend in the R0 estimates from the last 20 years is statistically negligible, which renders the presence of a bandwagon effect doubtful. On the other hand, the formal errors in the published R0 estimates improve significantly with time.
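A minimal sketch of the two basic checks involved, inverse-variance averaging and a trend test; the values below are invented placeholders, not the 53 published measurements analyzed in the paper:

```python
import numpy as np

# Hypothetical published R0 values (kpc), 1-sigma errors, and years
r0    = np.array([8.0, 8.4, 7.9, 8.3, 8.2])
sigma = np.array([0.4, 0.6, 0.3, 0.4, 0.35])
year  = np.array([1995.0, 1999.0, 2004.0, 2008.0, 2012.0])

# Inverse-variance weighted mean and its formal error
w = 1.0 / sigma**2
r0_mean = np.sum(w * r0) / np.sum(w)
r0_err  = np.sqrt(1.0 / np.sum(w))

# Weighted linear fit of R0 against publication year; a slope consistent
# with zero argues against a bandwagon effect (np.polyfit weights are 1/sigma)
slope, intercept = np.polyfit(year, r0, deg=1, w=1.0 / sigma)

print(f"R0 = {r0_mean:.2f} +/- {r0_err:.2f} kpc, trend = {slope:+.4f} kpc/yr")
```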
Contraceptive failure in the United States
Trussell, James
2013-01-01
This review provides an update of previous estimates of first-year probabilities of contraceptive failure for all methods of contraception available in the United States. Estimates are provided of probabilities of failure during typical use (which includes both incorrect and inconsistent use) and during perfect use (correct and consistent use). The difference between these two probabilities reveals the consequences of imperfect use; it depends both on how unforgiving of imperfect use a method is and on how hard it is to use that method perfectly. These revisions reflect new research on contraceptive failure both during perfect use and during typical use. PMID:21477680
Popescu, Viorel D; Valpine, Perry; Sweitzer, Rick A
2014-04-01
Wildlife data gathered by different monitoring techniques are often combined to estimate animal density. However, methods to check whether different types of data provide consistent information (i.e., can information from one data type be used to predict responses in the other?) before combining them are lacking. We used generalized linear models and generalized linear mixed-effects models to relate camera trap probabilities for marked animals to independent space use from telemetry relocations using 2 years of data for fishers (Pekania pennanti) as a case study. We evaluated (1) camera trap efficacy by estimating how camera detection probabilities are related to nearby telemetry relocations and (2) whether home range utilization density estimated from telemetry data adequately predicts camera detection probabilities, which would indicate consistency of the two data types. The number of telemetry relocations within 250 and 500 m from camera traps predicted detection probability well. For the same number of relocations, females were more likely to be detected during the first year. During the second year, all fishers were more likely to be detected during the fall/winter season. Models predicting camera detection probability and photo counts solely from telemetry utilization density had the best or nearly best Akaike Information Criterion (AIC), suggesting that telemetry and camera traps provide consistent information on space use. Given the same utilization density, males were more likely to be photo-captured due to larger home ranges and higher movement rates. Although methods that combine data types (spatially explicit capture-recapture) make simple assumptions about home range shapes, it is reasonable to conclude that in our case, camera trap data do reflect space use in a manner consistent with telemetry data. However, differences between the 2 years of data suggest that camera efficacy is not fully consistent across ecological conditions and make the case for integrating other sources of space-use data.
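A rough sketch of step (1), relating camera detection to nearby telemetry relocations with a binomial GLM; the data are invented placeholders, and the study's actual models also included sex, season and random effects:

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical data: per camera-week, the number of telemetry relocations
# within 500 m of the camera and whether the animal was photo-detected
relocs   = np.array([0, 2, 5, 1, 8, 3, 0, 6, 4, 7, 2, 9], dtype=float)
detected = np.array([0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1])

# Binomial GLM: logit(P(detection)) = b0 + b1 * relocations
X = sm.add_constant(relocs)
fit = sm.GLM(detected, X, family=sm.families.Binomial()).fit()
print(fit.params)  # a positive slope means nearby space use predicts detection
```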
Recovering area-to-mass ratio of resident space objects through data mining
NASA Astrophysics Data System (ADS)
Peng, Hao; Bai, Xiaoli
2018-01-01
The area-to-mass ratio (AMR) of a resident space object (RSO) is an important parameter for improved space situational awareness because of its effect on non-conservative forces, including atmospheric drag and solar radiation pressure. However, AMR information is often not provided in most space catalogs. The present paper investigates recovering the AMR information from the consistency error, which refers to the difference between the orbit predicted from an earlier estimate and the orbit estimated at the current epoch. A data mining technique, specifically the random forest (RF) method, is used to discover the relationship between the consistency error and the AMR. Using a simulation-based space catalog environment as the testbed, this paper demonstrates that the classification RF model can determine the RSO's AMR category and the regression RF model can generate continuous AMR values, both with good accuracy. Furthermore, the paper reveals that by recording additional information besides the consistency error, the RF model can estimate the AMR with even higher accuracy.
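A minimal sketch of the regression variant with scikit-learn; the consistency-error features and the AMR relationship below are synthetic stand-ins, not the paper's simulated catalog:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))            # hypothetical consistency-error features
amr = (0.01 + 0.02 * np.abs(X[:, 0])
       + 0.005 * rng.normal(size=500))   # hypothetical AMR (m^2/kg) with noise

rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(X[:400], amr[:400])               # train on 400 objects
amr_hat = rf.predict(X[400:])            # continuous AMR estimates for the rest
```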
Turner, Alan H.; Pritchard, Adam C.; Matzke, Nicholas J.
2017-01-01
Estimating divergence times on phylogenies is critical in paleontological and neontological studies. Chronostratigraphically-constrained fossils are the only direct evidence of absolute timing of species divergence. Strict temporal calibration of fossil-only phylogenies provides minimum divergence estimates, and various methods have been proposed to estimate divergences beyond these minimum values. We explore the utility of simultaneous estimation of tree topology and divergence times using BEAST tip-dating on datasets consisting only of fossils by using relaxed morphological clocks and birth-death tree priors that include serial sampling (BDSS) at a constant rate through time. We compare BEAST results to those from the traditional maximum parsimony (MP) and undated Bayesian inference (BI) methods. Three overlapping datasets were used that span 250 million years of archosauromorph evolution leading to crocodylians. The first dataset focuses on early Sauria (31 taxa, 240 chars.), the second on early Archosauria (76 taxa, 400 chars.) and the third on Crocodyliformes (101 taxa, 340 chars.). For each dataset three time-calibrated trees (timetrees) were calculated: a minimum-age timetree with node ages based on earliest occurrences in the fossil record; a ‘smoothed’ timetree using a range of time added to the root that is then averaged over zero-length internodes; and a tip-dated timetree. Comparisons within datasets show that the smoothed and tip-dated timetrees provide similar estimates. Only near the root node do BEAST estimates fall outside the smoothed timetree range. The BEAST model is not able to overcome limited sampling to correctly estimate divergences considerably older than sampled fossil occurrence dates. Conversely, the smoothed timetrees consistently provide node-ages far older than the strict dates or BEAST estimates for morphologically conservative sister-taxa when they sit on long ghost lineages. In this latter case, the relaxed-clock model appears to be correctly moderating the node-age estimate based on the limited morphological divergence. Topologies are generally similar across analyses, but BEAST trees for crocodyliforms differ when clades are deeply nested but contain very old taxa. It appears that the constant-rate sampling assumption of the BDSS tree prior influences topology inference by disfavoring long, unsampled branches. PMID:28187191
Estimating post-marketing exposure to pharmaceutical products using ex-factory distribution data.
Telfair, Tamara; Mohan, Aparna K; Shahani, Shalini; Klincewicz, Stephen; Atsma, Willem Jan; Thomas, Adrian; Fife, Daniel
2006-10-01
The pharmaceutical industry has an obligation to identify adverse reactions to drug products during all phases of drug development, including the post-marketing period. Estimates of population exposure to pharmaceutical products are important to the post-marketing surveillance of drugs, and provide a context for assessing the various risks and benefits, including drug safety, associated with drug treatment. This paper describes a systematic approach to estimating post-marketing drug exposure using ex-factory shipment data to estimate the quantity of medication available, and dosage information (stratified by indication or other factors as appropriate) to convert the quantity of medication to person-time of exposure. Unlike the non-standardized methods often used to estimate exposure, this approach provides estimates whose calculations are explicit, documented, and consistent across products and over time. The methods can readily be carried out by an individual or small group specializing in this function, and lend themselves to automation. The present estimation approach is practical and relatively uncomplicated to implement. We believe it is a useful innovation. Copyright 2006 John Wiley & Sons, Ltd.
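A toy worked example of the conversion; every number here is hypothetical:

```python
# Ex-factory shipments -> person-time of exposure (illustrative only)
tablets_shipped = 1_200_000   # units distributed in the period
mg_per_tablet   = 50
daily_dose_mg   = 100         # assumed typical dose for the main indication

person_days  = tablets_shipped * mg_per_tablet / daily_dose_mg
person_years = person_days / 365.25
print(f"{person_years:,.0f} person-years of exposure")   # ~1,643
```

Stratifying by indication simply repeats this calculation with indication-specific doses and shipment shares, then sums the resulting person-time.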
Parameter estimation accuracies of Galactic binaries with eLISA
NASA Astrophysics Data System (ADS)
Błaut, Arkadiusz
2018-09-01
We study the parameter estimation accuracy of nearly monochromatic sources of gravitational waves with future eLISA-like detectors. eLISA will be capable of observing millions of such signals generated by orbiting compact binaries consisting of white dwarfs, neutron stars or black holes, and of resolving and estimating the parameters of several thousand of them, providing crucial information regarding their orbital dynamics, formation rates and evolutionary paths. Using the Fisher matrix analysis we compare the accuracies of the estimated parameters for different mission designs defined by the GOAT advisory team, which was established to assess the scientific capabilities and technological issues of the eLISA-like missions.
Updating CMAQ secondary organic aerosol properties relevant for aerosol water interactions
Properties of secondary organic aerosol (SOA) compounds in CMAQ are updated with state-of-the-science estimates from structure activity relationships to provide consistency among volatility, molecular weight, degree of oxygenation, and solubility/hygroscopicity. These updated pro...
Bernhardt, Paul W; Wang, Huixia Judy; Zhang, Daowen
2014-01-01
Models for survival data generally assume that covariates are fully observed. However, in medical studies it is not uncommon for biomarkers to be censored at known detection limits. A computationally-efficient multiple imputation procedure for modeling survival data with covariates subject to detection limits is proposed. This procedure is developed in the context of an accelerated failure time model with a flexible seminonparametric error distribution. The consistency and asymptotic normality of the multiple imputation estimator are established and a consistent variance estimator is provided. An iterative version of the proposed multiple imputation algorithm that approximates the EM algorithm for maximum likelihood is also suggested. Simulation studies demonstrate that the proposed multiple imputation methods work well while alternative methods lead to estimates that are either biased or more variable. The proposed methods are applied to analyze the dataset from a recently-conducted GenIMS study.
Diurnal and Reproductive Stage-Dependent Variation of Parental Behaviour in Captive Zebra Finches
Morvai, Boglárka; Nanuru, Sabine; Mul, Douwe; Kusche, Nina; Milne, Gregory; Székely, Tamás; Komdeur, Jan; Miklósi, Ádám
2016-01-01
Parental care plays a key role in ontogeny, life-history trade-offs, sexual selection and intra-familial conflict. Studies aiming to understand the causes and consequences of variation in parental effort need to quantify parental behaviour accurately. The applied methods are, however, diverse even for a given species and type of parental effort, and are rarely validated for accuracy. Here we focus on variability of parental behaviour from a methodological perspective to investigate the effect of different sampling schemes on various estimates of parental effort. We used nest box cameras in a captive breeding population of zebra finches, Taeniopygia guttata, a widely used model system of sexual selection, intra-familial dynamics and parental care. We investigated diurnal and reproductive stage-dependent variation in parental effort (including incubation, brooding, nest attendance and number of feedings) based on 12h and 3h continuous video-recordings taken at various reproductive stages. We then investigated whether shorter (1h) sampling periods provided estimates of overall parental effort and division of labour comparable to those of longer (3h) sampling periods. Our study confirmed female-biased division of labour during incubation, and showed that the difference between female and male effort diminishes with advancing reproductive stage. We found individually consistent parental behaviours within given days of incubation and nestling provisioning. Furthermore, parental behaviour was consistent over the different stages of incubation; however, only female brooding was consistent over nestling provisioning. Parental effort during incubation did not predict parental effort during nestling provisioning. Our analyses revealed that 1h sampling may be influenced heavily by stochastic and diurnal variation. We suggest that a single longer sampling period (3h) may provide a consistent and accurate estimate of overall parental effort during incubation in zebra finches. Due to the large within-individual variation, we suggest that repeated longer sampling over the reproductive stage may be necessary for accurate estimates of parental effort post-hatching. PMID:27973549
Test-Retest Analyses of the Test of English as a Foreign Language. TOEFL Research Reports Report 45.
ERIC Educational Resources Information Center
Henning, Grant
This study provides information about the total and component scores of the Test of English as a Foreign Language (TOEFL). First, the study provides comparative global and component estimates of test-retest, alternate-form, and internal-consistency reliability, controlling for sources of measurement error inherent in the examinees and the testing…
Valuing improved wetland quality using choice modeling
NASA Astrophysics Data System (ADS)
Morrison, Mark; Bennett, Jeff; Blamey, Russell
1999-09-01
The main stated preference technique used for estimating environmental values is the contingent valuation method. In this paper the results of an application of an alternative technique, choice modeling, are reported. Choice modeling has been developed in marketing and transport applications but has been used in only a handful of environmental applications, most of which have focused on use values. The case study presented here involves the estimation of the nonuse environmental values provided by the Macquarie Marshes, a major wetland in New South Wales, Australia. Estimates of the nonuse value the community places on preventing job losses are also presented. The reported models are robust, having high explanatory power and variables that are statistically significant and consistent with expectations. These results provide support for the hypothesis that choice modeling can be used to estimate nonuse values for both environmental and social consequences of resource use changes.
Paoli, Carly J.; Hays, Ron D.; Taylor-Stokes, Gavin; Piercy, James; Gitlin, Matthew
2014-01-01
Background and objectives The US Centers for Medicare and Medicaid Services (CMS) End Stage Renal Disease Prospective Payment System and Quality Incentive Program requires that dialysis centers meet predefined criteria for quality of patient care to ensure future funding. The CMS selected the Consumer Assessment of Healthcare Providers and Systems In-Center Hemodialysis (CAHPS-ICH) survey for the assessment of patient experience of care. This analysis evaluated the psychometric properties of the CAHPS-ICH survey in a sample of hemodialysis patients. Design, setting, participants, & measurements Data were drawn from the Adelphi CKD Disease Specific Program (a retrospective, cross-sectional survey of nephrologists and patients). Selected United States–based nephrologists treating patients receiving hemodialysis completed patient record forms and provided information on their dialysis center. Patients (n=404) completed the CAHPS-ICH survey (comprising 58 questions) providing six scores for the assessment of patient experience of care. CAHPS-ICH item-scale convergence, discrimination, and reliability were evaluated for multi-item scales. Floor and ceiling effects were estimated for all six scores. Patient (demographics, dialysis history, vascular access method) and facility characteristics (size, ratio of patients-to-physicians, nurses, and technicians) associated with the CAHPS-ICH scores were also evaluated. Results Item-scale correlations and internal consistency reliability estimates provided support for the nephrologists’ communication (range, 0.16–0.71; α=0.81) and quality of care (range, 0.16–0.76; α=0.90) composites. However, the patient information composite had low internal consistency reliability (α=0.55). Provider-to-patient ratios (range, 2.37 for facilities with >36 patients per physician to 2.8 for those with <8 patients per physician) and time spent in the waiting room (3.44 for >15 minutes of waiting time to 3.75 for 5 to <10 minutes) were characteristics most consistently related to patients’ perceptions of dialysis care. Conclusions CAHPS-ICH is a potentially valuable and informative tool for the evaluation of patients’ experiences with dialysis care. Additional studies are needed to estimate clinically meaningful differences between care providers. PMID:24832092
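For reference, the internal-consistency reliability reported above (coefficient alpha) can be computed as below; a generic sketch, not the study's code:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Coefficient alpha for an (n_respondents, k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_var_sum / total_var)
```

Values such as the 0.81 and 0.90 reported for the two composites indicate that the items within each scale covary strongly relative to the total score variance, whereas the 0.55 for the patient information composite falls below conventional reliability thresholds.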
Koltun, G.F.
2001-01-01
This report provides data and methods to aid in the hydrologic design or evaluation of impounding reservoirs and side-channel reservoirs used for water supply in Ohio. Data from 117 streamflow-gaging stations throughout Ohio were analyzed by means of nonsequential-mass-curve-analysis techniques to develop relations between storage requirements, water demand, duration, and frequency. Information also is provided on minimum runoff for selected durations and frequencies. Systematic record lengths for the streamflow-gaging stations ranged from about 10 to 75 years; however, in many cases, additional streamflow record was synthesized. For impounding reservoirs, families of curves are provided to facilitate the estimation of storage requirements as a function of demand and the ratio of the 7-day, 2-year low flow to the mean annual flow. Information is provided with which to evaluate separately the effects of evaporation on storage requirements. Comparisons of storage requirements for impounding reservoirs determined by nonsequential-mass-curve-analysis techniques with storage requirements determined by annual-mass-curve techniques that employ probability routing to account for carryover-storage requirements indicate that large differences in computed required storages can result from the two methods, particularly for conditions where demand cannot be met from within-year storage. For side-channel reservoirs, tables of demand-storage-frequency information are provided for a primary pump relation consisting of one variable-speed pump with a pumping capacity that ranges from 0.1 to 20 times demand. Tables of adjustment ratios are provided to facilitate determination of storage requirements for 19 other pump sets consisting of assorted combinations of fixed-speed pumps or variable-speed pumps with aggregate pumping capacities smaller than or equal to the primary pump relation. The effects of evaporation on side-channel reservoir storage requirements are incorporated into the storage-requirement estimates. The effects of an instream-flow requirement equal to the 80-percent-duration flow are also incorporated into the storage-requirement estimates.
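As a simpler illustration of how a storage requirement follows from an inflow record and a demand, the classical sequent-peak calculation is sketched below; this is a related mass-curve method, not the report's nonsequential technique, and it ignores evaporation:

```python
import numpy as np

def required_storage(inflow: np.ndarray, demand: float) -> float:
    """Sequent-peak storage: the largest cumulative deficit incurred while
    meeting a constant demand from the inflow sequence (same volume units)."""
    deficit = 0.0
    worst = 0.0
    for q in inflow:
        deficit = max(0.0, deficit + demand - q)   # draw down or refill
        worst = max(worst, deficit)
    return worst

# Hypothetical monthly inflows and demand (million gallons)
print(required_storage(np.array([9.0, 2.0, 1.0, 3.0, 12.0, 8.0]), demand=5.0))
```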
Quintela-del-Río, Alejandro; Francisco-Fernández, Mario
2011-02-01
The study of extreme values and prediction of ozone data is an important topic of research when dealing with environmental problems. Classical extreme value theory is usually used in air-pollution studies. It consists of fitting a parametric generalised extreme value (GEV) distribution to a data set of extreme values, and using the estimated distribution to compute return levels and other quantities of interest. Here, we propose to estimate these values using nonparametric functional data methods. Functional data analysis is a relatively new statistical methodology that generally deals with data consisting of curves or multi-dimensional variables. In this paper, we use this technique, jointly with nonparametric curve estimation, to provide alternatives to the usual parametric statistical tools. The nonparametric estimators are applied to real samples of maximum ozone values obtained from several monitoring stations belonging to the Automatic Urban and Rural Network (AURN) in the UK. The results show that the nonparametric estimators work satisfactorily, outperforming the behaviour of the classical parametric estimators. Functional data analysis is also used to predict stratospheric ozone concentrations. We show an application using the data set of mean monthly ozone concentrations in Arosa, Switzerland, and the results are compared with those obtained by classical time series (ARIMA) analysis. Copyright © 2010 Elsevier Ltd. All rights reserved.
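A minimal sketch of the parametric baseline the paper compares against, a GEV fit with a return level, using SciPy; the data are synthetic, and note that SciPy's shape parameter is c = -ξ relative to the usual GEV convention:

```python
import numpy as np
from scipy.stats import genextreme

# Synthetic annual-maximum ozone concentrations (illustrative units)
annual_max = genextreme.rvs(c=-0.1, loc=90.0, scale=10.0, size=40,
                            random_state=1)

# Maximum likelihood fit of the GEV shape, location and scale
c_hat, loc_hat, scale_hat = genextreme.fit(annual_max)

# T-year return level: the quantile exceeded on average once per T years
T = 10
return_level = genextreme.ppf(1.0 - 1.0 / T, c_hat, loc=loc_hat,
                              scale=scale_hat)
```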
Tang, Cuong Q; Humphreys, Aelys M; Fontaneto, Diego; Barraclough, Timothy G; Paradis, Emmanuel
2014-01-01
Coalescent-based species delimitation methods combine population genetic and phylogenetic theory to provide an objective means for delineating evolutionarily significant units of diversity. The generalised mixed Yule coalescent (GMYC) and the Poisson tree process (PTP) are methods that use ultrametric (GMYC or PTP) or non-ultrametric (PTP) gene trees as input, intended for use mostly with single-locus data such as DNA barcodes. Here, we assess how robust the GMYC and PTP are to different phylogenetic reconstruction and branch smoothing methods. We reconstruct over 400 ultrametric trees using up to 30 different combinations of phylogenetic and smoothing methods and perform over 2000 separate species delimitation analyses across 16 empirical data sets. We then assess how variable diversity estimates are, in terms of richness and identity, with respect to species delimitation, phylogenetic and smoothing methods. The PTP method generally generates diversity estimates that are more robust to different phylogenetic methods. The GMYC is more sensitive, but provides consistent estimates for BEAST trees. The lower consistency of GMYC estimates is likely a result of differences among gene trees introduced by the smoothing step. Unresolved nodes (real anomalies or methodological artefacts) affect both GMYC and PTP estimates, but have a greater effect on GMYC estimates. Branch smoothing is a difficult step and perhaps an underappreciated source of bias that may be widespread among studies of diversity and diversification. Nevertheless, careful choice of phylogenetic method does produce equivalent PTP and GMYC diversity estimates. We recommend simultaneous use of the PTP model with any model-based gene tree (e.g. RAxML) and GMYC approaches with BEAST trees for obtaining species hypotheses. PMID:25821577
NASA Astrophysics Data System (ADS)
Shock, Everett L.; Koretsky, Carla M.
1995-04-01
Regression of standard state equilibrium constants with the revised Helgeson-Kirkham-Flowers (HKF) equation of state allows evaluation of standard partial molal entropies (S̄°) of aqueous metal-organic complexes involving monovalent organic acid ligands. These values of S̄° provide the basis for correlations that can be used, together with correlation algorithms among standard partial molal properties of aqueous complexes and equation-of-state parameters, to estimate thermodynamic properties, including equilibrium constants, for complexes between aqueous metals and several monovalent organic acid ligands at the elevated pressures and temperatures of many geochemical processes which involve aqueous solutions. Data, parameters, and estimates are given for 270 formate, propanoate, n-butanoate, n-pentanoate, glycolate, lactate, glycinate, and alanate complexes, and a consistent algorithm is provided for making other estimates. Standard partial molal entropies of association (ΔS̄°_r) for metal-monovalent organic acid ligand complexes fall into at least two groups dependent upon the type of functional groups present in the ligand. It is shown that isothermal correlations among equilibrium constants for complex formation are consistent with one another and with similar correlations for inorganic metal-ligand complexes. Additional correlations allow estimates of standard partial molal Gibbs free energies of association at 25°C and 1 bar, which can be used in cases where no experimentally derived values are available.
Travaglini, Davide; Fattorini, Lorenzo; Barbati, Anna; Bottalico, Francesca; Corona, Piermaria; Ferretti, Marco; Chirici, Gherardo
2013-04-01
A correct characterization of the status and trend of forest condition is essential to support reporting processes at national and international level. An international forest condition monitoring has been implemented in Europe since 1987 under the auspices of the International Co-operative Programme on Assessment and Monitoring of Air Pollution Effects on Forests (ICP Forests). The monitoring is based on harmonized methodologies, with individual countries being responsible for its implementation. Due to inconsistencies and problems in sampling design, however, the ICP Forests network is not able to produce reliable quantitative estimates of forest condition at European and sometimes at country level. This paper proposes (1) a set of requirements for status and change assessment and (2) a harmonized sampling strategy able to provide unbiased and consistent estimators of forest condition parameters and of their changes at both country and European level. Under the assumption that a common definition of forest holds among European countries, monitoring objectives, parameters of concern and accuracy indexes are stated. On the basis of fixed-area plot sampling performed independently in each country, an unbiased and consistent estimator of forest defoliation indexes is obtained at both country and European level, together with conservative estimators of their sampling variance and power in the detection of changes. The strategy adopts a probabilistic sampling scheme based on fixed-area plots selected by means of systematic or stratified schemes. Operative guidelines for its application are provided.
Maximum likelihood estimation of finite mixture model for economic data
NASA Astrophysics Data System (ADS)
Phoong, Seuk-Yen; Ismail, Mohd Tahir
2014-06-01
A finite mixture model is a mixture model with a finite number of components. These models provide a natural representation of heterogeneity across a finite number of latent classes, and are also known as latent class models or unsupervised learning models. Recently, maximum likelihood estimation of finite mixture models has drawn considerable attention from statisticians, mainly because maximum likelihood estimation is a powerful statistical method that provides consistent estimates as the sample size increases to infinity. The present paper therefore applies maximum likelihood estimation to fit a finite mixture model in order to explore the relationship between nonlinear economic data. A two-component normal mixture model is fitted by maximum likelihood estimation in order to investigate the relationship between stock market price and rubber price for the sampled countries. The results show a negative relationship between rubber price and stock market price for Malaysia, Thailand, the Philippines and Indonesia.
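A minimal sketch of fitting a two-component normal mixture by maximum likelihood (via the EM algorithm, as scikit-learn implements it); the data are synthetic stand-ins for the paper's price series:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
# Synthetic one-dimensional data drawn from two normal components
x = np.concatenate([rng.normal(0.0, 1.0, 300),
                    rng.normal(4.0, 1.5, 200)]).reshape(-1, 1)

gm = GaussianMixture(n_components=2, random_state=0).fit(x)
print("weights:", gm.weights_)
print("means:  ", gm.means_.ravel())
print("log-lik:", gm.score(x) * len(x))   # maximized log-likelihood
```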
Efficient Measurement of Quantum Gate Error by Interleaved Randomized Benchmarking
NASA Astrophysics Data System (ADS)
Magesan, Easwar; Gambetta, Jay M.; Johnson, B. R.; Ryan, Colm A.; Chow, Jerry M.; Merkel, Seth T.; da Silva, Marcus P.; Keefe, George A.; Rothwell, Mary B.; Ohki, Thomas A.; Ketchen, Mark B.; Steffen, M.
2012-08-01
We describe a scalable experimental protocol for estimating the average error of individual quantum computational gates. This protocol consists of interleaving random Clifford gates between the gate of interest and provides an estimate as well as theoretical bounds for the average error of the gate under test, so long as the average noise variation over all Clifford gates is small. This technique takes into account both state preparation and measurement errors and is scalable in the number of qubits. We apply this protocol to a superconducting qubit system and find a bounded average error of 0.003 [0,0.016] for the single-qubit gates Xπ/2 and Yπ/2. These bounded values provide better estimates of the average error than those extracted via quantum process tomography.
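For context, the interleaved-RB point estimate is commonly written in terms of the depolarizing parameters fitted to the reference and interleaved decay curves; a standard formulation (not quoted from this abstract):

```latex
\hat{r}_{\mathcal{C}} \;=\; \frac{(d-1)\left(1 - p_{\bar{\mathcal{C}}}/p\right)}{d},
\qquad d = 2^{n}
```

Here p and p_C̄ are the fitted decay parameters of the Clifford-only and interleaved sequences and n is the number of qubits; intervals such as the quoted [0, 0.016] come from theoretical bounds on the systematic error of this estimate.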
Asymptotic Normality of the Maximum Pseudolikelihood Estimator for Fully Visible Boltzmann Machines.
Nguyen, Hien D; Wood, Ian A
2016-04-01
Boltzmann machines (BMs) are a class of binary neural networks for which there have been numerous proposed methods of estimation. Recently, it has been shown that in the fully visible case of the BM, the method of maximum pseudolikelihood estimation (MPLE) results in parameter estimates, which are consistent in the probabilistic sense. In this brief, we investigate the properties of MPLE for the fully visible BMs further, and prove that MPLE also yields an asymptotically normal parameter estimator. These results can be used to construct confidence intervals and to test statistical hypotheses. These constructions provide a closed-form alternative to the current methods that require Monte Carlo simulation or resampling. We support our theoretical results by showing that the estimator behaves as expected in simulation studies.
NASA Technical Reports Server (NTRS)
Morris, A. Terry
1999-01-01
This paper examines various sources of error in MIT's improved top oil temperature rise over ambient temperature model and estimation process. The sources of error are the current parameter estimation technique, quantization noise, and post-processing of the transformer data. Results from this paper will show that an output error parameter estimation technique should be selected to replace the current least squares estimation technique. The output error technique obtained accurate predictions of transformer behavior, revealed the best error covariance, obtained consistent parameter estimates, and provided for valid and sensible parameters. This paper will also show that the output error technique should be used to minimize errors attributed to post-processing (decimation) of the transformer data. Models used in this paper are validated using data from a large transformer in service.
A preliminary evaluation of an F100 engine parameter estimation process using flight data
NASA Technical Reports Server (NTRS)
Maine, Trindel A.; Gilyard, Glenn B.; Lambert, Heather H.
1990-01-01
The parameter estimation algorithm developed for the F100 engine is described. The algorithm is a two-step process. The first step consists of a Kalman filter estimation of five deterioration parameters, which model the off-nominal behavior of the engine during flight. The second step is based on a simplified steady-state model of the compact engine model (CEM). In this step, the control vector in the CEM is augmented by the deterioration parameters estimated in the first step. The results of an evaluation made using flight data from the F-15 aircraft are presented, indicating that the algorithm can provide reasonable estimates of engine variables for an advanced propulsion control law development.
Jha, Abhinav K.; Mena, Esther; Caffo, Brian; Ashrafinia, Saeed; Rahmim, Arman; Frey, Eric; Subramaniam, Rathan M.
2017-01-01
Recently, a class of no-gold-standard (NGS) techniques has been proposed to evaluate quantitative imaging methods using patient data. These techniques provide figures of merit (FoMs) quantifying the precision of the estimated quantitative value without requiring repeated measurements and without requiring a gold standard. However, applying these techniques to patient data presents several practical difficulties, including assessing the underlying assumptions, accounting for patient-sampling-related uncertainty, and assessing the reliability of the estimated FoMs. To address these issues, we propose statistical tests that provide confidence in the underlying assumptions and in the reliability of the estimated FoMs. Furthermore, the NGS technique is integrated within a bootstrap-based methodology to account for patient-sampling-related uncertainty. The developed NGS framework was applied to evaluate four methods for segmenting lesions from 18F-fluoro-2-deoxyglucose positron emission tomography images of patients with head-and-neck cancer on the task of precisely measuring the metabolic tumor volume. The NGS technique consistently predicted the same segmentation method as the most precise method. The proposed framework provided confidence in these results, even when gold-standard data were not available. The bootstrap-based methodology indicated improved performance of the NGS technique with larger numbers of patient studies, as was expected, and yielded consistent results as long as data from more than 80 lesions were available for the analysis. PMID:28331883
GGFC Special Bureau for Loading: current status and plans
NASA Astrophysics Data System (ADS)
van Dam, T.; Plag, H.-P.; Francis, O.; Gegout, P.
The Earth's surface is perpetually being displaced by temporally varying atmospheric, oceanic and continental water mass surface loads. These non-geodynamic signals are of such substantial magnitude that they contribute significantly to the scatter in geodetic observations of crustal motion. In February 2002, the International Earth Rotation Service (IERS) established a Special Bureau for Loading (SBL) whose primary charge is to provide consistent and valid estimates of surface mass loading effects to the IERS community for the purpose of correcting geodetic time series. Here we outline the primary principles involved in modelling the surface displacements and gravity changes induced by surface mass loading, including the basic theory, the Earth model and the surface load data. We then identify a list of operational issues, including product validation, that need to be addressed by the SBL before products can be provided to the community. Finally, we outline areas for future research to further improve the loading estimates, and we conclude by formulating a recommendation on the best procedure for including loading corrections in geodetic data. Success of the SBL will depend on our ability to efficiently provide consistent and reliable estimates of surface mass loading effects. It is imperative that we work closely with the existing Global Geophysical Fluids Center (GGFC) Special Bureaus and with the community as much as possible to verify the products.
Multiple shooting algorithms for jump-discontinuous problems in optimal control and estimation
NASA Technical Reports Server (NTRS)
Mook, D. J.; Lew, Jiann-Shiun
1991-01-01
Multiple shooting algorithms are developed for jump-discontinuous two-point boundary value problems arising in optimal control and optimal estimation. Examples illustrating the origin of such problems are given to motivate the development of the solution algorithms. The algorithms convert the necessary conditions, consisting of differential equations and transversality conditions, into algebraic equations. The solution of the algebraic equations provides exact solutions for linear problems. The existence and uniqueness of the solution are proved.
The MusIC method: a fast and quasi-optimal solution to the muscle forces estimation problem.
Muller, A; Pontonnier, C; Dumont, G
2018-02-01
This paper presents a fast and quasi-optimal method for muscle force estimation: the MusIC method. It consists of interpolating a first estimate from a database generated offline by solving a classical optimization problem, and then correcting that estimate to respect the motion dynamics. Three different cost functions - two polynomial criteria and a min/max criterion - were tested on a planar musculoskeletal model. The MusIC method provides a computation frequency approximately 10 times higher than classical optimization, with a relative mean error of 4% on cost function evaluation.
Reitz, Meredith; Sanford, Ward E.; Senay, Gabriel; Cazenas, J.
2017-01-01
This study presents new data-driven, annual estimates of the division of precipitation into the recharge, quick-flow runoff, and evapotranspiration (ET) water budget components for 2000-2013 for the contiguous United States (CONUS). The algorithms used to produce these maps ensure water budget consistency over this broad spatial scale, with contributions from precipitation influx attributed to each component at 800 m resolution. The quick-flow runoff estimates for the contribution to the rapidly varying portion of the hydrograph are produced using data from 1,434 gaged watersheds, and depend on precipitation, soil saturated hydraulic conductivity, and surficial geology type. Evapotranspiration estimates are produced from a regression using water balance data from 679 gaged watersheds and depend on land cover, temperature, and precipitation. The quick-flow and ET estimates are combined to calculate recharge as the remainder of precipitation. The ET and recharge estimates are checked against independent field data, and the results show good agreement. Comparisons of recharge estimates with groundwater extraction data show that in 15% of the country, groundwater is being extracted at rates higher than the local recharge. These maps of the internally consistent water budget components of recharge, quick-flow runoff, and ET, being derived from and tested against data, are expected to provide reliable first-order estimates of these quantities across the CONUS, even where field measurements are sparse.
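The budget closure described above is the simple residual below; a sketch with hypothetical gridded values:

```python
import numpy as np

# Hypothetical annual grids in mm (stand-ins for the 800 m CONUS maps)
precip     = np.array([900.0, 1100.0, 750.0])
et         = np.array([600.0,  700.0, 500.0])
quick_flow = np.array([120.0,  180.0,  90.0])

# Recharge as the remainder of precipitation, floored at zero so the
# three components stay consistent with the precipitation influx
recharge = np.maximum(precip - et - quick_flow, 0.0)
```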
Aralis, Hilary; Brookmeyer, Ron
2017-01-01
Multistate models provide an important method for analyzing a wide range of life history processes including disease progression and patient recovery following medical intervention. Panel data consisting of the states occupied by an individual at a series of discrete time points are often used to estimate transition intensities of the underlying continuous-time process. When transition intensities depend on the time elapsed in the current state and back transitions between states are possible, this intermittent observation process presents difficulties in estimation due to intractability of the likelihood function. In this manuscript, we present an iterative stochastic expectation-maximization algorithm that relies on a simulation-based approximation to the likelihood function and implement this algorithm using rejection sampling. In a simulation study, we demonstrate the feasibility and performance of the proposed procedure. We then demonstrate application of the algorithm to a study of dementia, the Nun Study, consisting of intermittently-observed elderly subjects in one of four possible states corresponding to intact cognition, impaired cognition, dementia, and death. We show that the proposed stochastic expectation-maximization algorithm substantially reduces bias in model parameter estimates compared to an alternative approach used in the literature, minimal path estimation. We conclude that in estimating intermittently observed semi-Markov models, the proposed approach is a computationally feasible and accurate estimation procedure that leads to substantial improvements in back transition estimates.
Risk estimation using probability machines.
Dasgupta, Abhijit; Szymczak, Silke; Moore, Jason H; Bailey-Wilson, Joan E; Malley, James D
2014-03-01
Logistic regression has been the de facto, and often the only, model used in the description and analysis of relationships between a binary outcome and observed features. It is widely used to obtain the conditional probabilities of the outcome given predictors, as well as predictor effect size estimates using conditional odds ratios. We show how statistical learning machines for binary outcomes, provably consistent for the nonparametric regression problem, can be used to provide both consistent conditional probability estimation and conditional effect size estimates. Effect size estimates from learning machines leverage our understanding of counterfactual arguments central to the interpretation of such estimates. We show that, if the data generating model is logistic, we can recover accurate probability predictions and effect size estimates with nearly the same efficiency as a correct logistic model, both for main effects and interactions. We also propose a method using learning machines to scan for possible interaction effects quickly and efficiently. Simulations using random forest probability machines are presented. The models we propose make no assumptions about the data structure, and capture the patterns in the data by just specifying the predictors involved and not any particular model structure. So they do not run the same risks of model mis-specification and the resultant estimation biases as a logistic model. This methodology, which we call a "risk machine", will share properties from the statistical machine that it is derived from.
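A minimal sketch of a random forest probability machine and a counterfactual effect size, under an assumed logistic data-generating model; all values are illustrative:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)
X = rng.normal(size=(2000, 3))
p = 1.0 / (1.0 + np.exp(-(0.8 * X[:, 0] - 0.5 * X[:, 1])))  # logistic model
y = rng.binomial(1, p)

rf = RandomForestClassifier(n_estimators=500, min_samples_leaf=25,
                            random_state=0).fit(X, y)

# Counterfactual risk difference for predictor 0: shift it by +1 for
# every observation and average the change in predicted probability
X_plus = X.copy()
X_plus[:, 0] += 1.0
risk_diff = (rf.predict_proba(X_plus)[:, 1]
             - rf.predict_proba(X)[:, 1]).mean()
```

No model structure is specified beyond the choice of predictors; the forest's conditional probabilities stand in for the logistic model's fitted probabilities.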
Interpreting findings from Mendelian randomization using the MR-Egger method.
Burgess, Stephen; Thompson, Simon G
2017-05-01
Mendelian randomization-Egger (MR-Egger) is an analysis method for Mendelian randomization using summarized genetic data. MR-Egger consists of three parts: (1) a test for directional pleiotropy, (2) a test for a causal effect, and (3) an estimate of the causal effect. While conventional analysis methods for Mendelian randomization assume that all genetic variants satisfy the instrumental variable assumptions, the MR-Egger method is able to assess whether genetic variants have pleiotropic effects on the outcome that differ on average from zero (directional pleiotropy), as well as to provide a consistent estimate of the causal effect, under a weaker assumption-the InSIDE (INstrument Strength Independent of Direct Effect) assumption. In this paper, we provide a critical assessment of the MR-Egger method with regard to its implementation and interpretation. While the MR-Egger method is a worthwhile sensitivity analysis for detecting violations of the instrumental variable assumptions, there are several reasons why causal estimates from the MR-Egger method may be biased and have inflated Type 1 error rates in practice, including violations of the InSIDE assumption and the influence of outlying variants. The issues raised in this paper have potentially serious consequences for causal inferences from the MR-Egger approach. We give examples of scenarios in which the estimates from conventional Mendelian randomization methods and MR-Egger differ, and discuss how to interpret findings in such cases.
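A minimal sketch of the MR-Egger regression on summarized data: a weighted regression of outcome associations on exposure associations with an unconstrained intercept; all numbers are invented placeholders:

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical per-variant summary statistics
bx    = np.array([0.12, 0.08, 0.15, 0.05, 0.10, 0.07])  # variant-exposure
by    = np.array([0.06, 0.03, 0.09, 0.01, 0.05, 0.04])  # variant-outcome
by_se = np.array([0.02, 0.02, 0.03, 0.01, 0.02, 0.02])

# MR-Egger: the intercept estimates average directional pleiotropy;
# the slope is the causal estimate under the InSIDE assumption
fit = sm.WLS(by, sm.add_constant(bx), weights=1.0 / by_se**2).fit()
intercept, slope = fit.params
```

An intercept significantly different from zero flags directional pleiotropy (part 1 of the method); the slope and its standard error supply the causal test and estimate (parts 2 and 3).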
Evaluation and Application of Satellite-Based Latent Heating Profile Estimation Methods
NASA Technical Reports Server (NTRS)
Olson, William S.; Grecu, Mircea; Yang, Song; Tao, Wei-Kuo
2004-01-01
In recent years, methods for estimating atmospheric latent heating vertical structure from both passive and active microwave remote sensing have matured to the point where quantitative evaluation of these methods is the next logical step. Two approaches for heating algorithm evaluation are proposed: First, application of heating algorithms to synthetic data, based upon cloud-resolving model simulations, can be used to test the internal consistency of heating estimates in the absence of systematic errors in physical assumptions. Second, comparisons of satellite-retrieved vertical heating structures to independent ground-based estimates, such as rawinsonde-derived analyses of heating, provide an additional test. The two approaches are complementary, since systematic errors in heating indicated by the second approach may be confirmed by the first. A passive microwave and a combined passive/active microwave heating retrieval algorithm are evaluated using the described approaches. In general, the passive microwave algorithm heating profile estimates are subject to biases due to the limited vertical heating structure information contained in the passive microwave observations. These biases may be partly overcome by including more environment-specific a priori information in the algorithm's database of candidate solution profiles. The combined passive/active microwave algorithm utilizes the much higher-resolution vertical structure information provided by spaceborne radar data to produce less biased estimates; however, the global spatio-temporal sampling by spaceborne radar is limited. In the present study, the passive/active microwave algorithm is used to construct a more physically-consistent and environment-specific set of candidate solution profiles for the passive microwave algorithm and to help evaluate errors in the passive algorithm's heating estimates. Although satellite estimates of latent heating are based upon instantaneous, footprint-scale data, suppression of random errors requires averaging to at least half-degree resolution. Analysis of mesoscale and larger space-time scale phenomena based upon passive and passive/active microwave heating estimates from TRMM, SSM/I, and AMSR data will be presented at the conference.
DOT National Transportation Integrated Search
2003-09-01
The Urban Mobility Study Report procedures provide estimates of mobility at the areawide level. The approach that is used describes congestion in consistent ways using generally available data allowing for comparisons across urban areas or groups of ...
Failure of self-consistency in the discrete resource model of visual working memory.
Bays, Paul M
2018-06-03
The discrete resource model of working memory proposes that each individual has a fixed upper limit on the number of items they can store at one time, due to division of memory into a few independent "slots". According to this model, responses on short-term memory tasks consist of a mixture of noisy recall (when the tested item is in memory) and random guessing (when the item is not in memory). This provides two opportunities to estimate capacity for each observer: first, based on their frequency of random guesses, and second, based on the set size at which the variability of stored items reaches a plateau. The discrete resource model makes the simple prediction that these two estimates will coincide. Data from eight published visual working memory experiments provide strong evidence against such a correspondence. These results present a challenge for discrete models of working memory that impose a fixed capacity limit. Copyright © 2018 The Author. Published by Elsevier Inc. All rights reserved.
Constraints on the FRB rate at 700-900 MHz
NASA Astrophysics Data System (ADS)
Connor, Liam; Lin, Hsiu-Hsien; Masui, Kiyoshi; Oppermann, Niels; Pen, Ue-Li; Peterson, Jeffrey B.; Roman, Alexander; Sievers, Jonathan
2016-07-01
Estimating the all-sky rate of fast radio bursts (FRBs) has been difficult due to small-number statistics and the fact that they are seen by disparate surveys in different regions of the sky. In this paper we provide limits for the FRB rate at 800 MHz based on the only burst detected at frequencies below 1.4 GHz, FRB 110523. We discuss the difficulties in rate estimation, particularly in providing an all-sky rate above a single fluence threshold. We find an implied rate between 700 and 900 MHz that is consistent with the rate at 1.4 GHz, scaling to 6.4^{+29.5}_{-5.0} × 10^3 sky^{-1} d^{-1} for an HTRU-like survey. This is promising for upcoming experiments below a GHz like CHIME and UTMOST, for which we forecast detection rates. Given 110523's discovery at 32σ with nothing weaker detected, down to the threshold of 8σ, we find consistency with a Euclidean flux distribution but disfavour steep distributions, ruling out γ > 2.2.
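The small-number statistics come down to Poisson limits on a single detected event; a minimal sketch of the standard chi-square construction, with a purely hypothetical exposure value:

```python
from scipy.stats import chi2

n = 1                                    # one burst below 1.4 GHz: FRB 110523
# Central 68% Poisson confidence interval on the expected number of events
lo = 0.5 * chi2.ppf(0.16, 2 * n)         # ~0.17 events
hi = 0.5 * chi2.ppf(0.84, 2 * (n + 1))   # ~3.1 events

exposure_sky_days = 1.6e-4               # hypothetical effective survey exposure
rate_lo, rate_hi = lo / exposure_sky_days, hi / exposure_sky_days
```

The wide asymmetric interval quoted in the abstract reflects exactly this behaviour: a single detection constrains the rate only to within roughly an order of magnitude.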
Rummer, Jodie L.; Binning, Sandra A.; Roche, Dominique G.; Johansen, Jacob L.
2016-01-01
Respirometry is frequently used to estimate metabolic rates and examine organismal responses to environmental change. Although a range of methodologies exists, it remains unclear whether differences in chamber design and exercise (type and duration) produce comparable results within individuals and whether the most appropriate method differs across taxa. We used a repeated-measures design to compare estimates of maximal and standard metabolic rates (MMR and SMR) in four coral reef fish species using the following three methods: (i) prolonged swimming in a traditional swimming respirometer; (ii) short-duration exhaustive chase with air exposure followed by resting respirometry; and (iii) short-duration exhaustive swimming in a circular chamber. We chose species that are steady/prolonged swimmers, using either a body–caudal fin or a median–paired fin swimming mode during routine swimming. Individual MMR estimates differed significantly depending on the method used. Swimming respirometry consistently provided the best (i.e. highest) estimate of MMR in all four species irrespective of swimming mode. Both short-duration protocols (exhaustive chase and swimming in a circular chamber) produced similar MMR estimates, which were up to 38% lower than those obtained during prolonged swimming. Furthermore, underestimates were not consistent across swimming modes or species, indicating that a general correction factor cannot be used. However, SMR estimates (upon recovery from both of the exhausting swimming methods) were consistent across both short-duration methods. Given the increasing use of metabolic data to assess organismal responses to environmental stressors, we recommend carefully considering respirometry protocols before experimentation. Specifically, results should not readily be compared across methods; discrepancies could result in misinterpretation of MMR and aerobic scope. PMID:27382471
Sbarciog, M; Moreno, J A; Vande Wouwer, A
2014-01-01
This paper presents the estimation of the unknown states and inputs of an anaerobic digestion system characterized by a two-step reaction model. The estimation is based on the measurement of the two substrate concentrations and of the outflow rate of biogas and relies on the use of an observer, consisting of three parts. The first is a generalized super-twisting observer, which estimates a linear combination of the two input concentrations. The second is an asymptotic observer, which provides one of the two biomass concentrations, whereas the third is a super-twisting observer for one of the input concentrations and the second biomass concentration.
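The super-twisting algorithm underlying the first observer stage is a second-order sliding-mode technique. As a hedged illustration of that algorithm family, the sketch below implements Levant's generic first-order super-twisting differentiator, not the paper's anaerobic-digestion observer; the gains and the Lipschitz bound L are illustrative choices.

```python
import numpy as np

def super_twisting_differentiator(f, dt, L=10.0, lam0=1.5, lam1=1.1):
    """Levant's super-twisting differentiator (illustrative sketch).
    f: sampled signal; returns an estimate of df/dt. L bounds |f''|."""
    z0, z1 = f[0], 0.0
    deriv = np.zeros_like(f)
    for k in range(len(f)):
        e = z0 - f[k]
        v = z1 - lam0 * np.sqrt(L) * np.sqrt(abs(e)) * np.sign(e)
        z0 += dt * v                        # estimate of the signal itself
        z1 += dt * (-lam1 * L * np.sign(e)) # integral (twisting) term
        deriv[k] = v
    return deriv

t = np.arange(0, 5, 1e-3)
d_est = super_twisting_differentiator(np.sin(t), dt=1e-3)  # converges toward cos(t)
```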
Joint Multi-Fiber NODDI Parameter Estimation and Tractography Using the Unscented Information Filter
Reddy, Chinthala P.; Rathi, Yogesh
2016-01-01
Tracing white matter fiber bundles is an integral part of analyzing brain connectivity. An accurate estimate of the underlying tissue parameters is also paramount in several neuroscience applications. In this work, we propose to use a joint fiber model estimation and tractography algorithm that uses the NODDI (neurite orientation dispersion diffusion imaging) model to estimate fiber orientation dispersion consistently and smoothly along the fiber tracts along with estimating the intracellular and extracellular volume fractions from the diffusion signal. While the NODDI model has been used in earlier works to estimate the microstructural parameters at each voxel independently, for the first time, we propose to integrate it into a tractography framework. We extend this framework to estimate the NODDI parameters for two crossing fibers, which is imperative to trace fiber bundles through crossings as well as to estimate the microstructural parameters for each fiber bundle separately. We propose to use the unscented information filter (UIF) to accurately estimate the model parameters and perform tractography. The proposed approach has significant computational performance improvements as well as numerical robustness over the unscented Kalman filter (UKF). Our method not only estimates the confidence in the estimated parameters via the covariance matrix, but also provides the Fisher-information matrix of the state variables (model parameters), which can be quite useful to measure model complexity. Results from in-vivo human brain data sets demonstrate the ability of our algorithm to trace through crossing fiber regions, while estimating orientation dispersion and other biophysical model parameters in a consistent manner along the tracts. PMID:27147956
How General are Risk Preferences? Choices under Uncertainty in Different Domains*
Einav, Liran; Finkelstein, Amy; Pascu, Iuliana; Cullen, Mark R.
2011-01-01
We analyze the extent to which individuals’ choices over five employer-provided insurance coverage decisions and one 401(k) investment decision exhibit systematic patterns, as would be implied by a general utility component of risk preferences. We provide evidence consistent with an important domain-general component that operates across all insurance choices. We find a considerably weaker relationship between one's insurance decisions and 401(k) asset allocation, although this relationship appears larger for more “financially sophisticated” individuals. Estimates from a stylized coverage choice model suggest that up to thirty percent of our sample makes choices that may be consistent across all six domains. PMID:24634517
NASA Astrophysics Data System (ADS)
Hoesly, Rachel M.; Smith, Steven J.; Feng, Leyang; Klimont, Zbigniew; Janssens-Maenhout, Greet; Pitkanen, Tyler; Seibert, Jonathan J.; Vu, Linh; Andres, Robert J.; Bolt, Ryan M.; Bond, Tami C.; Dawidowski, Laura; Kholod, Nazar; Kurokawa, June-ichi; Li, Meng; Liu, Liang; Lu, Zifeng; Moura, Maria Cecilia P.; O'Rourke, Patrick R.; Zhang, Qiang
2018-01-01
We present a new data set of annual historical (1750-2014) anthropogenic chemically reactive gases (CO, CH4, NH3, NOx, SO2, NMVOCs), carbonaceous aerosols (black carbon - BC, and organic carbon - OC), and CO2 developed with the Community Emissions Data System (CEDS). We improve upon existing inventories with a more consistent and reproducible methodology applied to all emission species, updated emission factors, and recent estimates through 2014. The data system relies on existing energy consumption data sets and regional and country-specific inventories to produce trends over recent decades. All emission species are consistently estimated using the same activity data over all time periods. Emissions are provided on an annual basis at the level of country and sector and gridded with monthly seasonality. These estimates are comparable to, but generally slightly higher than, existing global inventories. Emissions over the most recent years are more uncertain, particularly in low- and middle-income regions where country-specific emission inventories are less available. Future work will involve refining and updating these emission estimates, estimating emissions' uncertainty, and publication of the system as open-source software.
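At its core, an inventory system of this kind computes emissions as activity data times emission factors, applied consistently across species and years, then aggregates by country and sector. A toy sketch of that bookkeeping, with made-up numbers (not CEDS code or data):

```python
import numpy as np

# Hypothetical activity data (e.g., PJ of fuel burned), indexed [country, sector, year]
activity = np.random.rand(2, 3, 5) * 100.0
emission_factor = np.full((2, 3, 5), 0.8)   # kt SO2 per PJ (made-up value)

emissions = activity * emission_factor      # kt SO2 per country/sector/year
country_totals = emissions.sum(axis=1)      # annual country totals, as reported
```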
Estimating Gravity Biases with Wavelets in Support of a 1-cm Accurate Geoid Model
NASA Astrophysics Data System (ADS)
Ahlgren, K.; Li, X.
2017-12-01
Systematic errors that reside in surface gravity datasets are one of the major hurdles in constructing a high-accuracy geoid model at high resolutions. The National Oceanic and Atmospheric Administration's (NOAA) National Geodetic Survey (NGS) has an extensive historical surface gravity dataset consisting of approximately 10 million gravity points that are known to have systematic biases at the mGal level (Saleh et al. 2013). As most relevant metadata is absent, estimating and removing these errors to be consistent with a global geopotential model and airborne data in the corresponding wavelength is quite a difficult endeavor. However, this is crucial to support a 1-cm accurate geoid model for the United States. With recently available independent gravity information from GRACE/GOCE and airborne gravity from the NGS Gravity for the Redefinition of the American Vertical Datum (GRAV-D) project, several different methods of bias estimation are investigated which utilize radial basis functions and wavelet decomposition. We estimate a surface gravity value by incorporating a satellite gravity model, airborne gravity data, and forward-modeled topography at wavelet levels according to each dataset's spatial wavelength. Considering the estimated gravity values over an entire gravity survey, an estimate of the bias and/or correction for the entire survey can be found and applied. In order to assess the accuracy of each bias estimation method, two techniques are used. First, each bias estimation method is used to predict the bias for two high-quality (unbiased and high accuracy) geoid slope validation surveys (GSVS) (Smith et al. 2013 & Wang et al. 2017). Since these surveys are unbiased, the various bias estimation methods should reflect that and provide an absolute accuracy metric for each of the bias estimation methods. Secondly, the corrected gravity datasets from each of the bias estimation methods are used to build a geoid model. The accuracy of each geoid model provides an additional metric to assess the performance of each bias estimation method. The geoid model accuracies are assessed using the two GSVS lines and GPS-leveling data across the United States.
Improved protocol and data analysis for accelerated shelf-life estimation of solid dosage forms.
Waterman, Kenneth C; Carella, Anthony J; Gumkowski, Michael J; Lukulay, Patrick; MacDonald, Bruce C; Roy, Michael C; Shamblin, Sheri L
2007-04-01
To propose and test a new accelerated aging protocol for solid-state, small-molecule pharmaceuticals that provides faster predictions for drug substance and drug product shelf-life. The concept of an isoconversion paradigm, where times in different temperature and humidity-controlled stability chambers are set to provide a critical degradant level, is introduced for solid-state pharmaceuticals. Reliable estimates for temperature and relative humidity effects are handled using a humidity-corrected Arrhenius equation, where temperature and relative humidity are assumed to be orthogonal. Imprecision is incorporated into a Monte-Carlo simulation to propagate the variations inherent in the experiment. In early development phases, greater imprecision in predictions is tolerated to allow faster screening with reduced sampling. Early development data are then used to design appropriate test conditions for more reliable later stability estimations. Examples are reported showing that predicted shelf-life values for lower temperatures and different relative humidities are consistent with the measured shelf-life values at those conditions. The new protocols and analyses provide accurate and precise shelf-life estimations in less time than the current state of the art.
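The humidity-corrected Arrhenius relation is commonly written ln k = ln A - Ea/(R·T) + B·RH, and under the isoconversion paradigm the time to reach the critical degradant level scales as 1/k, so an accelerated result extrapolates to ambient storage. A minimal sketch with hypothetical parameter values (the real protocol fits these from multiple chambers and propagates imprecision by Monte Carlo):

```python
import numpy as np

R = 8.314  # J/(mol*K)

def rate_constant(T_kelvin, rh_percent, lnA, Ea, B):
    """Humidity-corrected Arrhenius: ln k = ln A - Ea/(R*T) + B*RH."""
    return np.exp(lnA - Ea / (R * T_kelvin) + B * rh_percent)

lnA, Ea, B = 30.0, 1.0e5, 0.05   # hypothetical fitted parameters
t_accel = 14.0                   # days to isoconversion at 50 C / 75% RH (made-up)

# Shelf life at 25 C / 60% RH: isoconversion time scales as 1/k
shelf_life = t_accel * rate_constant(323.15, 75, lnA, Ea, B) / rate_constant(298.15, 60, lnA, Ea, B)
print(f"predicted shelf life: {shelf_life:.0f} days")
```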
Van Metre, P.C.; Fuller, C.C.
2009-01-01
Determining atmospheric deposition rates of mercury and other contaminants using lake sediment cores requires a quantitative understanding of sediment focusing. Here we present a novel approach that solves mass-balance equations for two cores algebraically to estimate contaminant contributions to sediment from direct atmospheric fallout and from watershed and in-lake focusing. The model is applied to excess 210Pb and Hg in cores from Hobbs Lake, a high-altitude lake in Wyoming. Model results for excess 210Pb are consistent with estimates of fallout and focusing factors computed using excess 210Pb burdens in lake cores and soil cores from the watershed and model results for Hg fallout are consistent with fallout estimated using the soil-core-based 210Pb focusing factors. The lake cores indicate small increases in mercury deposition beginning in the late 1800s and large increases after 1940, with the maximum at the tops of the cores of 16-20 µg/m2 per year. These results suggest that global Hg emissions and possibly regional emissions in the western United States are affecting the north-central Rocky Mountains. Hg fallout estimates are generally consistent with fallout reported from an ice core from the nearby Upper Fremont Glacier, but with several notable differences. The model might not work for lakes with complex geometries and multiple sediment inputs, but for lakes with simple geometries, like Hobbs, it can provide a quantitative approach for evaluating sediment focusing and estimating contaminant fallout.
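The two-core approach reduces to a small linear system: each core's measured burden is a weighted combination of direct atmospheric fallout and the watershed/in-lake focusing contribution, and two cores give two equations in those two unknowns. A sketch with hypothetical coefficients and burdens, not the paper's actual numbers:

```python
import numpy as np

# Measured contaminant burdens in two cores (e.g., Bq/m^2), made-up values
p = np.array([5200.0, 7900.0])

# Hypothetical mixing coefficients: column 1 scales direct atmospheric fallout
# (in-lake focusing at each coring site), column 2 scales the watershed input.
M = np.array([[1.1, 0.4],
              [1.6, 0.9]])

fallout, watershed = np.linalg.solve(M, p)  # algebraic two-core solution
```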
Skeletal Correlates for Body Mass Estimation in Modern and Fossil Flying Birds
Field, Daniel J.; Lynner, Colton; Brown, Christian; Darroch, Simon A. F.
2013-01-01
Scaling relationships between skeletal dimensions and body mass in extant birds are often used to estimate body mass in fossil crown-group birds, as well as in stem-group avialans. However, useful statistical measurements for constraining the precision and accuracy of fossil mass estimates are rarely provided, which prevents the quantification of robust upper and lower bound body mass estimates for fossils. Here, we generate thirteen body mass correlations and associated measures of statistical robustness using a sample of 863 extant flying birds. By providing robust body mass regressions with upper- and lower-bound prediction intervals for individual skeletal elements, we address the longstanding problem of body mass estimation for highly fragmentary fossil birds. We demonstrate that the most precise proxy for estimating body mass in the overall dataset, measured both as the coefficient of determination of ordinary least squares regression and as percent prediction error, is the maximum diameter of the coracoid’s humeral articulation facet (the glenoid). We further demonstrate that this result is consistent among the majority of investigated avian orders (10 out of 18). As a result, we suggest that, in the majority of cases, this proxy may provide the most accurate estimates of body mass for volant fossil birds. Additionally, by presenting statistical measurements of body mass prediction error for thirteen different body mass regressions, this study provides a much-needed quantitative framework for the accurate estimation of body mass and associated ecological correlates in fossil birds. The application of these regressions will enhance the precision and robustness of many mass-based inferences in future paleornithological studies. PMID:24312392
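Such scaling relationships are typically log-log OLS regressions, with the upper and lower bounds coming from prediction intervals rather than confidence intervals. A sketch using statsmodels on simulated data; the coefficients and the 12 mm glenoid measurement below are made up, not the paper's fitted values:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
diam = rng.uniform(3, 30, 200)                                        # glenoid diameter, mm
mass = 10 ** (2.3 * np.log10(diam) + 0.5 + rng.normal(0, 0.1, 200))  # body mass, g

X = sm.add_constant(np.log10(diam))
fit = sm.OLS(np.log10(mass), X).fit()

# Upper/lower-bound mass estimates for a fossil with a 12 mm glenoid facet
X_new = np.column_stack([[1.0], np.log10([12.0])])
pred = fit.get_prediction(X_new)
lo, hi = 10 ** pred.conf_int(obs=True, alpha=0.05)[0]  # 95% prediction interval, grams
```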
Earthquake design criteria for small hydro projects in the Philippines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martin, P.P.; McCandless, D.H.; Asce, M.
1995-12-31
The definition of the seismic environment and seismic design criteria of more than twenty small hydro projects in the northern part of the island of Luzon in the Philippines took on special urgency in the wake of the magnitude 7.7 earthquake that shook the island on July 17, 1990. The paper describes the approach followed to determine design shaking level criteria at each hydro site consistent with the seismic environment estimated at that same site. The approach consisted of three steps: (1) Seismicity: understanding the mechanisms and tectonic features capable of generating seismicity and estimating the associated seismicity levels, (2) Seismic Hazard: in the absence of an accurate historical record, using statistics to determine the expected level of ground shaking at a site during the operational 100-year design life of each Project, and (3) Criteria Selection: finally and most importantly, exercising judgment in estimating the final proposed level of shaking at each site. The resulting characteristics of estimated seismicity and seismic hazard and the proposed final earthquake design criteria are provided.
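Step (2) typically rests on the standard Poisson assumption, under which the probability of exceeding a given shaking level at least once during the design life T is 1 - exp(-T/TR) for an event with return period TR. A minimal sketch (the return period below is an example, not a value from the paper):

```python
import math

def prob_exceedance(return_period_years, design_life_years):
    """Probability of at least one exceedance during the design life,
    assuming Poisson earthquake arrivals."""
    return 1.0 - math.exp(-design_life_years / return_period_years)

p = prob_exceedance(475, 100)   # a 475-year event over a 100-year life: ~0.19
```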
Lake Powell management alternatives and values: CVM estimates of recreation benefits
Douglas, A.J.; Harpman, D.A.
2004-01-01
This paper presents data analyses based on information gathered from a recreation survey distributed during the spring of 1997 at Lake Powell. Recreation-linked management issues are the foci of the survey and this discussion. Survey responses to contingent valuation method (CVM) queries included in the questionnaire quantify visitor recreation values. The CVM estimates of the benefits provided by potential resource improvements are compared with the costs of the improvements in a benefit-cost analysis. The CVM questions covered three resource management issues: water quality improvement, sport fish harvest enhancement, and archeological site protection and restoration. The estimated benefits are remarkably high relative to the costs and range from $6 to $60 million per year. The dichotomous choice format was used in each of three resource CVM question scenarios. There were two levels of enhancement for each resource. There are, therefore, several consistency requirements—some of them unique to the dichotomous choice format—that the data and benefit estimates must satisfy. These consistency tests are presented in detail in the ensuing analysis.
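In the dichotomous choice format, each respondent sees a single randomized bid and answers yes or no, and a binary-response model recovers willingness to pay (WTP); for the standard linear-in-bid logit, mean (and median) WTP equals -α/β. A sketch with simulated responses, illustrating the estimator rather than the paper's data:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
bid = rng.choice([5, 15, 30, 60, 120], size=500).astype(float)  # randomized bids ($)
true_wtp = rng.normal(40, 25, size=500)
yes = (true_wtp > bid).astype(int)          # 1 = respondent accepts the bid

X = sm.add_constant(bid)
fit = sm.Logit(yes, X).fit(disp=0)
a, b = fit.params
mean_wtp = -a / b                            # mean/median WTP, linear-in-bid logit
```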
Consistent Estimation of Gibbs Energy Using Component Contributions
Milo, Ron; Fleming, Ronan M. T.
2013-01-01
Standard Gibbs energies of reactions are increasingly being used in metabolic modeling for applying thermodynamic constraints on reaction rates, metabolite concentrations and kinetic parameters. The increasing scope and diversity of metabolic models has led scientists to look for genome-scale solutions that can estimate the standard Gibbs energy of all the reactions in metabolism. Group contribution methods greatly increase coverage, albeit at the price of decreased precision. We present here a way to combine the estimations of group contribution with the more accurate reactant contributions by decomposing each reaction into two parts and applying one of the methods on each of them. This method gives priority to the reactant contributions over group contributions while guaranteeing that all estimations will be consistent, i.e. will not violate the first law of thermodynamics. We show that there is a significant increase in the accuracy of our estimations compared to standard group contribution. Specifically, our cross-validation results show an 80% reduction in the median absolute residual for reactions that can be derived by reactant contributions only. We provide the full framework and source code for deriving estimates of standard reaction Gibbs energy, as well as confidence intervals, and believe this will facilitate the wide use of thermodynamic data for a better understanding of metabolism. PMID:23874165
Coefficient Alpha and Reliability of Scale Scores
ERIC Educational Resources Information Center
Almehrizi, Rashid S.
2013-01-01
The majority of large-scale assessments develop various score scales that are either linear or nonlinear transformations of raw scores for better interpretations and uses of assessment results. The current formula for coefficient alpha (α; the commonly used reliability coefficient) only provides internal consistency reliability estimates of raw…
A comparison of kinematic algorithms to estimate gait events during overground running.
Smith, Laura; Preece, Stephen; Mason, Duncan; Bramah, Christopher
2015-01-01
The gait cycle is frequently divided into two distinct phases, stance and swing, which can be accurately determined from ground reaction force data. In the absence of such data, kinematic algorithms can be used to estimate footstrike and toe-off. The performance of previously published algorithms is not consistent between studies. Furthermore, previous algorithms have not been tested at higher running speeds nor used to estimate ground contact times. Therefore, the purpose of this study was to both develop a new, custom-designed, event detection algorithm and compare its performance with four previously tested algorithms at higher running speeds. Kinematic and force data were collected on twenty runners during overground running at 5.6 m/s. The five algorithms were then implemented and estimated times for footstrike, toe-off and contact time were compared to ground reaction force data. There were large differences in the performance of each algorithm. The custom-designed algorithm provided the most accurate estimation of footstrike (True Error 1.2 ± 17.1 ms) and contact time (True Error 3.5 ± 18.2 ms). Compared to the other tested algorithms, the custom-designed algorithm provided an accurate estimation of footstrike and toe-off across different footstrike patterns. The custom-designed algorithm provides a simple but effective method to accurately estimate footstrike, toe-off and contact time from kinematic data. Copyright © 2014 Elsevier B.V. All rights reserved.
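Kinematic event-detection algorithms of this kind generally track a foot-marker trajectory and place events at characteristic extrema. The sketch below illustrates the general idea with peak detection on position and vertical velocity; it is not the custom algorithm from the paper, and the thresholds and window sizes are made up.

```python
import numpy as np
from scipy.signal import find_peaks

def estimate_gait_events(foot_z, fs):
    """Estimate footstrike/toe-off sample indices from vertical foot-marker
    position (illustrative only). Footstrike ~ local minima of position;
    toe-off ~ subsequent peaks of upward velocity."""
    vz = np.gradient(foot_z) * fs                                 # vertical velocity
    footstrikes, _ = find_peaks(-foot_z, distance=int(0.4 * fs))  # position minima
    toeoffs, _ = find_peaks(vz, height=0.5, distance=int(0.4 * fs))
    return footstrikes, toeoffs

# Contact time per stride = (matched toe-off index - footstrike index) / fs
```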
Developing Methods for Fraction Cover Estimation Toward Global Mapping of Ecosystem Composition
NASA Astrophysics Data System (ADS)
Roberts, D. A.; Thompson, D. R.; Dennison, P. E.; Green, R. O.; Kokaly, R. F.; Pavlick, R.; Schimel, D.; Stavros, E. N.
2016-12-01
Terrestrial vegetation seldom covers an entire pixel due to spatial mixing at many scales. Estimating the fractional contributions of photosynthetic green vegetation (GV), non-photosynthetic vegetation (NPV), and substrate (soil, rock, etc.) to mixed spectra can significantly improve quantitative remote measurement of terrestrial ecosystems. Traditional methods for estimating fractional vegetation cover rely on vegetation indices that are sensitive to variable substrate brightness, NPV and sun-sensor geometry. Spectral mixture analysis (SMA) is an alternate framework that provides estimates of fractional cover. However, simple SMA, in which the same set of endmembers is used for an entire image, fails to account for natural spectral variability within a cover class. Multiple Endmember Spectral Mixture Analysis (MESMA) is a variant of SMA that allows the number and types of pure spectra to vary on a per-pixel basis, thereby accounting for endmember variability and generating more accurate cover estimates, but at a higher computational cost. Routine generation and delivery of GV, NPV, and substrate (S) fractions using MESMA is currently in development for large, diverse datasets acquired by the Airborne Visible Infrared Imaging Spectrometer (AVIRIS). We present initial results, including our methodology for ensuring consistency and generalizability of fractional cover estimates across a wide range of regions, seasons, and biomes. We also assess uncertainty and provide a strategy for validation. GV, NPV, and S fractions are an important precursor for deriving consistent measurements of ecosystem parameters such as plant stress and mortality, functional trait assessment, disturbance susceptibility and recovery, and biomass and carbon stock assessment. Copyright 2016 California Institute of Technology. All Rights Reserved. We acknowledge support of the US Government, NASA, the Earth Science Division and Terrestrial Ecology program.
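Simple SMA solves, for each pixel, a constrained linear system in which the observed spectrum is a fractional mixture of endmember spectra; MESMA repeats the fit over many candidate endmember sets and keeps the best one. A sketch with synthetic spectra (GV, NPV, substrate), assuming made-up endmembers rather than an AVIRIS library:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)
E = np.abs(rng.normal(0.3, 0.1, size=(50, 3)))   # endmember spectra over 50 bands
pixel = E @ np.array([0.5, 0.2, 0.3]) + rng.normal(0, 0.005, 50)  # mixed pixel

fractions, rmse = nnls(E, pixel)                 # nonnegative fraction estimates
fractions = fractions / fractions.sum()          # optional sum-to-one normalization
# MESMA would repeat this fit for many candidate endmember sets per pixel and
# keep the combination with the lowest RMSE.
```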
The Inverse Problem for Confined Aquifer Flow: Identification and Estimation With Extensions
NASA Astrophysics Data System (ADS)
Loaiciga, Hugo A.; Mariño, Miguel A.
1987-01-01
The contributions of this work are twofold. First, a methodology for estimating the elements of parameter matrices in the governing equation of flow in a confined aquifer is developed. The estimation techniques for the distributed-parameter inverse problem pertain to linear least squares and generalized least squares methods. The linear relationship among the known heads and unknown parameters of the flow equation provides the background for developing criteria for determining the identifiability status of unknown parameters. Under conditions of exact or overidentification it is possible to develop statistically consistent parameter estimators and their asymptotic distributions. The estimation techniques, namely, two-stage least squares and three-stage least squares, are applied to a specific groundwater inverse problem and compared between themselves and with an ordinary least squares estimator. The three-stage estimator provides the closest approximation to the actual parameter values, but it also shows relatively large standard errors as compared to the ordinary and two-stage estimators. The estimation techniques provide the parameter matrices required to simulate the unsteady groundwater flow equation. Second, a nonlinear maximum likelihood estimation approach to the inverse problem is presented. The statistical properties of maximum likelihood estimators are derived, and a procedure to construct confidence intervals and do hypothesis testing is given. The relative merits of the linear and maximum likelihood estimators are analyzed. Other topics relevant to the identification and estimation methodologies, i.e., a continuous-time solution to the flow equation, coping with noise-corrupted head measurements, and extension of the developed theory to nonlinear cases are also discussed. A simulation study is used to evaluate the methods developed in this study.
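A minimal numerical sketch of the two-stage least squares idea: project the endogenous regressor onto the instruments, then regress the dependent variable on the fitted values. The data are simulated with a true coefficient of 2; this illustrates the generic estimator, not the aquifer model itself.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 500
z = rng.normal(size=(n, 2))                                     # instruments
u = rng.normal(size=n)                                          # structural error
x = z @ np.array([1.0, -0.5]) + 0.8 * u + rng.normal(size=n)    # endogenous regressor
y = 2.0 * x + u                                                 # structural equation

# Stage 1: project x onto the instruments
Z = np.column_stack([np.ones(n), z])
x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]

# Stage 2: regress y on the fitted values
X2 = np.column_stack([np.ones(n), x_hat])
beta_2sls = np.linalg.lstsq(X2, y, rcond=None)[0]   # slope approx. 2
```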
Estimating 3D tilt from local image cues in natural scenes
Burge, Johannes; McCann, Brian C.; Geisler, Wilson S.
2016-01-01
Estimating three-dimensional (3D) surface orientation (slant and tilt) is an important first step toward estimating 3D shape. Here, we examine how three local image cues from the same location (disparity gradient, luminance gradient, and dominant texture orientation) should be combined to estimate 3D tilt in natural scenes. We collected a database of natural stereoscopic images with precisely co-registered range images that provide the ground-truth distance at each pixel location. We then analyzed the relationship between ground-truth tilt and image cue values. Our analysis is free of assumptions about the joint probability distributions and yields the Bayes optimal estimates of tilt, given the cue values. Rich results emerge: (a) typical tilt estimates are only moderately accurate and strongly influenced by the cardinal bias in the prior probability distribution; (b) when cue values are similar, or when slant is greater than 40°, estimates are substantially more accurate; (c) when luminance and texture cues agree, they often veto the disparity cue, and when they disagree, they have little effect; and (d) simplifying assumptions common in the cue combination literature are often justified for estimating tilt in natural scenes. The fact that tilt estimates are typically not very accurate is consistent with subjective impressions from viewing small patches of natural scene. The fact that estimates are substantially more accurate for a subset of image locations is also consistent with subjective impressions and with the hypothesis that perceived surface orientation, at more global scales, is achieved by interpolation or extrapolation from estimates at key locations. PMID:27738702
Container Surface Evaluation by Function Estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wendelberger, James G.
Container images are analyzed for specific surface features, such as pits, cracks, and corrosion. The detection of these features is confounded with complicating features, including shape/curvature, welds, edges, scratches, and foreign objects, among others. A method is provided to discriminate between the various features. The method consists of estimating the image background, determining a residual image, and post-processing to determine the features present. The methodology is not finalized but demonstrates the feasibility of a method to determine the kind and size of the features present.
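A minimal sketch of the background-plus-residual idea, using a median filter as the background estimator and a robust threshold on the residual. The window size and threshold constant are illustrative assumptions, not the report's actual settings:

```python
import numpy as np
from scipy.ndimage import median_filter

def surface_features(image, bg_window=31, k=4.0):
    """Estimate a smooth background, subtract it, and flag residual outliers
    as candidate pits/cracks/corrosion (parameters are made up)."""
    background = median_filter(image, size=bg_window)
    residual = image - background
    mad = np.median(np.abs(residual - np.median(residual)))  # robust scale
    return residual, np.abs(residual) > k * mad
```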
NASA Technical Reports Server (NTRS)
Duffy, James B.
1992-01-01
The report describes the work breakdown structure (WBS) and its associated WBS dictionary for task area 1 of contract NAS8-39207, advanced transportation system studies (ATSS). This WBS format is consistent with the preliminary design level of detail employed by both task area 1 and task area 4 in the ATSS study and is intended to provide an estimating structure for parametric cost estimates.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fan, J; Fan, J; Hu, W
Purpose: To develop a fast automatic algorithm based on two-dimensional kernel density estimation (2D KDE) to predict the dose-volume histogram (DVH), which can be employed for the investigation of radiotherapy quality assurance and automatic treatment planning. Methods: We propose a machine learning method that uses previous treatment plans to predict the DVH. The key to the approach is the framing of the DVH in a probabilistic setting. The training consists of estimating, from the patients in the training set, the joint probability distribution of the dose and the predictive features. The joint distribution provides an estimation of the conditional probability of the dose given the values of the predictive features. For the new patient, the prediction consists of estimating the distribution of the predictive features and marginalizing the conditional probability from the training over this. Integrating the resulting probability distribution for the dose yields an estimation of the DVH. The 2D KDE is implemented to predict the joint probability distribution of the training set and the distribution of the predictive features for the new patient. Two variables, the signed minimal distance from each OAR (organs at risk) voxel to the target boundary and its opening angle with respect to the origin of the voxel coordinate system, are considered as the predictive features to represent the OAR-target spatial relationship. The feasibility of our method has been demonstrated with rectum, breast, and head-and-neck cancer cases by comparing the predicted DVHs with the planned ones. Results: Consistent results were found between the two DVHs for each cancer type, and the average of the relative point-wise differences was about 5%, within a clinically acceptable range. Conclusion: According to the results of this study, our method can be used to predict clinically acceptable DVHs and has the ability to evaluate the quality and consistency of treatment planning.
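A simplified sketch of the KDE-based prediction, assuming a single predictive feature (signed distance to the target boundary) rather than the two used in the abstract: fit a joint KDE on training voxels, form the conditional dose density at each new-patient feature value, average those conditionals, and integrate to a cumulative DVH. All data here are synthetic.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(5)
dist = rng.uniform(0, 40, 5000)                       # training: distance to target (mm)
dose = 60 * np.exp(-dist / 12) + rng.normal(0, 2, 5000)  # training: planned dose (Gy)
joint = gaussian_kde(np.vstack([dist, dose]))         # 2D KDE of (feature, dose)

def predicted_dvh(new_dists, dose_grid):
    """Average the conditional dose densities over the new patient's feature
    distribution, then integrate to a cumulative DVH."""
    pdf = np.zeros_like(dose_grid)
    for d in new_dists:
        cond = joint(np.vstack([np.full_like(dose_grid, d), dose_grid]))
        pdf += cond / cond.sum()                      # normalized conditional
    pdf /= len(new_dists)
    return 1.0 - np.cumsum(pdf)                       # fraction of volume >= dose

dvh = predicted_dvh(rng.uniform(0, 40, 300), np.linspace(0, 70, 141))
```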
Curtis, K. Alexandra; Moore, Jeffrey E.; Benson, Scott R.
2015-01-01
Biological limit reference points (LRPs) for fisheries catch represent upper bounds that avoid undesirable population states. LRPs can support consistent management evaluation among species and regions, and can advance ecosystem-based fisheries management. For transboundary species, LRPs prorated by local abundance can inform local management decisions when international coordination is lacking. We estimated LRPs for western Pacific leatherbacks in the U.S. West Coast Exclusive Economic Zone (WCEEZ) using three approaches with different types of information on local abundance. For the current application, the best-informed LRP used a local abundance estimate derived from nest counts, vital rate information, satellite tag data, and fishery observer data, and was calculated with a Potential Biological Removal estimator. Management strategy evaluation was used to set tuning parameters of the LRP estimators to satisfy risk tolerances for falling below population thresholds, and to evaluate sensitivity of population outcomes to bias in key inputs. We estimated local LRPs consistent with three hypothetical management objectives: allowing the population to rebuild to its maximum net productivity level (4.7 turtles per five years), limiting delay of population rebuilding (0.8 turtles per five years), or only preventing further decline (7.7 turtles per five years). These LRPs pertain to all human-caused removals and represent the WCEEZ contribution to meeting population management objectives within a broader international cooperative framework. We present multi-year estimates, because at low LRP values, annual assessments are prone to substantial error that can lead to volatile and costly management without providing further conservation benefit. The novel approach and the performance criteria used here are not a direct expression of the “jeopardy” standard of the U.S. Endangered Species Act, but they provide useful assessment information and could help guide international management frameworks. Given the range of abundance data scenarios addressed, LRPs should be estimable for many other areas, populations, and taxa. PMID:26368557
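The Potential Biological Removal estimator referenced above has the standard form PBR = N_min × ½R_max × F_r (Wade 1998), with the recovery factor F_r playing the role of the tuning parameter that management strategy evaluation sets to satisfy a risk tolerance. A sketch with made-up inputs, not the paper's estimates:

```python
def pbr(n_min, r_max, f_r):
    """Potential Biological Removal: minimum abundance estimate times half the
    maximum net productivity rate times a recovery factor (Wade 1998)."""
    return n_min * 0.5 * r_max * f_r

# Hypothetical local abundance in the WCEEZ and a recovery factor tuned by
# management strategy evaluation to meet a rebuilding risk tolerance.
limit = pbr(n_min=120, r_max=0.04, f_r=0.1)   # allowable removals per year
```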
2013-01-01
Background Administrative databases are widely available and have been extensively used to provide estimates of chronic disease prevalence for the purpose of surveillance of both geographical and temporal trends. There are, however, other sources of data available, such as medical records from primary care and national surveys. In this paper we compare disease prevalence estimates obtained from these three different data sources. Methods Data from general practitioners (GP) and administrative transactions for health services were collected from five Italian regions (Veneto, Emilia Romagna, Tuscany, Marche and Sicily) belonging to all the three macroareas of the country (North, Center, South). Crude prevalence estimates were calculated by data source and region for diabetes, ischaemic heart disease, heart failure and chronic obstructive pulmonary disease (COPD). For diabetes and COPD, prevalence estimates were also obtained from a national health survey. When necessary, estimates were adjusted for completeness of data ascertainment. Results Crude prevalence estimates of diabetes in administrative databases (range: from 4.8% to 7.1%) were lower than corresponding GP (6.2%-8.5%) and survey-based estimates (5.1%-7.5%). Geographical trends were similar in the three sources and estimates based on treatment were the same, while estimates adjusted for completeness of ascertainment (6.1%-8.8%) were slightly higher. For ischaemic heart disease administrative and GP data sources were fairly consistent, with prevalence ranging from 3.7% to 4.7% and from 3.3% to 4.9%, respectively. In the case of heart failure administrative estimates were consistently higher than GPs’ estimates in all five regions, the highest difference being 1.4% vs 1.1%. For COPD the estimates from administrative data, ranging from 3.1% to 5.2%, fell into the confidence interval of the Survey estimates in four regions, but failed to detect the higher prevalence in the most Southern region (4.0% in administrative data vs 6.8% in survey data). The prevalence estimates for COPD from GP data were consistently higher than the corresponding estimates from the other two sources. Conclusion This study supports the use of data from Italian administrative databases to estimate geographic differences in population prevalence of ischaemic heart disease, treated diabetes, diabetes mellitus and heart failure. The algorithm for COPD used in this study requires further refinement. PMID:23297821
Estimation of group means when adjusting for covariates in generalized linear models.
Qu, Yongming; Luo, Junxiang
2015-01-01
Generalized linear models are commonly used to analyze categorical data such as binary, count, and ordinal outcomes. Adjusting for important prognostic factors or baseline covariates in generalized linear models may improve the estimation efficiency. The model-based mean for a treatment group produced by most software packages estimates the response at the mean covariate, not the mean response for this treatment group for the studied population. Although this is not an issue for linear models, the model-based group mean estimates in generalized linear models could be seriously biased for the true group means. We propose a new method to estimate the group mean consistently with the corresponding variance estimation. Simulation showed the proposed method produces an unbiased estimator for the group means and provided the correct coverage probability. The proposed method was applied to analyze hypoglycemia data from clinical trials in diabetes. Copyright © 2014 John Wiley & Sons, Ltd.
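One standard consistent approach to the group-mean problem described above is marginal standardization: predict for every subject with treatment set to the group of interest and average, rather than predicting once at the mean covariate. A logistic-regression sketch with simulated data, illustrating the distinction rather than reproducing the authors' exact estimator:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 1000
treat = rng.integers(0, 2, n)
x = rng.normal(size=n)                                  # baseline covariate
p = 1 / (1 + np.exp(-(-0.5 + 1.0 * treat + 0.8 * x)))
y = rng.binomial(1, p)

X = np.column_stack([np.ones(n), treat, x])
fit = sm.GLM(y, X, family=sm.families.Binomial()).fit()

def group_mean(t):
    """Average predicted response with everyone assigned treatment t,
    keeping observed covariates (marginal standardization)."""
    Xt = np.column_stack([np.ones(n), np.full(n, t), x])
    return fit.predict(Xt).mean()

naive = fit.predict([[1, 1, x.mean()]])[0]  # response at the mean covariate
marginal = group_mean(1)                    # consistent group-mean estimate
```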
Nichols, James D.; Pollock, Kenneth H.; Hines, James E.
1984-01-01
The robust design of Pollock (1982) was used to estimate parameters of a Maryland M. pennsylvanicus population. Closed model tests provided strong evidence of heterogeneity of capture probability, and model M_h (Otis et al., 1978) was selected as the most appropriate model for estimating population size. The Jolly-Seber model goodness-of-fit test indicated rejection of the model for this data set, and the M_h estimates of population size were all higher than the Jolly-Seber estimates. Both of these results are consistent with the evidence of heterogeneous capture probabilities. The authors thus used M_h estimates of population size, Jolly-Seber estimates of survival rate, and estimates of birth-immigration based on a combination of the population size and survival rate estimates. Advantages of the robust design estimates for certain inference procedures are discussed, and the design is recommended for future small mammal capture-recapture studies directed at estimation.
Sun, Wei; Chou, Chih-Ping; Stacy, Alan W; Ma, Huiyan; Unger, Jennifer; Gallaher, Peggy
2007-02-01
Cronbach's α is widely used in social science research to estimate the internal consistency reliability of a measurement scale. However, when items are not strictly parallel, the Cronbach's α coefficient provides a lower-bound estimate of true reliability, and this estimate may be further biased downward when items are dichotomous. The estimation of standardized Cronbach's α for a scale with dichotomous items can be improved by using the upper bound of coefficient phi. SAS and SPSS macros have been developed in this article to obtain standardized Cronbach's α via this method. The simulation analysis showed that Cronbach's α from upper-bound phi might be appropriate for estimating the real reliability when standardized Cronbach's α is problematic.
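For reference, the raw and standardized alpha computations look like the sketch below; the phi-upper-bound correction for dichotomous items lives in the authors' SAS/SPSS macros and is not reproduced here.

```python
import numpy as np

def cronbach_alpha(items):
    """Raw Cronbach's alpha for an (n_subjects x k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

def standardized_alpha(items):
    """Standardized alpha from the mean inter-item correlation."""
    r = np.corrcoef(np.asarray(items, dtype=float).T)
    k = r.shape[0]
    r_bar = r[np.triu_indices(k, 1)].mean()
    return k * r_bar / (1 + (k - 1) * r_bar)
```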
NASA Astrophysics Data System (ADS)
Hyer, E. J.; Peterson, D. A.; Curtis, C. A.; Schmidt, C. C.; Hoffman, J.; Prins, E. M.
2014-12-01
The Fire Locating and Monitoring of Burning Emissions (FLAMBE) system converts satellite observations of thermally anomalous pixels into spatially and temporally continuous estimates of smoke release from open biomass burning. This system currently processes data from a constellation of 5 geostationary and 2 polar-orbiting sensors. Additional sensors, including NPP VIIRS and the imager on the Korea COMS-1 geostationary satellite, will soon be added. This constellation experiences schedule changes and outages of various durations, making the set of available scenes for fire detection highly variable on an hourly and daily basis. Adding to the complexity, the latency of the satellite data is variable between and within sensors. FLAMBE shares with many fire detection systems the goal of detecting as many fires as possible as early as possible, but the FLAMBE system must also produce a consistent estimate of smoke production with minimal artifacts from the changing constellation. To achieve this, NRL has developed a system of asynchronous processing and cross-calibration that permits satellite data to be used as it arrives, while preserving the consistency of the smoke emission estimates. This talk describes the asynchronous data ingest methodology, including latency statistics for the constellation. We also provide an overview and show results from the system we have developed to normalize multi-sensor fire detection for consistency.
NASA Astrophysics Data System (ADS)
Kirstetter, P. E.; Petersen, W. A.; Gourley, J. J.; Kummerow, C.; Huffman, G. J.; Turk, J.; Tanelli, S.; Maggioni, V.; Anagnostou, E. N.; Hong, Y.; Schwaller, M.
2017-12-01
Accurate characterization of uncertainties in space-borne precipitation estimates is critical for many applications including water budget studies or prediction of natural hazards at the global scale. The GPM precipitation Level II (active and passive) and Level III (IMERG) estimates are compared to the high quality and high resolution NEXRAD-based precipitation estimates derived from the NOAA/NSSL's Multi-Radar, Multi-Sensor (MRMS) platform. A surface reference is derived from the MRMS suite of products to be accurate with known uncertainty bounds and measured at a resolution below the pixel sizes of any GPM estimate, providing great flexibility in matching to grid scales or footprints. It provides an independent and consistent reference research framework for directly evaluating GPM precipitation products across a large number of meteorological regimes as a function of resolution, accuracy and sample size. The consistency of the ground and space-based sensors in terms of precipitation detection, typology and quantification is systematically evaluated. Satellite precipitation retrievals are further investigated in terms of precipitation distributions, systematic biases and random errors, influence of precipitation sub-pixel variability and comparison between satellite products. Prognostic analysis directly provides feedback to algorithm developers on how to improve the satellite estimates. Specific factors for passive (e.g. surface conditions for GMI) and active (e.g. non uniform beam filling for DPR) sensors are investigated. This cross-product characterization acts as a bridge to intercalibrate microwave measurements from the GPM constellation satellites and to propagate error characteristics to the combined and global precipitation estimates. Precipitation features previously used to analyze Level II satellite estimates under various precipitation processes are now introduced for Level III to test several assumptions in the IMERG algorithm. Specifically, the contribution of Level II is explicitly characterized and a rigorous characterization is performed to migrate across scales, fully understanding the propagation of errors from Level II to Level III. Perspectives are presented to advance the use of uncertainty as an integral part of QPE for ground-based and space-borne sensors.
Assessing REDD+ performance of countries with low monitoring capacities: the matrix approach
NASA Astrophysics Data System (ADS)
Bucki, M.; Cuypers, D.; Mayaux, P.; Achard, F.; Estreguil, C.; Grassi, G.
2012-03-01
Estimating emissions from deforestation and degradation of forests in many developing countries is so uncertain that the effects of changes in forest management could remain within error ranges (i.e. undetectable) for several years. Meanwhile UNFCCC Parties need consistent time series of meaningful performance indicators to set credible benchmarks and allocate REDD+ incentives to the countries, programs and activities that actually reduce emissions, while providing social and environmental benefits. Introducing widespread measuring of carbon in forest land (which would be required to estimate more accurately changes in emissions from degradation and forest management) will take time and considerable resources. To ensure the overall credibility and effectiveness of REDD+, parties must consider the design of cost-effective systems which can provide reliable and comparable data on anthropogenic forest emissions. Remote sensing can provide consistent time series of land cover maps for most non-Annex-I countries, retrospectively. These maps can be analyzed to identify the forests that are intact (i.e. beyond significant human influence), and whose fragmentation could be a proxy for degradation. This binary stratification of forests biomes (intact/non-intact), a transition matrix and the use of default carbon stock change factors can then be used to provide initial estimates of trends in emission changes. A proof-of-concept is provided for one biome of the Democratic Republic of the Congo over a virtual commitment period (2005-2010). This approach could allow assessment of the performance of the five REDD+ activities (deforestation, degradation, conservation, management and enhancement of forest carbon stocks) in a spatially explicit, verifiable manner. Incentives could then be tailored to prioritize activities depending on the national context and objectives.
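The matrix approach amounts to cross-tabulating land-cover classes between two dates and multiplying each transition area by a default carbon stock change factor. A toy example with hypothetical areas and factors (not the Democratic Republic of the Congo figures):

```python
import numpy as np

classes = ["intact forest", "non-intact forest", "non-forest"]

# Hypothetical transition matrix, 2005 -> 2010, areas in kha (rows: 2005, cols: 2010)
area = np.array([[900.0,  40.0,  10.0],
                 [  0.0, 480.0,  25.0],
                 [  0.0,   5.0, 540.0]])

# Hypothetical default carbon stock change factors (tC/ha lost per transition);
# negative values represent sequestration from regrowth.
delta_c = np.array([[0.0,  80.0, 160.0],
                    [0.0,   0.0,  80.0],
                    [0.0, -40.0,   0.0]])

emissions_ktC = (area * delta_c).sum()   # kha * tC/ha = ktC over the period
```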
Using the GOCE star trackers for validating the calibration of its accelerometers
NASA Astrophysics Data System (ADS)
Visser, P. N. A. M.
2017-12-01
A method for validating the calibration parameters of the six accelerometers on board the Gravity field and steady-state Ocean Circulation Explorer (GOCE) from star tracker observations, originally tested by an end-to-end simulation, has been updated and applied to real data from GOCE. It is shown that the method provides estimates of scale factors for all three axes of the six GOCE accelerometers that are consistent at a level significantly better than 0.01 compared to the a priori calibrated value of 1. In addition, relative accelerometer biases and drift terms were estimated consistent with values obtained by precise orbit determination, where the first GOCE accelerometer served as reference. The calibration results clearly reveal the different behavior of the sensitive and less-sensitive accelerometer axes.
Singer, Donald A.; Kouda, Ryoichi
2011-01-01
Empirical evidence indicates that processes affecting number and quantity of resources in geologic settings are very general across deposit types. Sizes of permissive tracts that geologically could contain the deposits are excellent predictors of numbers of deposits. In addition, total ore tonnage of mineral deposits of a particular type in a tract is proportional to the type’s median tonnage in a tract. Regressions using size of permissive tracts and median tonnage allow estimation of number of deposits and of total tonnage of mineralization. These powerful estimators, based on 10 different deposit types from 109 permissive worldwide control tracts, generalize across deposit types. Estimates of number of deposits and of total tonnage of mineral deposits are made by regressing permissive area, and mean (in logs) tons in deposits of the type, against number of deposits and total tonnage of deposits in the tract for the 50th percentile estimates. The regression equations (R2 = 0.91 and 0.95) can be used for all deposit types just by inserting logarithmic values of permissive area in square kilometers, and mean tons in deposits in millions of metric tons. The regression equations provide estimates at the 50th percentile, and other equations are provided for 90% confidence limits for lower estimates and 10% confidence limits for upper estimates of number of deposits and total tonnage. Equations for these percentile estimates along with expected value estimates are presented here along with comparisons with independent expert estimates. Also provided are the equations for correcting for the known well-explored deposits in a tract. These deposit-density models require internally consistent grade and tonnage models and delineations for arriving at unbiased estimates.
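The published regressions take the form log10(N) = a + b·log10(area) + c·log10(median tonnage) for the 50th percentile estimate. The sketch below shows how such an equation is applied; the coefficients are placeholders for illustration, not the fitted values from the paper.

```python
import numpy as np

def estimated_deposits(area_km2, median_tonnage_mt, a=0.5, b=0.6, c=-0.4):
    """Median (50th percentile) deposit-number estimate from a deposit-density
    regression: log10(N) = a + b*log10(area) + c*log10(median tons).
    Coefficients here are placeholders, not the published fit."""
    return 10 ** (a + b * np.log10(area_km2) + c * np.log10(median_tonnage_mt))

n50 = estimated_deposits(area_km2=25000, median_tonnage_mt=100)
# The paper provides parallel equations for the 90% and 10% confidence limits
# and for correcting for known, well-explored deposits in the tract.
```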
FRAGS: estimation of coding sequence substitution rates from fragmentary data
Swart, Estienne C; Hide, Winston A; Seoighe, Cathal
2004-01-01
Background Rates of substitution in protein-coding sequences can provide important insights into evolutionary processes that are of biomedical and theoretical interest. Increased availability of coding sequence data has enabled researchers to estimate more accurately the coding sequence divergence of pairs of organisms. However the use of different data sources, alignment protocols and methods to estimate substitution rates leads to widely varying estimates of key parameters that define the coding sequence divergence of orthologous genes. Although complete genome sequence data are not available for all organisms, fragmentary sequence data can provide accurate estimates of substitution rates provided that an appropriate and consistent methodology is used and that differences in the estimates obtainable from different data sources are taken into account. Results We have developed FRAGS, an application framework that uses existing, freely available software components to construct in-frame alignments and estimate coding substitution rates from fragmentary sequence data. Coding sequence substitution estimates for human and chimpanzee sequences, generated by FRAGS, reveal that methodological differences can give rise to significantly different estimates of important substitution parameters. The estimated substitution rates were also used to infer upper-bounds on the amount of sequencing error in the datasets that we have analysed. Conclusion We have developed a system that performs robust estimation of substitution rates for orthologous sequences from a pair of organisms. Our system can be used when fragmentary genomic or transcript data is available from one of the organisms and the other is a completely sequenced genome within the Ensembl database. As well as estimating substitution statistics our system enables the user to manage and query alignment and substitution data. PMID:15005802
Raman spectroscopy of CNC-and CNF-based nanocomposites
Umesh P. Agarwal
2017-01-01
In this chapter, applications of Raman spectroscopy to nanocelluloses and nanocellulose composites are reviewed, and it is shown how use of various techniques in Raman can provide unique information. Some of the most important uses consisted of identification of cellulose nanomaterials, estimation of cellulose crystallinity, study of dispersion of cellulose...
Essays on Policy Evaluation with Endogenous Adoption
ERIC Educational Resources Information Center
Gentile, Elisabetta
2011-01-01
Over the last decade, experimental and quasi-experimental methods have been favored by researchers in empirical economics, as they provide unbiased causal estimates. However, when implementing a program, it is often not possible to randomly assign subjects to treatment, leading to a possible endogeneity bias. This dissertation consists of two…
Business/Clerical/Sales. Career Education Guide.
ERIC Educational Resources Information Center
Dependents Schools (DOD), Washington, DC. European Area.
The curriculum guide is designed to provide students with realistic training in business/clerical/sales theory and practices within the secondary educational framework and to prepare them for entry into an occupation or continuing postsecondary education. Each unit plan consists of a description of the area under consideration, estimated hours of…
Molnár, Péter K; Klanjscek, Tin; Derocher, Andrew E; Obbard, Martyn E; Lewis, Mark A
2009-08-01
Many species experience large fluctuations in food availability and depend on energy from fat and protein stores for survival, reproduction and growth. Body condition and, more specifically, energy stores thus constitute key variables in the life history of many species. Several indices exist to quantify body condition but none can provide the amount of stored energy. To estimate energy stores in mammals, we propose a body composition model that differentiates between structure and storage of an animal. We develop and parameterize the model specifically for polar bears (Ursus maritimus Phipps) but all concepts are general and the model could be easily adapted to other mammals. The model provides predictive equations to estimate structural mass, storage mass and storage energy from an appropriately chosen measure of body length and total body mass. The model also provides a means to estimate basal metabolic rates from body length and consecutive measurements of total body mass. Model estimates of body composition, structural mass, storage mass and energy density of 970 polar bears from Hudson Bay were consistent with the life history and physiology of polar bears. Metabolic rate estimates of fasting adult males derived from the body composition model corresponded closely to theoretically expected and experimentally measured metabolic rates. Our method is simple, non-invasive and provides considerably more information on the energetic status of individuals than currently available methods.
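The model's predictive equations follow the pattern sketched below: an allometric length relation gives structural mass, storage mass is the remainder, and storage energy follows from a tissue energy density. All constants here are illustrative placeholders, not the fitted polar bear parameters.

```python
def storage_energy(total_mass_kg, straight_length_m,
                   a=14.0, b=3.0, energy_density_mj_per_kg=25.0):
    """Body-composition sketch: structural mass from an allometric length
    relation (a * L**b), storage mass as the remainder, and storage energy
    from a tissue energy density. Constants are made-up placeholders."""
    structural = a * straight_length_m ** b
    storage = max(total_mass_kg - structural, 0.0)
    return storage * energy_density_mj_per_kg   # MJ of stored energy

e = storage_energy(total_mass_kg=400, straight_length_m=2.2)
```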
Self-reported physical activity among blacks: estimates from national surveys.
Whitt-Glover, Melicia C; Taylor, Wendell C; Heath, Gregory W; Macera, Caroline A
2007-11-01
National surveillance data provide population-level estimates of physical activity participation, but generally do not include detailed subgroup analyses, which could provide a better understanding of physical activity among subgroups. This paper presents a descriptive analysis of self-reported regular physical activity among black adults using data from the 2003 Behavioral Risk Factor Surveillance System (n=19,189), the 2004 National Health Interview Survey (n=4263), and the 1999-2004 National Health and Nutrition Examination Survey (n=3407). Analyses were conducted between January and March 2006. Datasets were analyzed separately to estimate the proportion of black adults meeting national physical activity recommendations overall and stratified by gender and other demographic subgroups. The proportion of black adults reporting regular physical activity ranged from 24% to 36%. Regular physical activity was highest among men; younger age groups; the highest education and income groups; those who were employed and married; overweight, but not obese, men; and normal-weight women. This pattern was consistent across surveys. The observed physical activity patterns were consistent with national trends. The data suggest that older black adults and those with low education and income levels are at greatest risk for inactive lifestyles and may require additional attention in efforts to increase physical activity in black adults. The variability across datasets reinforces the need for objective measures in national surveys.
Maturation of Structural Health Management Systems for Solid Rocket Motors
NASA Technical Reports Server (NTRS)
Quing, Xinlin; Beard, Shawn; Zhang, Chang
2011-01-01
Concepts of an autonomous and automated space-compliant diagnostic system were developed for condition-based maintenance (CBM) of rocket motors for space exploration vehicles. The diagnostic system will provide real-time information on the integrity of critical structures on launch vehicles, improve their performance, and greatly increase crew safety while decreasing inspection costs. Using the SMART Layer technology as a basis, detailed procedures and calibration techniques for implementation of the diagnostic system were developed. The diagnostic system is a distributed system, which consists of a sensor network, local data loggers, and a host central processor. The system detects external impact to the structure; its major functions include estimation of the impact location, the impact force at that location, and the structural damage at that location. The system consists of a large-area sensor network and dedicated multiple local data loggers with signal processing and data analysis software to allow for real-time, in situ monitoring and long-term tracking of the structural integrity of solid rocket motors. Specifically, the system could provide easy installation of large sensor networks, onboard operation under harsh environments and loading, inspection of inaccessible areas without disassembly, detection of impact events and impact damage in real time, and monitoring of a large area with local data processing to reduce wiring.
Analysis of the costs and payments of a coordinated stroke center and regional stroke network.
Rymer, Marilyn M; Armstrong, Edward P; Meredith, Neil R; Pham, Sissi V; Thorpe, Kevin; Kruzikas, Denise T
2013-08-01
An earlier study demonstrated significantly improved access, treatment, and outcomes after the implementation of a progressive, comprehensive stroke program at a tertiary care community hospital, Saint Luke's Neuroscience Institute (SLNI). This study evaluated the costs associated with implementing such a program. A retrospective analysis was performed of total hospital costs and payments for treating patients with ischemic stroke at SLNI (n=1570) as program enhancement evolved over time (2005, 2007, and 2010), with results compared against published national estimates. Analyses were stratified by patient demographic characteristics, patient outcomes, treatments, time, and comorbidities. Controlling for inflation, there was no difference in SLNI total costs between 2005 and either 2007 or 2010, suggesting that while SLNI provided an increased level of services, any additional expenditures were offset by efficiencies. SLNI total costs were slightly lower than published benchmarks. Consistent with previous stroke care cost estimates, the median overall differential between total hospital costs and payments for all ischemic stroke cases was negative. SLNI total costs remained consistent over time and were slightly lower than previously published estimates, suggesting that a focused, streamlined stroke program can be implemented without a significant economic impact. This finding further demonstrates that providing comprehensive stroke care with improved access and treatment may be financially feasible for other hospitals.
Dorazio, Robert M.
2012-01-01
Several models have been developed to predict the geographic distribution of a species by combining measurements of covariates of occurrence at locations where the species is known to be present with measurements of the same covariates at other locations where species occurrence status (presence or absence) is unknown. In the absence of species detection errors, spatial point-process models and binary-regression models for case-augmented surveys provide consistent estimators of a species’ geographic distribution without prior knowledge of species prevalence. In addition, these regression models can be modified to produce estimators of species abundance that are asymptotically equivalent to those of the spatial point-process models. However, if species presence locations are subject to detection errors, neither class of models provides a consistent estimator of covariate effects unless the covariates of species abundance are distinct and independently distributed from the covariates of species detection probability. These analytical results are illustrated using simulation studies of data sets that contain a wide range of presence-only sample sizes. Analyses of presence-only data of three avian species observed in a survey of landbirds in western Montana and northern Idaho are compared with site-occupancy analyses of detections and nondetections of these species.
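The case-augmented binary-regression idea can be sketched as a logistic regression of presence points against background points, which under perfect detection approximates the covariate effects of the spatial point-process model. The simulated covariate and sample sizes below are arbitrary.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Simulated presence-only data with perfect detection.
n_presence, n_background = 200, 1000
x_pres = rng.normal(1.0, 1.0, n_presence)    # presences concentrate at high covariate values
x_back = rng.normal(0.0, 1.0, n_background)  # background points sample availability

X = np.concatenate([x_pres, x_back]).reshape(-1, 1)
y = np.concatenate([np.ones(n_presence), np.zeros(n_background)])

# The slope estimates the covariate effect on log-intensity; the intercept is
# confounded with prevalence and the number of background points.
fit = LogisticRegression().fit(X, y)
print("estimated covariate effect:", fit.coef_[0][0])
```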
Larson, R L; Step, D L
2012-03-01
Bovine respiratory disease complex is the leading cause of morbidity and mortality in feedlot cattle. A number of vaccines against bacterial respiratory pathogens are commercially available and researchers have studied their impact on morbidity, mortality, and other disease outcome measures in feedlot cattle. A systematic review will provide veterinarians with a rigorous and transparent evaluation of the published literature to estimate the extent of vaccine effect. Unfortunately, the published body of evidence does not provide a consistent estimate of the direction and magnitude of effectiveness in feedlot cattle vaccination against Mannheimia haemolytica, Pasteurella multocida, or Histophilus somni.
SAMICS Validation. SAMICS Support Study, Phase 3
NASA Technical Reports Server (NTRS)
1979-01-01
SAMICS provides a consistent basis for estimating array costs and for comparing production technology costs. A review and a validation of the SAMICS model are reported. The review had the following purposes: (1) to test the computational validity of the computer model by comparison with preliminary hand calculations based on conventional cost estimating techniques; (2) to review and improve the accuracy of the cost relationships being used by the model; and (3) to provide an independent verification to users of the model's value in decision making for allocation of research and development funds and for investment in manufacturing capacity. It is concluded that the SAMICS model is a flexible, accurate, and useful tool for managerial decision making.
Direct single-cell biomass estimates for marine bacteria via Archimedes' principle
Cermak, Nathan; Becker, Jamie W; Knudsen, Scott M; Chisholm, Sallie W; Manalis, Scott R; Polz, Martin F
2017-01-01
Microbes are an essential component of marine food webs and biogeochemical cycles, and therefore precise estimates of their biomass are of significant value. Here, we measured single-cell biomass distributions of isolates from several numerically abundant marine bacterial groups, including Pelagibacter (SAR11), Prochlorococcus and Vibrio using a microfluidic mass sensor known as a suspended microchannel resonator (SMR). We show that the SMR can provide biomass (dry mass) measurements for cells spanning more than two orders of magnitude and that these estimates are consistent with other independent measures. We find that Pelagibacterales strain HTCC1062 has a median biomass of 11.9±0.7 fg per cell, which is five- to twelve-fold smaller than the median Prochlorococcus cell's biomass (depending upon strain) and nearly 100-fold lower than that of rapidly growing V. splendidus strain 13B01. Knowing the biomass contributions from various taxonomic groups will provide more precise estimates of total marine biomass, aiding models of nutrient flux in the ocean. PMID:27922599
NASA Astrophysics Data System (ADS)
Kit Luk, Chuen; Chesi, Graziano
2015-11-01
This paper addresses the estimation of the domain of attraction for discrete-time nonlinear systems where the vector field is subject to changes. First, the paper considers the case of switched systems, where the vector field is allowed to arbitrarily switch among the elements of a finite family. Second, the paper considers the case of hybrid systems, where the state space is partitioned into several regions described by polynomial inequalities, and the vector field is defined on each region independently from the other ones. In both cases, the problem consists of computing the largest sublevel set of a Lyapunov function included in the domain of attraction. An approach is proposed for solving this problem based on convex programming, which provides a guaranteed inner estimate of the sought sublevel set. The conservatism of the provided estimate can be decreased by increasing the size of the optimisation problem. Some numerical examples illustrate the proposed approach.
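The paper's guaranteed inner estimates come from convex programming; as a plainly simpler stand-in, the sketch below checks a quadratic Lyapunov function on random samples and keeps the largest sublevel set on which V decreases, for an invented polynomial discrete-time system.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # Invented discrete-time polynomial system (not from the paper).
    return np.array([0.5 * x[0] + x[1] ** 2, 0.5 * x[1] - x[0] * x[1]])

P = np.eye(2)              # assumed quadratic Lyapunov function V(x) = x' P x
V = lambda x: x @ P @ x

# Largest c such that V decreases at every sampled point with V(x) <= c:
# a sampling-based (not guaranteed) inner estimate of a sublevel set
# contained in the domain of attraction.
samples = rng.uniform(-2.0, 2.0, size=(20000, 2))
viol = [V(x) for x in samples if V(f(x)) >= V(x) and V(x) > 0]
c = min(viol) if viol else float("inf")
print("inner sublevel-set estimate: V(x) <", c)
```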
Direct single-cell biomass estimates for marine bacteria via Archimedes' principle.
Cermak, Nathan; Becker, Jamie W; Knudsen, Scott M; Chisholm, Sallie W; Manalis, Scott R; Polz, Martin F
2017-03-01
Microbes are an essential component of marine food webs and biogeochemical cycles, and therefore precise estimates of their biomass are of significant value. Here, we measured single-cell biomass distributions of isolates from several numerically abundant marine bacterial groups, including Pelagibacter (SAR11), Prochlorococcus and Vibrio using a microfluidic mass sensor known as a suspended microchannel resonator (SMR). We show that the SMR can provide biomass (dry mass) measurements for cells spanning more than two orders of magnitude and that these estimates are consistent with other independent measures. We find that Pelagibacterales strain HTCC1062 has a median biomass of 11.9±0.7 fg per cell, which is five- to twelve-fold smaller than the median Prochlorococcus cell's biomass (depending upon strain) and nearly 100-fold lower than that of rapidly growing V. splendidus strain 13B01. Knowing the biomass contributions from various taxonomic groups will provide more precise estimates of total marine biomass, aiding models of nutrient flux in the ocean.
Accuracy or precision: Implications of sample design and methodology on abundance estimation
Kowalewski, Lucas K.; Chizinski, Christopher J.; Powell, Larkin A.; Pope, Kevin L.; Pegg, Mark A.
2015-01-01
Sampling by spatially replicated counts (point-count) is an increasingly popular method of estimating population size of organisms. Challenges exist when sampling by the point-count method: it is often impractical to sample the entire area of interest and impossible to detect every individual present. Ecologists encounter logistical limitations that force them to sample either a few large sample units or many small sample units, introducing biases to sample counts. We generated a computer environment and simulated sampling scenarios to test the role of the number of samples, sample unit area, number of organisms, and distribution of organisms in the estimation of population sizes using N-mixture models. Many sample units of small area provided estimates that were consistently closer to true abundance than sample scenarios with few sample units of large area. However, sample scenarios with few sample units of large area provided more precise abundance estimates than those derived from sample scenarios with many sample units of small area. Accuracy and precision of abundance estimates should be weighed during the sample design process, with study goals and objectives fully recognized; in practice, however, this consideration is often an afterthought that occurs during the data analysis process.
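The design trade-off can be illustrated with a toy simulation (not the N-mixture fits used in the study): a clustered population sampled with either many small or few large quadrats of equal total area, summarized by a simple expansion estimator. All numbers are invented, and unit overlap is ignored for simplicity.

```python
import numpy as np

rng = np.random.default_rng(42)
AREA, N_TRUE, N_REPS = 100.0, 500, 1000   # 10 x 10 region, abundance, replicates

def survey(n_units, unit_area):
    # Clustered population: organisms scattered around a few hotspots.
    centers = rng.uniform(0, 10, size=(10, 2))
    pts = centers[rng.integers(0, 10, N_TRUE)] + rng.normal(0, 0.5, (N_TRUE, 2))
    side = np.sqrt(unit_area)
    counts = []
    for _ in range(n_units):                  # square units placed at random
        x0, y0 = rng.uniform(0, 10 - side, 2)
        inside = ((pts[:, 0] >= x0) & (pts[:, 0] < x0 + side) &
                  (pts[:, 1] >= y0) & (pts[:, 1] < y0 + side))
        counts.append(inside.sum())
    return AREA * np.mean(counts) / unit_area  # expansion estimator of N

for n_units, unit_area in [(25, 1.0), (4, 6.25)]:   # equal total sampled area
    est = [survey(n_units, unit_area) for _ in range(N_REPS)]
    print(n_units, "units:", round(np.mean(est), 1), "+/-", round(np.std(est), 1))
```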
Madec, Simon; Baret, Fred; de Solan, Benoît; Thomas, Samuel; Dutartre, Dan; Jezequel, Stéphane; Hemmerlé, Matthieu; Colombeau, Gallian; Comar, Alexis
2017-01-01
The capacity of LiDAR and Unmanned Aerial Vehicles (UAVs) to provide plant height estimates as a high-throughput plant phenotyping trait was explored. An experiment on wheat genotypes grown under well-watered and water-stress modalities was conducted. Frequent LiDAR measurements were performed along the growth cycle using a phénomobile unmanned ground vehicle. A UAV equipped with a high-resolution RGB camera flew over the experiment several times to retrieve the digital surface model from structure-from-motion techniques. Both techniques provide a 3D dense point cloud from which the plant height can be estimated. Plant height was first defined as the z-value below which 99.5% of the points of the dense cloud lie. This provides good consistency with manual measurements of plant height (RMSE = 3.5 cm) while minimizing the variability along each microplot. Results show that LiDAR and structure-from-motion plant height values are always consistent. However, a slight under-estimation is observed for structure-from-motion techniques, related to the coarser spatial resolution of UAV imagery and the limited penetration capacity of structure from motion as compared to LiDAR. Very high heritability values (H² > 0.90) were found for both techniques when lodging was not present. The dynamics of plant height show that it carries pertinent information regarding the period and magnitude of plant stress. Further, the date when the maximum plant height is reached was found to be very heritable (H² > 0.88) and a good proxy of the flowering stage. Finally, the capacity of plant height as a proxy for total above-ground biomass and yield is discussed.
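The height metric itself is a one-liner: the 99.5th percentile of point elevations above ground. A small sketch, assuming the microplot's point cloud and ground elevation have already been extracted; the simulated cloud values are invented.

```python
import numpy as np

def plant_height(z_values, ground_z=0.0, percentile=99.5):
    """Plant height as the 99.5th percentile of point heights above ground.

    z_values: z-coordinates of the dense 3D point cloud for one microplot.
    ground_z: assumed ground elevation (e.g., from a bare-soil terrain model).
    """
    return np.percentile(np.asarray(z_values) - ground_z, percentile)

# Example: noisy canopy around 0.8 m plus a few spurious high points,
# which the 99.5% cutoff discards.
rng = np.random.default_rng(0)
cloud = np.concatenate([rng.normal(0.8, 0.05, 5000), rng.uniform(1.2, 2.0, 5)])
print(round(plant_height(cloud), 3), "m")
```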
NASA Astrophysics Data System (ADS)
Martinez, M.; Rocha, B.; Li, M.; Shi, G.; Beltempo, A.; Rutledge, R.; Yanishevsky, M.
2012-11-01
The National Research Council Canada (NRC) has worked on the development of structural health monitoring (SHM) test platforms for assessing the performance of sensor systems for load monitoring applications. The first SHM platform consists of a 5.5 m cantilever aluminum beam that provides an optimal scenario for evaluating the ability of a load monitoring system to measure bending, torsion and shear loads. The second SHM platform adds a level of structural complexity, consisting of aluminum skins with bonded/riveted stringers, typical of an aircraft lower wing structure. These two load monitoring platforms are well characterized and documented, providing loading conditions similar to those encountered during service. In this study, a micro-electro-mechanical system (MEMS) for acquiring data from triads of gyroscopes, accelerometers and magnetometers is described. The system was used to compute changes in angles at discrete stations along the platforms. The angles obtained from the MEMS were used to compute a second-, third- or fourth-order polynomial surface from which displacements at every point could be computed. The use of a new Kalman filter was evaluated for angle estimation, from which displacements in the structure were computed. The outputs of the newly developed algorithms were then compared to the displacements obtained from the linear variable displacement transducers connected to the platforms. The displacement curves were subsequently post-processed, either analytically or with the help of a finite element model of the structure, to estimate strains and loads. The estimated strains were compared with baseline strain gauge instrumentation installed on the platforms. This new approach for load monitoring was able to provide accurate estimates of applied strains and shear loads.
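The angle-to-displacement step can be sketched by fitting a polynomial to measured slope angles along the span and integrating once, with the zero integration constant enforcing zero displacement at the clamped root (small-angle assumption; the station angles below are invented).

```python
import numpy as np

# Slope angles (radians) measured at stations along a cantilever beam.
stations = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.5])          # m from the root
angles = np.array([0.0, 0.004, 0.009, 0.015, 0.022, 0.033])  # hypothetical values

slope = np.polyfit(stations, angles, deg=3)  # polynomial fit to the slope curve
defl = np.polyint(slope)                     # integrate slope to displacement;
                                             # polyint's zero constant pins the
                                             # clamped root at zero displacement
print("tip deflection:", np.polyval(defl, 5.5), "m")
```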
Madec, Simon; Baret, Fred; de Solan, Benoît; Thomas, Samuel; Dutartre, Dan; Jezequel, Stéphane; Hemmerlé, Matthieu; Colombeau, Gallian; Comar, Alexis
2017-01-01
The capacity of LiDAR and Unmanned Aerial Vehicles (UAVs) to provide plant height estimates as a high-throughput plant phenotyping trait was explored. An experiment on wheat genotypes grown under well-watered and water-stress modalities was conducted. Frequent LiDAR measurements were performed along the growth cycle using a phénomobile unmanned ground vehicle. A UAV equipped with a high-resolution RGB camera flew over the experiment several times to retrieve the digital surface model from structure-from-motion techniques. Both techniques provide a 3D dense point cloud from which the plant height can be estimated. Plant height was first defined as the z-value below which 99.5% of the points of the dense cloud lie. This provides good consistency with manual measurements of plant height (RMSE = 3.5 cm) while minimizing the variability along each microplot. Results show that LiDAR and structure-from-motion plant height values are always consistent. However, a slight under-estimation is observed for structure-from-motion techniques, related to the coarser spatial resolution of UAV imagery and the limited penetration capacity of structure from motion as compared to LiDAR. Very high heritability values (H² > 0.90) were found for both techniques when lodging was not present. The dynamics of plant height show that it carries pertinent information regarding the period and magnitude of plant stress. Further, the date when the maximum plant height is reached was found to be very heritable (H² > 0.88) and a good proxy of the flowering stage. Finally, the capacity of plant height as a proxy for total above-ground biomass and yield is discussed. PMID:29230229
NASA Astrophysics Data System (ADS)
Gupta, Pawan; Joiner, Joanna; Vasilkov, Alexander; Bhartia, Pawan K.
2016-07-01
Estimates of top-of-the-atmosphere (TOA) radiative flux are essential for the understanding of Earth's energy budget and climate system. Clouds, aerosols, water vapor, and ozone (O3) are among the most important atmospheric agents impacting the Earth's shortwave (SW) radiation budget. There are several sensors in orbit that provide independent information related to these parameters. Having coincident information from these sensors is important for understanding their potential contributions. The A-train constellation of satellites provides a unique opportunity to analyze data from several of these sensors. In this paper, retrievals of cloud/aerosol parameters and total column ozone (TCO) from the Aura Ozone Monitoring Instrument (OMI) have been collocated with the Aqua Clouds and Earth's Radiant Energy System (CERES) estimates of total reflected TOA outgoing SW flux (SWF). We use these data to develop a variety of neural networks that estimate TOA SWF globally over ocean and land using only OMI data and other ancillary information as inputs and CERES TOA SWF as the output for training purposes. OMI-estimated TOA SWF from the trained neural networks reproduces independent CERES data with high fidelity. The global mean daily TOA SWF calculated from OMI is consistently within ±1% of CERES throughout the year 2007. Application of our neural network method to other sensors that provide similar retrieved parameters, both past and future, can produce similar estimates of TOA SWF. For example, the well-calibrated Total Ozone Mapping Spectrometer (TOMS) series could provide estimates of TOA SWF dating back to late 1978.
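A minimal stand-in for the training step, using scikit-learn's MLPRegressor on synthetic proxies for the collocated OMI inputs and CERES SWF target; the real system's inputs, architecture, and data handling are not specified here, so everything below is illustrative.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Hypothetical collocated inputs (e.g., cloud fraction, cloud optical depth,
# aerosol optical depth, total column ozone, solar zenith) and a synthetic
# TOA SWF target standing in for CERES observations.
rng = np.random.default_rng(0)
X = rng.uniform(size=(5000, 5))
swf = 100 + 600 * X[:, 0] * X[:, 1] + 50 * X[:, 2] + rng.normal(0, 5, 5000)

X_tr, X_te, y_tr, y_te = train_test_split(X, swf, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
net.fit(X_tr, y_tr)
print("held-out R^2:", net.score(X_te, y_te))
```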
NASA Technical Reports Server (NTRS)
Gupta, Pawan; Joiner, Joanna; Vasilkov, Alexander; Bhartia, Pawan K.
2016-01-01
Estimates of top-of-the-atmosphere (TOA) radiative flux are essential for the understanding of Earth's energy budget and climate system. Clouds, aerosols, water vapor, and ozone (O3) are among the most important atmospheric agents impacting the Earth's shortwave (SW) radiation budget. There are several sensors in orbit that provide independent information related to these parameters. Having coincident information from these sensors is important for understanding their potential contributions. The A-train constellation of satellites provides a unique opportunity to analyze data from several of these sensors. In this paper, retrievals of cloud/aerosol parameters and total column ozone (TCO) from the Aura Ozone Monitoring Instrument (OMI) have been collocated with the Aqua Clouds and Earth's Radiant Energy System (CERES) estimates of total reflected TOA outgoing SW flux (SWF). We use these data to develop a variety of neural networks that estimate TOA SWF globally over ocean and land using only OMI data and other ancillary information as inputs and CERES TOA SWF as the output for training purposes. OMI-estimated TOA SWF from the trained neural networks reproduces independent CERES data with high fidelity. The global mean daily TOA SWF calculated from OMI is consistently within 1% of CERES throughout the year 2007. Application of our neural network method to other sensors that provide similar retrieved parameters, both past and future, can produce similar estimates of TOA SWF. For example, the well-calibrated Total Ozone Mapping Spectrometer (TOMS) series could provide estimates of TOA SWF dating back to late 1978.
Adaptive Video Streaming Using Bandwidth Estimation for 3.5G Mobile Network
NASA Astrophysics Data System (ADS)
Nam, Hyeong-Min; Park, Chun-Su; Jung, Seung-Won; Ko, Sung-Jea
Currently deployed mobile networks including High Speed Downlink Packet Access (HSDPA) offer only best-effort Quality of Service (QoS). In wireless best effort networks, the bandwidth variation is a critical problem, especially, for mobile devices with small buffers. This is because the bandwidth variation leads to packet losses caused by buffer overflow as well as picture freezing due to high transmission delay or buffer underflow. In this paper, in order to provide seamless video streaming over HSDPA, we propose an efficient real-time video streaming method that consists of the available bandwidth (AB) estimation for the HSDPA network and the transmission rate control to prevent buffer overflows/underflows. In the proposed method, the client estimates the AB and the estimated AB is fed back to the server through real-time transport control protocol (RTCP) packets. Then, the server adaptively adjusts the transmission rate according to the estimated AB and the buffer state obtained from the RTCP feedback information. Experimental results show that the proposed method achieves seamless video streaming over the HSDPA network providing higher video quality and lower transmission delay.
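The server-side adaptation can be sketched as a function of the RTCP-reported available bandwidth (AB) estimate and client buffer occupancy: encoding below the AB when the buffer is short of its target lets the buffer refill without exceeding the network rate. The function name, gains, and thresholds are illustrative, not values from the paper.

```python
def select_bitrate(est_ab_kbps, buffer_ms, target_ms=2000, gain=0.0003):
    """Pick a media encoding bitrate from the estimated available bandwidth
    (AB) and the client buffer occupancy reported via RTCP feedback.

    Encoding below the AB while the buffer is short refills it, protecting
    against underflow (picture freezing); the cap at the AB protects against
    overflow and queueing delay. Constants are illustrative.
    """
    scale = 1.0 + gain * (buffer_ms - target_ms)       # < 1 when buffer is low
    return est_ab_kbps * min(max(scale, 0.5), 1.0)     # never exceed the AB

print(select_bitrate(800.0, 500.0))   # low buffer -> encode below AB to refill
print(select_bitrate(800.0, 2000.0))  # at target  -> match the estimated AB
```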
Efficient parameter estimation in longitudinal data analysis using a hybrid GEE method.
Leung, Denis H Y; Wang, You-Gan; Zhu, Min
2009-07-01
The method of generalized estimating equations (GEEs) provides consistent estimates of the regression parameters in a marginal regression model for longitudinal data, even when the working correlation model is misspecified (Liang and Zeger, 1986). However, the efficiency of a GEE estimate can be seriously affected by the choice of the working correlation model. This study addresses this problem by proposing a hybrid method that combines multiple GEEs based on different working correlation models, using the empirical likelihood method (Qin and Lawless, 1994). Analyses show that this hybrid method is more efficient than a GEE using a misspecified working correlation model. Furthermore, if one of the working correlation structures correctly models the within-subject correlations, then this hybrid method provides the most efficient parameter estimates. In simulations, the hybrid method's finite-sample performance is superior to a GEE under any of the commonly used working correlation models and is almost fully efficient in all scenarios studied. The hybrid method is illustrated using data from a longitudinal study of the respiratory infection rates in 275 Indonesian children.
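Fitting the same marginal model under several working correlation structures is straightforward with statsmodels; the hybrid empirical-likelihood combination itself is not shown, but the sketch illustrates on simulated data how GEE point estimates remain consistent across structures.

```python
import numpy as np
import statsmodels.api as sm

# Simulated longitudinal data: 100 subjects, 4 visits each, shared subject effect.
rng = np.random.default_rng(0)
n, t = 100, 4
subject = np.repeat(np.arange(n), t)
x = rng.normal(size=n * t)
u = np.repeat(rng.normal(size=n), t)
y = 1.0 + 0.5 * x + u + rng.normal(size=n * t)

X = sm.add_constant(x)
# Same marginal model under two working correlation structures; the point
# estimates agree (consistency), while efficiency depends on the structure.
for cov in (sm.cov_struct.Independence(), sm.cov_struct.Exchangeable()):
    res = sm.GEE(y, X, groups=subject, cov_struct=cov).fit()
    print(type(cov).__name__, res.params)
```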
NASA Astrophysics Data System (ADS)
Naghibolhosseini, Maryam; Long, Glenis
2011-11-01
The distortion product otoacoustic emission (DPOAE) input/output (I/O) function may provide a potential tool for evaluating cochlear compression. Hearing loss raises the level of sound that is just audible to the listener, which affects cochlear compression and thus the dynamic range of hearing. Although the slope of the I/O function is highly variable when the total DPOAE is used, separating the nonlinear-generator component from the reflection component reduces this variability. We separated the two components using least squares fit (LSF) analysis of logarithmic sweeping tones, and confirmed that the separated generator component provides more consistent I/O functions than the total DPOAE. In this paper we estimated the slope of the I/O functions of the generator components at different sound levels using LSF analysis. An artificial neural network (ANN) was used to estimate psychophysical thresholds from the estimated slopes of the I/O functions. DPOAE I/O functions determined in this way may help to estimate hearing thresholds and cochlear health.
Improving size estimates of open animal populations by incorporating information on age
Manly, Bryan F.J.; McDonald, Trent L.; Amstrup, Steven C.; Regehr, Eric V.
2003-01-01
Around the world, a great deal of effort is expended each year to estimate the sizes of wild animal populations. Unfortunately, population size has proven to be one of the most intractable parameters to estimate. The capture-recapture estimation models most commonly used (of the Jolly-Seber type) are complicated and require numerous, sometimes questionable, assumptions. The derived estimates usually have large variances and lack consistency over time. In capture-recapture studies of long-lived animals, the ages of captured animals can often be determined with great accuracy and relative ease. We show how to incorporate age information into size estimates for open populations, where the size changes through births, deaths, immigration, and emigration. The proposed method allows more precise estimates of population size than the usual models, and it can provide these estimates from two sample occasions rather than the three usually required. Moreover, this method does not require specialized programs for capture-recapture data; researchers can derive their estimates using the logistic regression module in any standard statistical package.
Canopy near-infrared reflectance and terrestrial photosynthesis.
Badgley, Grayson; Field, Christopher B; Berry, Joseph A
2017-03-01
Global estimates of terrestrial gross primary production (GPP) remain highly uncertain, despite decades of satellite measurements and intensive in situ monitoring. We report a new approach for quantifying the near-infrared reflectance of terrestrial vegetation (NIRv). NIRv provides a foundation for a new approach to estimate GPP that consistently untangles the confounding effects of background brightness, leaf area, and the distribution of photosynthetic capacity with depth in canopies using existing moderate spatial and spectral resolution satellite sensors. NIRv is strongly correlated with solar-induced chlorophyll fluorescence, a direct index of photons intercepted by chlorophyll, and with site-level and globally gridded estimates of GPP. NIRv makes it possible to use existing and future reflectance data as a starting point for accurately estimating GPP.
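Computing the index is trivial once the reflectances are in hand: NIRv is the product of NDVI and NIR reflectance. A small sketch with invented reflectance values:

```python
import numpy as np

def nirv(nir, red):
    """NIRv: NDVI multiplied by NIR reflectance."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    ndvi = (nir - red) / (nir + red)
    return ndvi * nir

# Example reflectances: dense canopy vs sparse vegetation (invented values).
print(nirv(0.45, 0.05), nirv(0.25, 0.12))
```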
Regression analysis of longitudinal data with correlated censoring and observation times.
Li, Yang; He, Xin; Wang, Haiying; Sun, Jianguo
2016-07-01
Longitudinal data occur in many fields, such as medical follow-up studies that involve repeated measurements. For their analysis, most existing approaches assume that the observation or follow-up times are independent of the response process, either completely or given some covariates. In practice, it is apparent that this may not be true. In this paper, we present a joint analysis approach that allows for possible mutual correlations, characterized by time-dependent random effects. Estimating equations are developed for the parameter estimation, and the resulting estimators are shown to be consistent and asymptotically normal. The finite sample performance of the proposed estimators is assessed through a simulation study, and an illustrative example from a skin cancer study is provided.
Canopy near-infrared reflectance and terrestrial photosynthesis
Badgley, Grayson; Field, Christopher B.; Berry, Joseph A.
2017-01-01
Global estimates of terrestrial gross primary production (GPP) remain highly uncertain, despite decades of satellite measurements and intensive in situ monitoring. We report a new approach for quantifying the near-infrared reflectance of terrestrial vegetation (NIRV). NIRV provides a foundation for a new approach to estimate GPP that consistently untangles the confounding effects of background brightness, leaf area, and the distribution of photosynthetic capacity with depth in canopies using existing moderate spatial and spectral resolution satellite sensors. NIRV is strongly correlated with solar-induced chlorophyll fluorescence, a direct index of photons intercepted by chlorophyll, and with site-level and globally gridded estimates of GPP. NIRV makes it possible to use existing and future reflectance data as a starting point for accurately estimating GPP. PMID:28345046
The epidemiological modelling of dysthymia: application for the Global Burden of Disease Study 2010.
Charlson, Fiona J; Ferrari, Alize J; Flaxman, Abraham D; Whiteford, Harvey A
2013-10-01
In order to capture the differences in burden between the subtypes of depression, the Global Burden of Disease 2010 Study for the first time estimated the burden of dysthymia and major depressive disorder separately from the previously used umbrella term 'unipolar depression'. Global summaries of epidemiological parameters are necessary inputs for burden of disease calculations across 21 world regions, males and females, and for the years 1990, 2005 and 2010. This paper reports findings from a systematic review of global epidemiological data and the subsequent development of an internally consistent epidemiological model of dysthymia. A systematic search was conducted to identify data sources for the prevalence, incidence, remission and excess mortality of dysthymia using Medline, PsycINFO and EMBASE electronic databases and grey literature. DisMod-MR, a Bayesian meta-regression tool, was used to check the epidemiological parameters for internal consistency and to predict estimates for world regions with no or few data. The systematic review identified 38 studies meeting inclusion criteria, which provided 147 data points for 30 countries in 13 of 21 world regions. Prevalence increases from the early ages, peaking at around 50 years. Females have a higher prevalence of dysthymia than males. Global pooled prevalence remained constant across time points at 1.55% (95%CI 1.50-1.60). There was very little regional variation in prevalence estimates. There were eight GBD world regions for which we found no data and for which DisMod-MR had to impute estimates. The addition of internally consistent epidemiological estimates by world region, age, sex and year for dysthymia contributed to a more comprehensive estimate of mental health burden in GBD 2010. © 2013 Elsevier B.V. All rights reserved.
Uncertainty Estimation in Tsunami Initial Condition From Rapid Bayesian Finite Fault Modeling
NASA Astrophysics Data System (ADS)
Benavente, R. F.; Dettmer, J.; Cummins, P. R.; Urrutia, A.; Cienfuegos, R.
2017-12-01
It is well known that kinematic rupture models for a given earthquake can present discrepancies even when similar datasets are employed in the inversion process. While quantifying this variability can be critical when making early estimates of the earthquake and triggered tsunami impact, "most likely models" are normally used for this purpose. In this work, we quantify the uncertainty of the tsunami initial condition for the great Illapel earthquake (Mw = 8.3, 2015, Chile). We focus on utilizing data and inversion methods that are suitable to rapid source characterization yet provide meaningful and robust results. Rupture models from teleseismic body and surface waves as well as W-phase are derived and accompanied by Bayesian uncertainty estimates from linearized inversion under positivity constraints. We show that robust and consistent features about the rupture kinematics appear when working within this probabilistic framework. Moreover, by using static dislocation theory, we translate the probabilistic slip distributions into seafloor deformation which we interpret as a tsunami initial condition. After considering uncertainty, our probabilistic seafloor deformation models obtained from different data types appear consistent with each other providing meaningful results. We also show that selecting just a single "representative" solution from the ensemble of initial conditions for tsunami propagation may lead to overestimating information content in the data. Our results suggest that rapid, probabilistic rupture models can play a significant role during emergency response by providing robust information about the extent of the disaster.
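The deterministic core of such a linearized inversion with positivity constraints can be sketched as non-negative least squares on a hypothetical Green's-function matrix; the Bayesian treatment above adds the posterior uncertainty quantification that this point estimate lacks.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

# Hypothetical linear forward model d = G s: s is slip on 20 subfaults, d is
# 50 observations. In practice G would come from elastic Green's functions.
G = rng.normal(size=(50, 20))
s_true = np.maximum(rng.normal(1.0, 1.0, 20), 0.0)  # non-negative slip
d = G @ s_true + rng.normal(0, 0.1, 50)

# Positivity-constrained least squares recovers the slip point estimate.
s_hat, resid = nnls(G, d)
print("max slip:", s_hat.max(), "residual norm:", resid)
```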
Modeling trends from North American Breeding Bird Survey data: a spatially explicit approach
Bled, Florent; Sauer, John R.; Pardieck, Keith L.; Doherty, Paul; Royle, J. Andy
2013-01-01
Population trends, defined as interval-specific proportional changes in population size, are often used to help identify species of conservation interest. Efficient modeling of such trends depends on the consideration of the correlation of population changes with key spatial and environmental covariates. This can provide insights into causal mechanisms and allow spatially explicit summaries at scales that are of interest to management agencies. We expand the hierarchical modeling framework used in the North American Breeding Bird Survey (BBS) by developing a spatially explicit model of temporal trend using a conditional autoregressive (CAR) model. By adopting a formal spatial model for abundance, we produce spatially explicit abundance and trend estimates. Analyses based on large-scale geographic strata such as Bird Conservation Regions (BCR) can suffer from basic imbalances in spatial sampling. Our approach addresses this issue by providing an explicit weighting based on the fundamental sample allocation unit of the BBS. We applied the spatial model to three species from the BBS. Species have been chosen based upon their well-known population change patterns, which allows us to evaluate the quality of our model and the biological meaning of our estimates. We also compare our results with the ones obtained for BCRs using a nonspatial hierarchical model (Sauer and Link 2011). Globally, estimates for mean trends are consistent between the two approaches but spatial estimates provide much more precise trend estimates in regions on the edges of species ranges that were poorly estimated in non-spatial analyses. Incorporating a spatial component in the analysis not only allows us to obtain relevant and biologically meaningful estimates for population trends, but also enables us to provide a flexible framework in order to obtain trend estimates for any area.
NASA Astrophysics Data System (ADS)
Hincks, Ian; Granade, Christopher; Cory, David G.
2018-01-01
The analysis of photon count data from the standard nitrogen vacancy (NV) measurement process is treated as a statistical inference problem. This has applications in obtaining better and more rigorous error bars for tasks such as parameter estimation (e.g. magnetometry), tomography, and randomized benchmarking. We start by providing a summary of the standard phenomenological model of the NV optical process in terms of Lindblad jump operators. This model is used to derive random variables describing emitted photons during measurement, to which finite visibility, dark counts, and imperfect state preparation are added. NV spin-state measurement is then stated as an abstract statistical inference problem consisting of an underlying biased coin obstructed by three Poisson rates. Relevant frequentist and Bayesian estimators are provided, discussed, and quantitatively compared. We show numerically that the risk of the maximum likelihood estimator is well approximated by the Cramér-Rao bound, for which we provide a simple formula. Of the estimators, we in particular promote the Bayes estimator, owing to its slightly better risk performance, and straightforward error propagation into more complex experiments. This is illustrated on experimental data, where quantum Hamiltonian learning is performed and cross-validated in a fully Bayesian setting, and compared to a more traditional weighted least squares fit.
Wang, Tianli; Baron, Kyle; Zhong, Wei; Brundage, Richard; Elmquist, William
2014-03-01
The current study presents a Bayesian approach to non-compartmental analysis (NCA), which provides accurate and precise estimates of AUC0-∞ and any AUC0-∞-based NCA parameter or derivation. In order to assess the performance of the proposed method, 1,000 simulated datasets were generated in different scenarios. A Bayesian method was used to estimate the tissue and plasma AUC0-∞ values and the tissue-to-plasma AUC0-∞ ratio. The posterior medians and the coverage of 95% credible intervals for the true parameter values were examined. The method was applied to laboratory data from a mouse brain distribution study with a serial sacrifice design for illustration. The Bayesian NCA approach is accurate and precise in point estimation of the AUC0-∞ and the partition coefficient under a serial sacrifice design. It also provides a consistently good variance estimate, even considering the variability of the data and the physiological structure of the pharmacokinetic model. The application in the case study obtained a physiologically reasonable posterior distribution of AUC, with a posterior median close to the value estimated by classic Bailer-type methods. This Bayesian NCA approach for sparse data analysis provides statistical inference on the variability of AUC0-∞-based parameters such as the partition coefficient and drug targeting index, so that the comparison of these parameters following destructive sampling becomes statistically feasible.
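For context, the classic (non-Bayesian) NCA computation of AUC0-∞ is a trapezoidal sum to the last sample plus a log-linear tail extrapolation C_last/λz; the Bayesian approach above provides inference for such quantities. The sampling times and concentrations below are invented.

```python
import numpy as np

def auc_0_inf(t, c, n_tail=3):
    """Classic NCA estimate of AUC from time 0 to infinity.

    Linear trapezoid to the last sample plus the tail area C_last / lambda_z,
    with lambda_z from a log-linear fit to the last n_tail concentrations.
    """
    t, c = np.asarray(t, float), np.asarray(c, float)
    auc_last = np.sum(np.diff(t) * (c[1:] + c[:-1]) / 2.0)  # trapezoidal rule
    lambda_z = -np.polyfit(t[-n_tail:], np.log(c[-n_tail:]), 1)[0]
    return auc_last + c[-1] / lambda_z

t = [0.25, 0.5, 1, 2, 4, 8, 12]            # h (invented sampling times)
c = [2.1, 3.5, 3.0, 2.2, 1.1, 0.35, 0.11]  # mg/L (invented concentrations)
print(auc_0_inf(t, c), "mg*h/L")
```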
NASA Technical Reports Server (NTRS)
Rosch, E.
1975-01-01
The task of time estimation, an activity occasionally performed by pilots during actual flight, was investigated with the objective of providing human factors investigators with an unobtrusive and minimally loading additional task that is sensitive to differences in flying conditions and flight instrumentation associated with the main task of piloting an aircraft simulator. Previous research indicated that the duration and consistency of time estimates is associated with the cognitive, perceptual, and motor loads imposed by concurrent simple tasks. The relationships between the length and variability of time estimates and concurrent task variables under a more complex situation involving simulated flight were clarified. The wrap-around effect with respect to baseline duration, a consequence of mode switching at intermediate levels of concurrent task distraction, should contribute substantially to estimate variability and have a complex effect on the shape of the resulting distribution of estimates.
Information matrix estimation procedures for cognitive diagnostic models.
Liu, Yanlou; Xin, Tao; Andersson, Björn; Tian, Wei
2018-03-06
Two new methods to estimate the asymptotic covariance matrix for marginal maximum likelihood estimation of cognitive diagnosis models (CDMs), the inverse of the observed information matrix and the sandwich-type estimator, are introduced. Unlike several previous covariance matrix estimators, the new methods take into account both the item and structural parameters. The relationships between the observed information matrix, the empirical cross-product information matrix, the sandwich-type covariance matrix and the two approaches proposed by de la Torre (2009, J. Educ. Behav. Stat., 34, 115) are discussed. Simulation results show that, for a correctly specified CDM and Q-matrix or with a slightly misspecified probability model, the observed information matrix and the sandwich-type covariance matrix exhibit good performance with respect to providing consistent standard errors of item parameter estimates. However, with substantial model misspecification only the sandwich-type covariance matrix exhibits robust performance. © 2018 The British Psychological Society.
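A generic sketch of the sandwich-type covariance A⁻¹BA⁻¹, assuming per-respondent score vectors and the Hessian of the log-likelihood at the MLE are available; it is not tied to any particular CDM implementation.

```python
import numpy as np

def sandwich_covariance(scores, hessian):
    """Sandwich-type covariance for maximum likelihood estimates.

    scores:  (n, p) per-observation score vectors evaluated at the MLE.
    hessian: (p, p) Hessian of the total log-likelihood at the MLE.
    Returns A^{-1} B A^{-1}, with A = -hessian (observed information) and
    B the sum of score outer products (cross-product information).
    """
    A = -hessian
    B = scores.T @ scores
    A_inv = np.linalg.inv(A)
    return A_inv @ B @ A_inv

# Under a correctly specified model, A and B agree in expectation
# (information equality), so the sandwich collapses toward A^{-1};
# under misspecification, only the sandwich form remains robust.
```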
LACIE performance predictor FOC users manual
NASA Technical Reports Server (NTRS)
1976-01-01
The LACIE Performance Predictor (LPP) is a computer simulation of the LACIE process for predicting worldwide wheat production. The simulation provides for the introduction of various errors into the system and provides estimates based on these errors, thus allowing the user to determine the impact of selected error sources. The FOC LPP simulates the acquisition of the sample segment data by the LANDSAT Satellite (DAPTS), the classification of the agricultural area within the sample segment (CAMS), the estimation of the wheat yield (YES), and the production estimation and aggregation (CAS). These elements include data acquisition characteristics, environmental conditions, classification algorithms, the LACIE aggregation and data adjustment procedures. The operational structure for simulating these elements consists of the following key programs: (1) LACIE Utility Maintenance Process, (2) System Error Executive, (3) Ephemeris Generator, (4) Access Generator, (5) Acquisition Selector, (6) LACIE Error Model (LEM), and (7) Post Processor.
James, Eric P.; Benjamin, Stanley G.; Marquis, Melinda
2016-10-28
A new gridded dataset for wind and solar resource estimation over the contiguous United States has been derived from hourly updated 1-h forecasts from the National Oceanic and Atmospheric Administration High-Resolution Rapid Refresh (HRRR) 3-km model composited over a three-year period (approximately 22,000 forecast model runs). The unique dataset features hourly data assimilation, and provides physically consistent wind and solar estimates for the renewable energy industry. The wind resource dataset shows strong similarity to that previously provided by a Department of Energy-funded study, and it includes estimates in southern Canada and northern Mexico. The solar resource dataset represents an initial step towards application-specific fields such as global horizontal and direct normal irradiance. This combined dataset will continue to be augmented with new forecast data from the advanced HRRR atmospheric/land-surface model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schmittroth, F.
1979-09-01
A documentation of the FERRET data analysis code is given. The code provides a way to combine related measurements and calculations in a consistent evaluation. Basically a very general least-squares code, it is oriented towards problems frequently encountered in nuclear data and reactor physics. A strong emphasis is on the proper treatment of uncertainties and correlations and in providing quantitative uncertainty estimates. Documentation includes a review of the method, structure of the code, input formats, and examples.
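The core of such a least-squares evaluation with correlated uncertainties is the generalized least-squares combination x̂ = (AᵀC⁻¹A)⁻¹AᵀC⁻¹y, whose inverse normal matrix supplies the quantitative uncertainty estimate. A small sketch (not FERRET itself) combining two correlated measurements of one quantity:

```python
import numpy as np

def gls_combine(A, y, C):
    """Generalized least squares with correlated uncertainties.

    A: (m, n) sensitivity of each measurement to each parameter.
    y: (m,) measured values; C: (m, m) measurement covariance.
    Returns parameter estimates and their covariance.
    """
    Ci = np.linalg.inv(C)
    cov = np.linalg.inv(A.T @ Ci @ A)   # quantitative uncertainty estimate
    x = cov @ A.T @ Ci @ y
    return x, cov

# Two correlated measurements of the same quantity.
A = np.array([[1.0], [1.0]])
y = np.array([10.0, 12.0])
C = np.array([[1.0, 0.5], [0.5, 2.0]])
x, cov = gls_combine(A, y, C)
print(x, np.sqrt(np.diag(cov)))
```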
Spatio-temporal Granger causality: a new framework
Luo, Qiang; Lu, Wenlian; Cheng, Wei; Valdes-Sosa, Pedro A.; Wen, Xiaotong; Ding, Mingzhou; Feng, Jianfeng
2015-01-01
That physiological oscillations of various frequencies are present in fMRI signals is the rule, not the exception. Herein, we propose a novel theoretical framework, spatio-temporal Granger causality, which allows us to more reliably and precisely estimate the Granger causality from experimental datasets possessing time-varying properties caused by physiological oscillations. Within this framework, Granger causality is redefined as a global index measuring the directed information flow between two time series with time-varying properties. Both theoretical analyses and numerical examples demonstrate that Granger causality is a monotonically increasing function of the temporal resolution used in the estimation. This is consistent with the general principle of coarse graining, which causes information loss by smoothing out very fine-scale details in time and space. Our results confirm that the Granger causality at the finer spatio-temporal scales considerably outperforms the traditional approach in terms of an improved consistency between two resting-state scans of the same subject. To optimally estimate the Granger causality, the proposed theoretical framework is implemented through a combination of several approaches, such as dividing the optimal time window and estimating the parameters at the fine temporal and spatial scales. Taken together, our approach provides a novel and robust framework for estimating the Granger causality from fMRI, EEG, and other related data. PMID:23643924
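A standard time-domain bivariate Granger test, which the spatio-temporal framework above generalizes, is available in statsmodels; the sketch also coarse-grains the simulated series to echo the paper's point that estimated causality depends on temporal resolution.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 2000
x = np.zeros(n)
y = np.zeros(n)
for t in range(1, n):
    x[t] = 0.5 * x[t - 1] + rng.normal()
    y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + rng.normal()  # x drives y

# Test whether x Granger-causes y at full resolution and after coarse graining;
# downsampling (step 4) smooths out fine-scale detail and weakens the evidence.
for step in (1, 4):
    data = np.column_stack([y[::step], x[::step]])
    res = grangercausalitytests(data, maxlag=2, verbose=False)
    print("step", step, "lag-1 p-value:", res[1][0]["ssr_ftest"][1])
```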
Yong, Alan K.; Hough, Susan E.; Iwahashi, Junko; Braverman, Amy
2012-01-01
We present an approach based on geomorphometry to predict material properties and characterize site conditions using the VS30 parameter (time-averaged shear-wave velocity to a depth of 30 m). Our framework consists of an automated terrain classification scheme based on taxonomic criteria (slope gradient, local convexity, and surface texture) that systematically identifies 16 terrain types from 1-km spatial resolution (30 arcsec) Shuttle Radar Topography Mission digital elevation models (SRTM DEMs). Using 853 VS30 values from California, we apply a simulation-based statistical method to determine the mean VS30 for each terrain type in California. We then compare the VS30 values with models based on individual proxies, such as mapped surface geology and topographic slope, and show that our systematic terrain-based approach consistently performs better than semiempirical estimates based on individual proxies. To further evaluate our model, we apply our California-based estimates to terrains of the contiguous United States. Comparisons of our estimates with 325 VS30 measurements outside of California, as well as estimates based on the topographic slope model, indicate our method to be statistically robust and more accurate. Our approach thus provides an objective and robust method for extending estimates of VS30 for regions where in situ measurements are sparse or not readily available.
Volume estimation using food specific shape templates in mobile image-based dietary assessment
NASA Astrophysics Data System (ADS)
Chae, Junghoon; Woo, Insoo; Kim, SungYe; Maciejewski, Ross; Zhu, Fengqing; Delp, Edward J.; Boushey, Carol J.; Ebert, David S.
2011-03-01
As obesity concerns mount, dietary assessment methods for prevention and intervention are being developed. These methods include recording, cataloging and analyzing daily dietary records to monitor energy and nutrient intakes. Given the ubiquity of mobile devices with built-in cameras, one possible means of improving dietary assessment is through photographing foods and inputting these images into a system that can determine the nutrient content of foods in the images. One of the critical issues in such an image-based dietary assessment tool is the accurate and consistent estimation of food portion sizes. The objective of our study is to automatically estimate food volumes through the use of food-specific shape templates. In our system, users capture food images using a mobile phone camera. Based on information (i.e., food name and code) determined through food segmentation and classification of the food images, our system chooses a particular food template shape corresponding to each segmented food. Finally, our system reconstructs the three-dimensional properties of the food shape from a single image by extracting feature points in order to size the food shape template. By employing this template-based approach, our system automatically estimates food portion size, providing a consistent method for estimating food volume.
A robust design mark-resight abundance estimator allowing heterogeneity in resighting probabilities
McClintock, B.T.; White, Gary C.; Burnham, K.P.
2006-01-01
This article introduces the beta-binomial estimator (BBE), a closed-population abundance mark-resight model combining the favorable qualities of maximum likelihood theory and the allowance of individual heterogeneity in sighting probability (p). The model may be parameterized for a robust sampling design consisting of multiple primary sampling occasions where closure need not be met between primary occasions. We applied the model to brown bear data from three study areas in Alaska and compared its performance to the joint hypergeometric estimator (JHE) and Bowden's estimator (BOWE). BBE estimates suggest heterogeneity levels were non-negligible and discourage the use of JHE for these data. Compared to JHE and BOWE, confidence intervals were considerably shorter for the AICc model-averaged BBE. To evaluate the properties of BBE relative to JHE and BOWE when sample sizes are small, simulations were performed with data from three primary occasions generated under both individual heterogeneity and temporal variation in p. All models remained consistent regardless of levels of variation in p. In terms of precision, the AICc model-averaged BBE showed advantages over JHE and BOWE when heterogeneity was present and mean sighting probabilities were similar between primary occasions. Based on the conditions examined, BBE is a reliable alternative to JHE or BOWE and provides a framework for further advances in mark-resight abundance estimation. © 2006 American Statistical Association and the International Biometric Society.
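A simplified sketch of the beta-binomial idea: fit the heterogeneity distribution to the resighting counts of marked animals by maximum likelihood, then scale unmarked sightings by the expected sightings per animal. This moment-style abundance step is a stand-in for the full BBE likelihood, and all numbers below are invented.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import betabinom

# Resighting counts of marked animals over k occasions (invented data).
k = 5
y_marked = np.array([0, 1, 1, 2, 2, 2, 3, 3, 4, 5, 1, 0, 2, 3, 4])

def negloglik(theta):
    a, b = np.exp(theta)                       # keep shape parameters positive
    return -betabinom.logpmf(y_marked, k, a, b).sum()

fit = minimize(negloglik, x0=[0.0, 0.0])
a, b = np.exp(fit.x)
mean_p = a / (a + b)                           # mean sighting probability

# Moment-style abundance step (simplified, not the full BBE likelihood):
# unmarked sightings divided by the expected sightings per animal, k * E[p].
total_unmarked_sightings = 260                 # invented
n_unmarked = total_unmarked_sightings / (k * mean_p)
print("estimated N:", len(y_marked) + n_unmarked)
```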
Analysis and Management of Animal Populations: Modeling, Estimation and Decision Making
Williams, B.K.; Nichols, J.D.; Conroy, M.J.
2002-01-01
This book deals with the processes involved in making informed decisions about the management of animal populations. It covers the modeling of population responses to management actions, the estimation of quantities needed in the modeling effort, and the application of these estimates and models to the development of sound management decisions. The book synthesizes and integrates in a single volume the methods associated with these themes, as they apply to ecological assessment and conservation of animal populations. Key features: integrates population modeling, parameter estimation and decision-theoretic approaches to management in a single, cohesive framework; provides authoritative, state-of-the-art descriptions of quantitative approaches to modeling, estimation and decision-making; emphasizes the role of mathematical modeling in the conduct of science and management; and utilizes a unifying biological context, consistent mathematical notation, and numerous biological examples.
NASA Technical Reports Server (NTRS)
Tomaine, R. L.
1976-01-01
Flight test data from a large 'crane' type helicopter were collected and processed for the purpose of identifying vehicle rigid body stability and control derivatives. The process consisted of using digital and Kalman filtering techniques for state estimation and Extended Kalman filtering for parameter identification, utilizing a least squares algorithm for initial derivative and variance estimates. Data were processed for indicated airspeeds from 0 m/sec to 152 m/sec. Pulse, doublet and step control inputs were investigated. Digital filter frequency did not have a major effect on the identification process, while the initial derivative estimates and the estimated variances had an appreciable effect on many derivative estimates. The major derivatives identified agreed fairly well with analytical predictions and engineering experience. Doublet control inputs provided better results than pulse or step inputs.
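The least-squares initialization step can be sketched by regressing measured state rates on states and controls to recover the derivative matrices; the dynamics, noise levels, and coefficients below are simulated stand-ins for the flight records.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated time histories: x' = A x + B u, with measurement noise on rates.
A_true = np.array([[-0.02, 0.1], [-0.3, -0.8]])   # illustrative derivatives
B_true = np.array([[0.0], [1.5]])
dt, n = 0.05, 2000
x = np.zeros((n, 2))
u = rng.normal(size=(n, 1))
for t in range(n - 1):
    x[t + 1] = x[t] + dt * (A_true @ x[t] + B_true @ u[t])
xdot = np.gradient(x, dt, axis=0) + rng.normal(0, 0.01, (n, 2))

# Least-squares estimate of [A B] from the regressors [x u].
Z = np.hstack([x, u])
theta, *_ = np.linalg.lstsq(Z, xdot, rcond=None)
print("estimated A:\n", theta[:2].T, "\nestimated B:\n", theta[2:].T)
```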
In search of a corrected prescription drug elasticity estimate: a meta-regression approach.
Gemmill, Marin C; Costa-Font, Joan; McGuire, Alistair
2007-06-01
An understanding of the relationship between cost sharing and drug consumption depends on consistent and unbiased price elasticity estimates. However, there is wide heterogeneity among studies, which constrains the applicability of elasticity estimates for empirical purposes and policy simulation. This paper attempts to provide a corrected measure of the drug price elasticity by employing meta-regression analysis (MRA). The results indicate that the elasticity estimates are significantly different from zero, and the corrected elasticity is -0.209 when the results are made robust to heteroskedasticity and clustering of observations. Elasticity values are higher when the study was published in an economic journal, when the study employed a greater number of observations, and when the study used aggregate data. Elasticity estimates are lower when the institutional setting was a tax-based health insurance system.
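The meta-regression itself can be sketched as a regression of reported elasticities on study characteristics with cluster-robust standard errors (several estimates per study); the data below are simulated stand-ins and the coefficient signs are arbitrary.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical study-level data: elasticity estimates and study traits.
rng = np.random.default_rng(0)
n = 60
econ_journal = rng.integers(0, 2, n)
log_nobs = rng.normal(8, 1, n)
aggregate = rng.integers(0, 2, n)
study_id = rng.integers(0, 30, n)          # multiple estimates per study
elasticity = (-0.2 - 0.05 * econ_journal - 0.02 * (log_nobs - 8)
              - 0.04 * aggregate + rng.normal(0, 0.05, n))

X = sm.add_constant(np.column_stack([econ_journal, log_nobs, aggregate]))
# Cluster-robust errors account for several estimates from the same study.
res = sm.OLS(elasticity, X).fit(cov_type="cluster",
                                cov_kwds={"groups": study_id})
print(res.params)   # the intercept plays the role of a corrected elasticity
```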
Fuel Burn Estimation Using Real Track Data
NASA Technical Reports Server (NTRS)
Chatterji, Gano B.
2011-01-01
A procedure for estimating fuel burned based on actual flight track data, and drag and fuel-flow models is described. The procedure consists of estimating aircraft and wind states, lift, drag and thrust. Fuel-flow for jet aircraft is determined in terms of thrust, true airspeed and altitude as prescribed by the Base of Aircraft Data fuel-flow model. This paper provides a theoretical foundation for computing fuel-flow with most of the information derived from actual flight data. The procedure does not require an explicit model of thrust and calibrated airspeed/Mach profile which are typically needed for trajectory synthesis. To validate the fuel computation method, flight test data provided by the Federal Aviation Administration were processed. Results from this method show that fuel consumed can be estimated within 1% of the actual fuel consumed in the flight test. Next, fuel consumption was estimated with simplified lift and thrust models. Results show negligible difference with respect to the full model without simplifications. An iterative takeoff weight estimation procedure is described for estimating fuel consumption, when takeoff weight is unavailable, and for establishing fuel consumption uncertainty bounds. Finally, the suitability of using radar-based position information for fuel estimation is examined. It is shown that fuel usage could be estimated within 5.4% of the actual value using positions reported in the Airline Situation Display to Industry data with simplified models and iterative takeoff weight computation.
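The thrust and fuel-flow chain can be sketched from the point-mass equations: thrust balances drag, along-path acceleration, and the weight component along the climb angle, and fuel flow then follows a BADA-style thrust-specific consumption. The cf1/cf2 coefficients are hypothetical placeholders, since the aircraft-specific BADA values are licensed.

```python
import math

G = 9.81  # m/s^2

def fuel_flow_kg_s(drag_n, mass_kg, dv_dt, gamma_rad, tas_ms,
                   cf1=0.7e-5, cf2=300.0):
    """Thrust from point-mass flight mechanics, then BADA-style fuel flow.

    Thrust balances drag, along-path acceleration, and the weight component
    along the climb angle gamma. cf1 and cf2 are hypothetical placeholders
    standing in for aircraft-specific BADA fuel coefficients.
    """
    thrust_n = drag_n + mass_kg * dv_dt + mass_kg * G * math.sin(gamma_rad)
    eta = cf1 * (1.0 + tas_ms / cf2)   # thrust-specific fuel consumption
    return eta * thrust_n

# Level, unaccelerated cruise: thrust equals drag.
print(fuel_flow_kg_s(drag_n=50e3, mass_kg=60e3, dv_dt=0.0,
                     gamma_rad=0.0, tas_ms=230.0), "kg/s")
```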
Woody, Carol Ann; Johnson, D.H.; Shrier, Brianna M.; O'Neal, Jennifer S.; Knutzen, John A.; Augerot, Xanthippe; O'Neal, Thomas A.; Pearsons, Todd N.
2007-01-01
Counting towers provide an accurate, low-cost, low-maintenance, low-technology, and easily mobilized escapement estimation program compared to other methods (e.g., weirs, hydroacoustics, mark-recapture, and aerial surveys) (Thompson 1962; Siebel 1967; Cousens et al. 1982; Symons and Waldichuk 1984; Anderson 2000; Alaska Department of Fish and Game 2003). Counting tower data have been found to be consistent with digital video counts (Edwards 2005). Counting towers do not interfere with natural fish migration patterns, nor are fish handled or stressed; however, their use is generally limited to clear rivers that meet specific site selection criteria. The data provided by counting tower sampling allow fishery managers to determine reproductive population size, estimate total return (escapement + catch) and its uncertainty, evaluate population productivity and trends, set harvest rates, determine spawning escapement goals, and forecast future returns (Alaska Department of Fish and Game 1974-2000 and 1975-2004). The number of spawning fish is determined by subtracting subsistence catch, sport catch, and prespawn mortality from the total estimated escapement. The methods outlined in this protocol for tower counts can be used to provide reasonable estimates (±6%-10%) of reproductive salmon population size and run timing in clear rivers.
The tomato genome sequence provides insight into fleshy fruit evolution
USDA-ARS?s Scientific Manuscript database
The genome of the inbred tomato cultivar ‘Heinz 1706’ was sequenced and assembled using a combination of Sanger and “next generation” technologies. The predicted genome size is ~900 Mb, consistent with prior estimates, of which 760 Mb were assembled in 91 scaffolds aligned to the 12 tomato chromosom...
Preliminary results of the global forest biomass survey
S. Healey; E. Lindquist
2014-01-01
Many countries do not yet have well-established national forest inventories, and among those that do, significant methodological differences exist, particularly in the estimation of standing forest biomass. Global space-based LiDAR (Light Detection and Ranging) from NASA's now-completed ICESat mission provided consistent, high-quality measures of canopy height and...
Hoffmann, Sandra; Devleesschauwer, Brecht; Aspinall, Willy; Cooke, Roger; Corrigan, Tim; Havelaar, Arie; Angulo, Frederick; Gibb, Herman; Kirk, Martyn; Lake, Robin; Speybroeck, Niko; Torgerson, Paul; Hald, Tine
2017-01-01
Recently, the World Health Organization Foodborne Disease Burden Epidemiology Reference Group (FERG) estimated that 31 foodborne diseases (FBDs) resulted in over 600 million illnesses and 420,000 deaths worldwide in 2010. Knowing the relative importance of different foods as exposure routes for key hazards is critical to preventing illness. This study reports the findings of a structured expert elicitation providing globally comparable food source attribution estimates for 11 major FBDs in each of 14 world subregions. We used Cooke's Classical Model to elicit and aggregate judgments of 73 international experts. Judgments were elicited from each expert individually and aggregated using both equal and performance weights. Performance-weighted results are reported, as they increased the informativeness of estimates while retaining accuracy. We report measures of central tendency and uncertainty bounds on food source attribution estimates. For some pathogens we see relatively consistent food source attribution estimates across subregions of the world; for others there is substantial regional variation. For example, for non-typhoidal salmonellosis, pork was of minor importance compared to eggs and poultry meat in the American and African subregions, whereas in the European and Western Pacific subregions the importance of these three food sources was quite similar. Our regional results broadly agree with estimates from earlier European and North American food source attribution research. As in prior food source attribution research, we find relatively wide uncertainty bounds around our median estimates. We present the first worldwide estimates of the proportion of specific foodborne diseases attributable to specific food exposure routes. While we find substantial uncertainty around central tendency estimates, we believe these estimates provide the best currently available basis on which to link FBDs and specific foods in many parts of the world, providing guidance for policy actions to control FBDs.
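A minimal sketch of the aggregation idea, assuming calibration and information scores have already been computed from seed questions (the computationally involved part of Cooke's method): experts' attribution estimates are pooled with performance weights and poorly calibrated experts are unweighted. All numbers are invented.

```python
# Hedged sketch: performance-weighted linear pooling in the spirit of
# Cooke's Classical Model. Calibration/information scores are supplied
# directly here rather than derived from seed questions.
import numpy as np

def performance_weighted_pool(estimates, calibration, information, cutoff=0.05):
    """estimates: (n_experts, n_foods) attribution fractions per expert."""
    w = calibration * information
    w = np.where(calibration >= cutoff, w, 0.0)  # unweight poorly calibrated experts
    w = w / w.sum()
    pooled = w @ estimates                       # weighted linear pool
    return pooled / pooled.sum()                 # renormalize to fractions

experts = np.array([[0.5, 0.3, 0.2],             # e.g., eggs, poultry, pork
                    [0.4, 0.4, 0.2],
                    [0.2, 0.3, 0.5]])
print(performance_weighted_pool(experts,
                                calibration=np.array([0.60, 0.30, 0.01]),
                                information=np.array([1.2, 0.8, 2.0])))
```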
Gorfine, Malka; Bordo, Nadia; Hsu, Li
2017-01-01
Consider a popular case–control family study where individuals with a disease under study (case probands) and individuals who do not have the disease (control probands) are randomly sampled from a well-defined population. Possibly right-censored age at onset and disease status are observed for both probands and their relatives. For example, case probands are men diagnosed with prostate cancer, control probands are men free of prostate cancer, and the prostate cancer history of the fathers of the probands is also collected. Inherited genetic susceptibility, shared environment, and common behavior lead to correlation among the outcomes within a family. In this article, a novel nonparametric estimator of the marginal survival function is provided. The estimator is defined in the presence of intra-cluster dependence, and is based on consistent smoothed kernel estimators of conditional survival functions. By simulation, it is shown that the proposed estimator performs very well in terms of bias. The utility of the estimator is illustrated by the analysis of case–control family data of early onset prostate cancer. To our knowledge, this is the first article that provides a fully nonparametric marginal survival estimator based on case–control clustered age-at-onset data. PMID:27436674
Stretchable, Flexible, Scalable Smart Skin Sensors for Robotic Position and Force Estimation.
O'Neill, John; Lu, Jason; Dockter, Rodney; Kowalewski, Timothy
2018-03-23
The design and validation of a continuously stretchable and flexible skin sensor for collaborative robotic applications is outlined. The sensor consists of a PDMS skin doped with carbon nanotubes and augmented with conductive fabric, connected by only five wires to a simple microcontroller. The accuracy is characterized in position as well as force, and the skin is also tested under uniaxial stretch. Two practical implementations in collaborative robotic applications are also demonstrated. The stationary position estimate has an RMSE of 7.02 mm, and the sensor error stays within 2.5 ± 1.5 mm even under stretch. The skin consistently provides an emergency stop command at only 0.5 N of force and is shown to maintain a collaboration force of 10 N in a collaborative control experiment.
Source Detection with Bayesian Inference on ROSAT All-Sky Survey Data Sample
NASA Astrophysics Data System (ADS)
Guglielmetti, F.; Voges, W.; Fischer, R.; Boese, G.; Dose, V.
2004-07-01
We employ Bayesian inference for the joint estimation of sources and background on ROSAT All-Sky Survey (RASS) data. The probabilistic method improves detection of faint, extended celestial sources compared to the Standard Analysis Software System (SASS). Background maps were estimated in a single step together with the detection of sources, without pixel censoring. Consistent uncertainties of background and sources are provided. The source probability is evaluated for single pixels as well as for pixel domains to enhance detection of weak and extended sources.
Combined Heat and Power Market Potential for Opportunity Fuels
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, David; Lemar, Paul
This report estimates the potential for opportunity-fuel combined heat and power (CHP) applications in the United States and compares the technical and economic market potential with estimates from an earlier report. An opportunity fuel is any type of fuel that is not widely used compared to traditional fossil fuels. Opportunity fuels primarily consist of biomass fuels, industrial waste products, and fossil fuel derivatives. These fuels have the potential to be an economically viable source of power generation in various CHP applications.
Estimating the Entropy of Binary Time Series: Methodology, Some Theory and a Simulation Study
NASA Astrophysics Data System (ADS)
Gao, Yun; Kontoyiannis, Ioannis; Bienenstock, Elie
2008-06-01
Partly motivated by entropy-estimation problems in neuroscience, we present a detailed and extensive comparison between some of the most popular and effective entropy estimation methods used in practice: The plug-in method, four different estimators based on the Lempel-Ziv (LZ) family of data compression algorithms, an estimator based on the Context-Tree Weighting (CTW) method, and the renewal entropy estimator. METHODOLOGY: Three new entropy estimators are introduced; two new LZ-based estimators, and the “renewal entropy estimator,” which is tailored to data generated by a binary renewal process. For two of the four LZ-based estimators, a bootstrap procedure is described for evaluating their standard error, and a practical rule of thumb is heuristically derived for selecting the values of their parameters in practice. THEORY: We prove that, unlike their earlier versions, the two new LZ-based estimators are universally consistent, that is, they converge to the entropy rate for every finite-valued, stationary and ergodic process. An effective method is derived for the accurate approximation of the entropy rate of a finite-state hidden Markov model (HMM) with known distribution. Heuristic calculations are presented and approximate formulas are derived for evaluating the bias and the standard error of each estimator. SIMULATION: All estimators are applied to a wide range of data generated by numerous different processes with varying degrees of dependence and memory. The main conclusions drawn from these experiments include: (i) For all estimators considered, the main source of error is the bias. (ii) The CTW method is repeatedly and consistently seen to provide the most accurate results. (iii) The performance of the LZ-based estimators is often comparable to that of the plug-in method. (iv) The main drawback of the plug-in method is its computational inefficiency; with small word-lengths it fails to detect longer-range structure in the data, and with longer word-lengths the empirical distribution is severely undersampled, leading to large biases.
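For concreteness, the plug-in method mentioned above can be written in a few lines: estimate the empirical distribution of length-w words and divide the block entropy by w. The word length w is the key tuning parameter; the abstract's caveats about undersampling at large w apply directly to this sketch.

```python
# Hedged sketch of the plug-in estimator: empirical length-w word
# frequencies give a block entropy, which divided by w estimates the
# entropy rate. Structure longer than w symbols is invisible to it.
from collections import Counter
import math
import random

def plugin_entropy_rate(bits, w=8):
    words = [tuple(bits[i:i + w]) for i in range(len(bits) - w + 1)]
    counts = Counter(words)
    n = len(words)
    return -sum((c / n) * math.log2(c / n) for c in counts.values()) / w

random.seed(0)
bits = [random.randint(0, 1) for _ in range(10_000)]
print(plugin_entropy_rate(bits, w=8))  # close to 1 bit/symbol for a fair coin
```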
Measurement properties of the WOMAC LK 3.1 pain scale.
Stratford, P W; Kennedy, D M; Woodhouse, L J; Spadoni, G F
2007-03-01
The Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC) is applied extensively to patients with osteoarthritis of the hip or knee. Previous work has challenged the validity of its physical function scale; however, an extensive evaluation of its pain scale has not been reported. Our purpose was to estimate internal consistency, factorial validity, test-retest reliability, and the standard error of measurement (SEM) of the WOMAC LK 3.1 pain scale. Four hundred and seventy-four patients with osteoarthritis of the hip or knee awaiting arthroplasty were administered the WOMAC. Estimates of internal consistency (coefficient alpha), factorial validity (confirmatory factor analysis), and the SEM based on internal consistency (SEM(IC)) were obtained. Test-retest reliability [Type 2,1 intraclass correlation coefficients (ICC)] and a corresponding SEM(TRT) were estimated on a subsample of 36 patients. Our estimates were: internal consistency alpha=0.84; SEM(IC)=1.48; Type 2,1 ICC=0.77; SEM(TRT)=1.69. Confirmatory factor analysis failed to support a single-factor structure of the pain scale with uncorrelated error terms. Two comparable models provided excellent fit: (1) a model with correlated error terms between the walking and stairs items, and between the night and sit items (chi2=0.18, P=0.98); (2) a two-factor model with walking and stairs items loading on one factor, night and sit items loading on a second factor, and the standing item loading on both factors (chi2=0.18, P=0.98). Our examination of the factorial structure of the WOMAC pain scale failed to support a single factor, and internal consistency analysis yielded a coefficient less than optimal for individual patient use. An alternate strategy to summing the five item responses when considering individual patient application would be to interpret item responses separately or to sum only those items that display homogeneity.
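The internal-consistency quantities reported above follow from standard formulas, sketched below with fabricated item scores (not the study's data): Cronbach's alpha over the five pain items and SEM(IC) = SD × sqrt(1 − alpha).

```python
# Hedged sketch with fabricated 0-4 Likert item scores:
# Cronbach's alpha and SEM = SD * sqrt(1 - alpha).
import numpy as np

def cronbach_alpha(items):
    """items: (n_patients, n_items) array of item scores."""
    k = items.shape[1]
    return (k / (k - 1)) * (1 - items.var(axis=0, ddof=1).sum()
                            / items.sum(axis=1).var(ddof=1))

rng = np.random.default_rng(1)
true_pain = rng.normal(2.0, 1.0, size=(474, 1))            # shared construct
scores = np.clip(np.rint(true_pain + rng.normal(0, 0.7, (474, 5))), 0, 4)
alpha = cronbach_alpha(scores)
sem_ic = scores.sum(axis=1).std(ddof=1) * np.sqrt(1 - alpha)
print(f"alpha = {alpha:.2f}, SEM(IC) = {sem_ic:.2f}")
```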
Benchmarking real-time RGBD odometry for light-duty UAVs
NASA Astrophysics Data System (ADS)
Willis, Andrew R.; Sahawneh, Laith R.; Brink, Kevin M.
2016-06-01
This article describes the theoretical and implementation challenges associated with generating 3D odometry estimates (delta-pose) from RGBD sensor data in real-time to facilitate navigation in cluttered indoor environments. The underlying odometry algorithm applies to general 6DoF motion; however, the computational platforms, trajectories, and scene content are motivated by their intended use on indoor, light-duty UAVs. Discussion outlines the overall software pipeline for sensor processing and details how algorithm choices for the underlying feature detection and correspondence computation impact the real-time performance and accuracy of the estimated odometry and associated covariance. This article also explores the consistency of odometry covariance estimates and the correlation between successive odometry estimates. The analysis is intended to provide users information needed to better leverage RGBD odometry within the constraints of their systems.
A New Monte Carlo Method for Estimating Marginal Likelihoods.
Wang, Yu-Bo; Chen, Ming-Hui; Kuo, Lynn; Lewis, Paul O
2018-06-01
Evaluating the marginal likelihood in Bayesian analysis is essential for model selection. Estimators based on a single Markov chain Monte Carlo sample from the posterior distribution include the harmonic mean estimator and the inflated density ratio estimator. We propose a new class of Monte Carlo estimators based on this single Markov chain Monte Carlo sample. This class can be thought of as a generalization of the harmonic mean and inflated density ratio estimators using a partition weighted kernel (likelihood times prior). We show that our estimator is consistent and has better theoretical properties than the harmonic mean and inflated density ratio estimators. In addition, we provide guidelines on choosing optimal weights. Simulation studies were conducted to examine the empirical performance of the proposed estimator. We further demonstrate the desirable features of the proposed estimator with two real data sets: one is from a prostate cancer study using an ordinal probit regression model with latent variables; the other is for the power prior construction from two Eastern Cooperative Oncology Group phase III clinical trials using the cure rate survival model with similar objectives.
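For context, the harmonic mean estimator the paper generalizes can be sketched as follows on a toy conjugate model; it uses only posterior draws and the likelihood, and its well-known instability is part of the motivation for the proposed partition weighted kernel estimator. The model and sample sizes are illustrative.

```python
# Hedged sketch of the (unstable) harmonic mean estimator on a toy
# conjugate model y ~ N(theta, 1), theta ~ N(0, 1); the posterior is known
# exactly, so real MCMC is unnecessary for the illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
y = rng.normal(0.5, 1.0, size=50)
post_var = 1.0 / (1.0 + len(y))
post_mean = post_var * y.sum()
theta = rng.normal(post_mean, np.sqrt(post_var), size=20000)  # "posterior draws"

loglik = stats.norm.logpdf(y[None, :], theta[:, None], 1.0).sum(axis=1)
# log m(y) ~= log n - logsumexp(-loglik), computed stably in log space
log_hm = np.log(len(theta)) - np.logaddexp.reduce(-loglik)
print("harmonic mean estimate of log m(y):", log_hm)
```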
Polar Motion Constraints on Models of the Fortnightly Tide
NASA Technical Reports Server (NTRS)
Ray, Richard D.; Egbert, G. D.; Smith, David E. (Technical Monitor)
2002-01-01
Estimates of the near-fortnightly Mf ocean tide from Topex/Poseidon satellite altimetry and from numerical solutions to the shallow water equations agree reasonably well, at least in their basin-scale features. For example, both show that the Pacific Ocean tide lags the Atlantic tide by roughly 30 degrees. There are hints of finer scale agreements in the elevation fields, but noise levels are high. In contrast, estimates of Mf currents are only weakly constrained by the T/P data, because high-wavenumber Rossby waves (with intense currents) are associated with relatively small perturbations in surface elevation. As a result, a wide range of Mf current fields are consistent with both the T/P data and the hydrodynamic equations within a priori plausible misfit bounds. We find that a useful constraint on the Mf currents is provided by independent estimates of the Earth's polar motion. At the Mf period polar motion shows a weak signal (both prograde and retrograde) which must be almost entirely caused by the ocean tide. We have estimated this signal from the SPACE2000 time series, after applying a broad-band correction for atmospheric angular momentum. Although the polar motion estimates have relatively large uncertainties, they are sufficiently precise to fix optimum data weights in a global ocean inverse model of Mf. These weights control the tradeoff between fitting a prior hydrodynamic model of Mf and fitting the relatively noisy T/P measurements of Mf. The predicted polar motion from the final inverse model agrees remarkably well with the Mf polar motion observations. The preferred model is also consistent with noise levels suggested by island gauges, and it is marginally consistent with differences observed by subsetting the altimetry (to the small extent that this is possible). In turn, this new model of the Mf ocean tide allows the ocean component to be removed from Mf estimates of length of day, thus yielding estimates of complex Love numbers less contaminated by oceanic effects than has hitherto been possible.
NASA Astrophysics Data System (ADS)
Ait-El-Fquih, Boujemaa; El Gharamti, Mohamad; Hoteit, Ibrahim
2016-08-01
Ensemble Kalman filtering (EnKF) is an efficient approach to addressing uncertainties in subsurface groundwater models. The EnKF sequentially integrates field data into simulation models to obtain a better characterization of the model's state and parameters. These are generally estimated following joint and dual filtering strategies, in which, at each assimilation cycle, a forecast step by the model is followed by an update step with incoming observations. The joint EnKF directly updates the augmented state-parameter vector, whereas the dual EnKF empirically employs two separate filters, first estimating the parameters and then estimating the state based on the updated parameters. To develop a Bayesian-consistent dual approach and improve the state-parameter estimates and their consistency, we propose in this paper a one-step-ahead (OSA) smoothing formulation of the state-parameter Bayesian filtering problem, from which we derive a new dual-type EnKF, the dual EnKF_OSA. Compared with the standard dual EnKF, it imposes a new update step to the state, which is shown to enhance the performance of the dual approach with almost no increase in the computational cost. Numerical experiments are conducted with a two-dimensional (2-D) synthetic groundwater aquifer model to investigate the performance and robustness of the proposed dual EnKF_OSA, and to evaluate its results against those of the joint and dual EnKFs. The proposed scheme is able to successfully recover both the hydraulic head and the aquifer conductivity, providing further reliable estimates of their uncertainties. Furthermore, it is found to be more robust to different assimilation settings, such as the spatial and temporal distribution of the observations, and the level of noise in the data. Based on our experimental setups, it yields up to 25% more accurate state and parameter estimations than the joint and dual approaches.
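A schematic of one dual-EnKF assimilation cycle is sketched below, assuming a generic model x_k = M(x_{k-1}, p) and linear observations; the parameter ensemble is updated first via its correlation with predicted observations, then the state is re-forecast with the updated parameters and updated in turn. This is the standard dual scheme; the paper's OSA-smoothing state update is only indicated by a comment, not implemented.

```python
# Hedged sketch of one dual-EnKF cycle for x_k = M(x_{k-1}, p) with
# observations y = Hx + noise. Model, sizes, and noise levels are toys.
import numpy as np

def enkf_update(ens, H, obs, R, rng):
    """Stochastic EnKF analysis step; ens has shape (n_members, n_dim)."""
    A = ens - ens.mean(axis=0)
    P = A.T @ A / (len(ens) - 1)
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    pert_obs = obs + rng.multivariate_normal(np.zeros(len(R)), R, len(ens))
    return ens + (pert_obs - ens @ H.T) @ K.T

def dual_enkf_cycle(x_ens, p_ens, model, H, obs, R, rng):
    # Parameter step: forecast with current parameters, then correct the
    # parameters through their correlation with predicted observations.
    x_fore = np.array([model(x, p) for x, p in zip(x_ens, p_ens)])
    n_p = p_ens.shape[1]
    aug = np.hstack([p_ens, x_fore])
    H_aug = np.hstack([np.zeros((H.shape[0], n_p)), H])
    p_new = enkf_update(aug, H_aug, obs, R, rng)[:, :n_p]
    # State step: re-forecast with updated parameters, then update states.
    x_refore = np.array([model(x, p) for x, p in zip(x_ens, p_new)])
    x_new = enkf_update(x_refore, H, obs, R, rng)
    # (The dual EnKF_OSA would insert its additional, Bayesian-consistent
    # state update here.)
    return x_new, p_new

rng = np.random.default_rng(0)
model = lambda x, p: p * x                       # toy AR(1) "aquifer"
x_ens = rng.normal(1.0, 0.5, size=(100, 1))
p_ens = rng.normal(0.8, 0.2, size=(100, 1))
H = np.array([[1.0]]); R = np.array([[0.05]])
x_ens, p_ens = dual_enkf_cycle(x_ens, p_ens, model, H, np.array([0.9]), R, rng)
print(x_ens.mean(), p_ens.mean())
```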
Novel applications of the temporal kernel method: Historical and future radiative forcing
NASA Astrophysics Data System (ADS)
Portmann, R. W.; Larson, E.; Solomon, S.; Murphy, D. M.
2017-12-01
We present a new estimate of the historical radiative forcing derived from the observed global mean surface temperature and a model-derived kernel function. Current estimates of historical radiative forcing are usually derived from climate models. Despite large variability in these models, the multi-model mean tends to do a reasonable job of representing the Earth system and climate. One method of diagnosing the transient radiative forcing in these models requires model output of top-of-the-atmosphere (TOA) radiative imbalance and global mean temperature anomaly. It is difficult to apply this method to historical observations due to the lack of TOA radiative measurements before CERES. We apply the temporal kernel method (TKM) of calculating radiative forcing to the historical global mean temperature anomaly. This novel approach is compared against the current regression-based methods using model outputs and shown to produce consistent forcing estimates, giving confidence in the forcing derived from the historical temperature record. The derived TKM radiative forcing provides an estimate of the forcing time series that the average climate model needs to produce the observed temperature record. This forcing time series is found to be in good overall agreement with previous estimates but includes significant differences that will be discussed. The historical anthropogenic aerosol forcing is estimated as a residual from the TKM and found to be consistent with earlier moderate forcing estimates. In addition, this method is applied to future temperature projections to estimate the radiative forcing required to achieve those temperature goals, such as those set in the Paris Agreement.
DOE Office of Scientific and Technical Information (OSTI.GOV)
DeChant, Lawrence Justin; Smith, Justin A.
Here we discuss an improved Corcos (1963)-style cross-spectral density utilizing zero-pressure-gradient, supersonic (Beresh et al. (2013)) data sets. Using the connection between narrow-band measurements and the broadband cross-spectral density, i.e., Γ(ξ, η, ω) = Φ(ω) A(ωξ/U) B(ωη/U) exp(−iωξ/U), we focus on estimating coherence expressions of the form A(ξω_nb/U) and B(ηω_nb/U), where ω_nb denotes the narrow-band frequency, i.e., the band center frequency value, and ξ and η are sensor spacings in the streamwise/longitudinal and cross-stream/lateral directions, respectively. A methodology to estimate the parameters is discussed that retains the Corcos exponential functional form, A(ξω/U) = exp(−k_long ξω/U) and B(ηω/U) = exp(−k_lat ηω/U), but identifies new parameters (constants) consistent with the Beresh et al. data sets. The Corcos result requires that the data be properly explained by the self-similar variables ξω/U and ηω/U. The longitudinal (streamwise) variable ξω/U tends to provide a better data collapse, while, consistent with the literature, the lateral ηω/U is only successful for higher band center frequencies. Assuming the similarity variables provide a useful description of the data, the longitudinal coherence decay constant obtained from the Beresh et al. data sets is k_long ≈ 0.28-0.36, approximately 3x larger than the "traditional" (low-speed, large-Reynolds-number, zero-pressure-gradient) value of k_long ≈ 0.11. We suggest that the most likely reason the Beresh et al. data sets incur increased longitudinal decay, which results in reduced coherence lengths, is wall-shear-induced compression causing an adverse pressure gradient. Focusing on the higher band center frequency measurements where the frequency-dependent similarity variables are applicable, the lateral or transverse coherence decay constant k_lat ≈ 0.7 is consistent with the "traditional" (low-speed, large-Reynolds-number, zero-pressure-gradient) value. It should be noted that the longitudinal/streamwise coherence decay deviates from the value observed by other researchers, while the lateral/cross-stream value is consistent with what has been observed by other researchers. We believe that while the measurements used to obtain the new decay constant estimates are from internal wind tunnel tests, they likely provide a useful estimate of expected reentry flow behavior and are therefore recommended for use. These data could also be useful in determining the uncertainty of correlation length for an uncertainty quantification (UQ) analysis.
Chan, Woei-Leong; Hsiao, Fei-Bin
2011-01-01
This paper presents a complete procedure for sensor compatibility correction of a fixed-wing Unmanned Air Vehicle (UAV). The sensors consist of a differential air pressure transducer for airspeed measurement, two airdata vanes installed on an airdata probe for angle of attack (AoA) and angle of sideslip (AoS) measurement, and an Attitude and Heading Reference System (AHRS) that provides attitude angles, angular rates, and acceleration. The procedure is mainly based on a two-pass algorithm called the Rauch-Tung-Striebel (RTS) smoother, which consists of a forward-pass Extended Kalman Filter (EKF) and a backward recursion smoother. On top of that, this paper proposes the implementation of a Wiener-type filter prior to the RTS in order to avoid the complicated process noise covariance matrix estimation. Furthermore, an easy-to-implement airdata measurement noise variance estimation method is introduced. The method estimates the airdata and subsequently the noise variances using the ground speed and ascent rate provided by the Global Positioning System (GPS). It incorporates the idea of data regionality by assuming that some sort of statistical relation exists between nearby data points. Root mean square deviation (RMSD) is employed to justify the sensor compatibility. The result shows that the presented procedure is easy to implement and improves the UAV sensor data compatibility significantly. PMID:22163819
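The two-pass structure described above can be sketched in its linear form (the paper's forward pass is an EKF): a forward Kalman filter stores filtered and predicted moments, and a backward Rauch-Tung-Striebel recursion refines each estimate using all the data. The matrices and the toy tracking example are illustrative.

```python
# Hedged sketch of the linear RTS smoother: forward Kalman filter pass,
# then a backward smoothing recursion over the stored moments.
import numpy as np

def kalman_rts(y, F, H, Q, R, x0, P0):
    n, dx = len(y), len(x0)
    xf = np.zeros((n, dx)); Pf = np.zeros((n, dx, dx))  # filtered moments
    xp = np.zeros((n, dx)); Pp = np.zeros((n, dx, dx))  # predicted moments
    x, P = x0, P0
    for k in range(n):                                  # forward pass
        xp[k], Pp[k] = F @ x, F @ P @ F.T + Q
        K = Pp[k] @ H.T @ np.linalg.inv(H @ Pp[k] @ H.T + R)
        x = xp[k] + K @ (y[k] - H @ xp[k])
        P = (np.eye(dx) - K @ H) @ Pp[k]
        xf[k], Pf[k] = x, P
    xs, Ps = xf.copy(), Pf.copy()                       # backward pass
    for k in range(n - 2, -1, -1):
        C = Pf[k] @ F.T @ np.linalg.inv(Pp[k + 1])
        xs[k] = xf[k] + C @ (xs[k + 1] - xp[k + 1])
        Ps[k] = Pf[k] + C @ (Ps[k + 1] - Pp[k + 1]) @ C.T
    return xs, Ps

# Toy constant-velocity track with noisy position measurements.
rng = np.random.default_rng(0)
F = np.array([[1.0, 1.0], [0.0, 1.0]]); H = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2); R = np.array([[0.5]])
y = [np.array([k + rng.normal(0, 0.7)]) for k in range(50)]
xs, _ = kalman_rts(y, F, H, Q, R, np.zeros(2), np.eye(2))
print(xs[:3, 0])
```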
Stevens, Stewart G.; Brown, Chris M
2013-01-01
Recently, large-scale transcriptome and proteome datasets for human cells have become available. A striking finding from these studies is that the level of an mRNA typically predicts no more than 40% of the abundance of protein. This correlation represents the overall figure for all genes. We present here a bioinformatic analysis of translation efficiency – the rate at which mRNA is translated into protein. We have analysed those human datasets that include genome-wide mRNA and protein levels determined in the same study. The analysis comprises five distinct human cell lines that together provide comparable data for 8,170 genes. For each gene we have used levels of mRNA and protein combined with protein stability data from the HeLa cell line to estimate translation efficiency. This was possible for 3,990 genes in one or more cell lines and 1,807 genes in all five cell lines. Interestingly, our analysis and modelling shows that for many genes this estimated translation efficiency has considerable consistency between cell lines. Some deviations from this consistency likely result from the regulation of protein degradation. Others are likely due to known translational control mechanisms. These findings suggest it will be possible to build improved models for the interpretation of mRNA expression data. The results we present here provide a view of translation efficiency for many genes. We provide an online resource allowing the exploration of translation efficiency in genes of interest within different cell lines (http://bioanalysis.otago.ac.nz/TranslationEfficiency). PMID:23460887
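A minimal sketch of the steady-state logic behind such estimates: if protein P follows dP/dt = k_s·M − k_d·P, then at steady state the per-mRNA synthesis rate is k_s = k_d·P/M, so translation efficiency can be estimated from abundances plus protein stability. The numbers below are invented, not from the datasets analysed.

```python
# Hedged sketch: steady-state translation efficiency from mRNA level,
# protein level, and a protein degradation rate (from stability data).
import numpy as np

def translation_efficiency(protein, mrna, k_deg):
    """Per-mRNA protein synthesis rate under a steady-state assumption."""
    return k_deg * protein / mrna

protein = np.array([1e5, 4e4, 8e6])       # protein copies per cell (toy)
mrna = np.array([50.0, 120.0, 900.0])     # mRNA copies per cell (toy)
k_deg = np.array([0.05, 0.30, 0.02])      # protein decay rates, 1/h (toy)
print(translation_efficiency(protein, mrna, k_deg))  # proteins per mRNA per hour
```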
Volcanic forcing for climate modeling: a new microphysics-based data set covering years 1600-present
NASA Astrophysics Data System (ADS)
Arfeuille, F.; Weisenstein, D.; Mack, H.; Rozanov, E.; Peter, T.; Brönnimann, S.
2014-02-01
As the understanding and representation of the impacts of volcanic eruptions on climate have improved in the last decades, uncertainties in the stratospheric aerosol forcing from large eruptions are now linked not only to visible optical depth estimates on a global scale but also to details on the size, latitude and altitude distributions of the stratospheric aerosols. Based on our understanding of these uncertainties, we propose a new model-based approach to generating a volcanic forcing for general circulation model (GCM) and chemistry-climate model (CCM) simulations. This new volcanic forcing, covering the 1600-present period, uses an aerosol microphysical model to provide a realistic, physically consistent treatment of the stratospheric sulfate aerosols. Twenty-six eruptions were modeled individually using the latest available ice-core aerosol mass estimates and historical data on the latitude and date of eruptions. The evolution of the aerosol spatial and size distributions after the sulfur dioxide discharge is hence characterized for each volcanic eruption. Large variations are seen in hemispheric partitioning and size distributions in relation to the location/date of eruptions and injected SO2 masses. Results for recent eruptions show reasonable agreement with observations. By providing these new estimates of spatial distributions of shortwave and longwave radiative perturbations, this volcanic forcing may help to better constrain the climate model responses to volcanic eruptions in the 1600-present period. The final data set consists of 3-D values (with constant longitude) of spectrally resolved extinction coefficients, single scattering albedos and asymmetry factors calculated for different wavelength bands upon request. Surface area densities for heterogeneous chemistry are also provided.
Schiavazzi, Daniele E.; Baretta, Alessia; Pennati, Giancarlo; Hsia, Tain-Yen; Marsden, Alison L.
2017-01-01
Computational models of cardiovascular physiology can inform clinical decision-making, providing a physically consistent framework to assess vascular pressures and flow distributions, and aiding in treatment planning. In particular, lumped parameter network (LPN) models that make an analogy to electrical circuits offer a fast and surprisingly realistic method to reproduce the circulatory physiology. The complexity of LPN models can vary significantly to account, for example, for cardiac and valve function, respiration, autoregulation, and time-dependent hemodynamics. More complex models provide insight into detailed physiological mechanisms, but their utility is maximized if one can quickly identify patient specific parameters. The clinical utility of LPN models with many parameters will be greatly enhanced by automated parameter identification, particularly if parameter tuning can match non-invasively obtained clinical data. We present a framework for automated tuning of 0D lumped model parameters to match clinical data. We demonstrate the utility of this framework through application to single ventricle pediatric patients with Norwood physiology. Through a combination of local identifiability, Bayesian estimation and maximum a posteriori simplex optimization, we show the ability to automatically determine physiologically consistent point estimates of the parameters and to quantify uncertainty induced by errors and assumptions in the collected clinical data. We show that multi-level estimation, that is, updating the parameter prior information through sub-model analysis, can lead to a significant reduction in the parameter marginal posterior variance. We first consider virtual patient conditions, with clinical targets generated through model solutions, and second application to a cohort of four single-ventricle patients with Norwood physiology. PMID:27155892
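The circuit analogy can be made concrete with the smallest possible LPN, a two-element Windkessel (pressure as voltage, flow as current): dP/dt = (Q_in − P/R)/C. The sketch below integrates it with forward Euler; R, C, and the inflow waveform are illustrative, not patient-tuned.

```python
# Hedged sketch of the smallest LPN: a two-element Windkessel where
# pressure plays voltage and flow plays current, dP/dt = (Q_in - P/R)/C.
import numpy as np

def windkessel(q_in, t, R=1.0, C=1.5, p0=80.0):
    """Forward-Euler integration of dP/dt = (Q_in(t) - P/R) / C."""
    p = np.empty_like(t)
    p[0] = p0
    for k in range(1, len(t)):
        dt = t[k] - t[k - 1]
        p[k] = p[k - 1] + dt * (q_in(t[k - 1]) - p[k - 1] / R) / C
    return p

t = np.linspace(0.0, 5.0, 5001)  # seconds
inflow = lambda s: 90.0 + 300.0 * max(np.sin(2 * np.pi * s), 0.0) ** 2
print(windkessel(inflow, t)[-5:])  # pressure settles into a periodic orbit
```

Patient-specific tuning as described above would treat R and C as unknowns and wrap a forward model like this in the Bayesian estimation loop.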
NASA Astrophysics Data System (ADS)
Hayward, Bruce W.; Grenfell, Hugh R.; Sabaa, Ashwaq T.; Kay, Jon; Daymond-King, Rhiannon; Cochran, Ursula
2010-03-01
This paper provides the first solid evidence in support of a century-old hypothesis that the mountainous Marlborough Sounds region in central New Zealand is subsiding. More recent hypotheses suggest that this may be a result of southward migration of a slab of subducted Pacific Plate causing flexural downwarping of the overlying crust in the vicinity of the transition between subduction and strike-slip on the Pacific-Australian plate boundary. The proxy evidence for gradual Holocene subsidence comes from micropaleontological study of seven intertidal sediment cores from the inner Marlborough Sounds (at Havelock, Mahau Sound and Shakespeare Bay). Quantitative estimates (using the Modern Analogue Technique) of former tidal elevations based on fossil foraminiferal faunas provide evidence of tectonic (not compaction-related) subsidence in all cores. Estimates of subsidence rates for individual cores vary within the range 0.2-2.4 m ka⁻¹. The wide variation in subsidence rate estimates is related to a combination of the accuracy limits of radiocarbon dates, elevation estimates, and particularly our poor knowledge of the New Zealand Holocene sea-level curve. The most consistent subsidence rate at all three sites for the mid-late Holocene (last 6-7 ka) is ~0.7-0.8 m ka⁻¹. This rate is consistent with the average subsidence rate in the adjacent 4-km-thick Wanganui sedimentary basin for the last 5 myr. Subsidence is inferred to have migrated southwards from the Wanganui Basin to impinge on the inner Marlborough Sounds in just the last 100-200 ka.
Image denoising in mixed Poisson-Gaussian noise.
Luisier, Florian; Blu, Thierry; Unser, Michael
2011-03-01
We propose a general methodology (PURE-LET) to design and optimize a wide class of transform-domain thresholding algorithms for denoising images corrupted by mixed Poisson-Gaussian noise. We express the denoising process as a linear expansion of thresholds (LET) that we optimize by relying on a purely data-adaptive unbiased estimate of the mean-squared error (MSE), derived in a non-Bayesian framework (PURE: Poisson-Gaussian unbiased risk estimate). We provide a practical approximation of this theoretical MSE estimate for the tractable optimization of arbitrary transform-domain thresholding. We then propose a pointwise estimator for undecimated filterbank transforms, which consists of subband-adaptive thresholding functions with signal-dependent thresholds that are globally optimized in the image domain. We finally demonstrate the potential of the proposed approach through extensive comparisons with state-of-the-art techniques that are specifically tailored to the estimation of Poisson intensities. We also present denoising results obtained on real images of low-count fluorescence microscopy.
Government conceptual estimating for contracting and management
NASA Technical Reports Server (NTRS)
Brown, J. A.
1986-01-01
The use of the Aerospace Price Book, a cost index, and conceptual cost estimating for cost-effective design and construction of space facilities is discussed. The price book consists of over 200 commonly used conceptual elements and 100 systems summaries of projects such as launch pads, processing facilities, and air locks. The cost index is composed of three divisions: (1) bid summaries of major Shuttle projects, (2) budget cost data sheets, and (3) cost management summaries; each of these divisions is described. Conceptual estimates of facilities and ground support equipment are required to provide the most probable project cost for budget, funding, and project approval purposes. Similar buildings, systems, and elements already designed are located in the cost index in order to make the best rough order of magnitude conceptual estimates for development of Space Shuttle facilities. An example displaying the applicability of the conceptual cost estimating procedure for the development of the KSC facilities is presented.
Comparison of Dynamic Contrast Enhanced MRI and Quantitative SPECT in a Rat Glioma Model
Skinner, Jack T.; Yankeelov, Thomas E.; Peterson, Todd E.; Does, Mark D.
2012-01-01
Pharmacokinetic modeling of dynamic contrast enhanced (DCE)-MRI data provides measures of the extracellular volume fraction (ve) and the volume transfer constant (Ktrans) in a given tissue. These parameter estimates may be biased, however, by confounding issues such as contrast agent and tissue water dynamics, or assumptions of vascularization and perfusion made by the commonly used model. In contrast to MRI, radiotracer imaging with SPECT is insensitive to water dynamics. A quantitative dual-isotope SPECT technique was developed to obtain an estimate of ve in a rat glioma model for comparison to the corresponding estimates obtained using DCE-MRI with a vascular input function (VIF) and reference region model (RR). Both DCE-MRI methods produced consistently larger estimates of ve in comparison to the SPECT estimates, and several experimental sources were postulated to contribute to these differences. PMID:22991315
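For reference, the kind of pharmacokinetic model referred to above can be sketched with the standard Tofts form, in which tissue concentration is Ktrans times the plasma input convolved with exp(−(Ktrans/ve)t). Whether this exact variant matches the study's "commonly used model" is an assumption, and the arterial input function below is a crude placeholder.

```python
# Hedged sketch of the standard Tofts model:
# C_t(t) = Ktrans * (Cp conv exp(-(Ktrans/ve) t)), discretized on a grid.
import numpy as np

def tofts_ct(t, ktrans, ve, cp):
    """Discrete convolution approximation of the standard Tofts model."""
    dt = t[1] - t[0]
    kernel = np.exp(-(ktrans / ve) * t)
    return ktrans * np.convolve(cp, kernel)[:len(t)] * dt

t = np.linspace(0, 10, 601)                    # minutes
cp = 5.0 * t * np.exp(-t / 1.5)                # toy plasma input function
ct = tofts_ct(t, ktrans=0.25, ve=0.35, cp=cp)  # Ktrans in 1/min, ve unitless
print(f"peak tissue concentration: {ct.max():.3f}")
```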
Comparative mean and extreme statistics for the TMPA and GPCP 1DD
NASA Astrophysics Data System (ADS)
Huffman, George; Adler, Robert; Bolvin, David; Nelkin, Eric
2010-05-01
The TRMM Multi-satellite Precipitation Analysis (TMPA) provides 0.25° × 0.25° 3-hourly estimates of precipitation in the latitude band 50° N-50° S for the years 1998-present, while the GEWEX/Global Precipitation Climatology Project (GPCP) One-Degree Daily (1DD) precipitation product provides 1° × 1° daily global estimates of precipitation for 1997-present. The TMPA incorporates all available (intercalibrated) microwave estimates of precipitation in addition to microwave-calibrated infrared (IR) estimates, while the 1DD consists of microwave-calibrated IR estimates in the band 40° N-40° S and TOVS (or AIRS) sounding-based estimates at higher latitudes. Both datasets are scaled by monthly raingauge analyses, but it should be emphasized that the day-to-day occurrence of precipitation is entirely based on the satellite data. Although the 1DD is somewhat more approximate than the TMPA, the 1DD can provide an important check on the mean and extreme results computed using the TMPA. In addition, the 1DD can provide results over the entire globe, while the TMPA only covers the tropics and mid-latitudes. Finally, the 1DD captures the entire 1997-1998 El Niño, while the TMPA only captures it from the beginning of 1998. The analysis presented here focuses on basic parameters that are stable and well-suited to comparison with station data or model estimates. These include means, frequency of precipitation, 95th percentile values, and the longest spans of consecutive dry days in a year. Both datasets are compared against a representative sample of stations around the globe for the available overlap period of 1998-2003. Overall, there is fair consistency between the 1DD and TMPA datasets, even accounting for differences in spatial scale. In addition to enhancing our confidence in the results previously reported, this comparison allows us to examine issues that are inherent in the two datasets. For example, the 1DD typically shows anomalously high fractional coverage in the latitude bands 40-50° N and 40-50° S. A review of the algorithm shows that this artifact results from a smoothing operator that is applied at these latitude bands to accommodate the transition from IR-based to sounding-based estimates. As well, the TMPA tends to have drier estimates than the 1DD at higher latitudes, ~40-50°, particularly in the winter hemisphere, where the microwave algorithms currently lack sensitivity to the reduced precipitation signals. The characteristic behavior of precipitation in the additional time/space coverage provided by the 1DD will be examined, considering its performance in the time/space overlap with the TMPA and available gauge data. The 1997 data provide crucial information about the early and middle phases of the significant 1997-1998 El Niño. The high-latitude results could be important for helping assess the conditions that the joint NASA/JAXA Global Precipitation Measurement (GPM) mission will observe.
Consistency assessment of rating curve data in various locations using Bidirectional Reach (BReach)
NASA Astrophysics Data System (ADS)
Van Eerdenbrugh, Katrien; Van Hoey, Stijn; Coxon, Gemma; Freer, Jim; Verhoest, Niko E. C.
2017-10-01
When estimating discharges through rating curves, temporal data consistency is a critical issue. In this research, consistency in stage-discharge data is investigated using a methodology called Bidirectional Reach (BReach), which departs from a definition of consistency commonly used in operational hydrology. A period is considered to be consistent if no consecutive and systematic deviations from a current situation occur that exceed observational uncertainty. Therefore, the capability of a rating curve model to describe a subset of the (chronologically sorted) data is assessed in each observation by indicating the outermost data points for which the rating curve model behaves satisfactorily. These points are called the maximum left or right reach, depending on the direction of the investigation. This temporal reach should not be confused with a spatial reach (indicating a part of a river). Changes in these reaches throughout the data series indicate possible changes in data consistency and, if not resolved, could introduce additional errors and biases. In this research, various measurement stations in the UK, New Zealand and Belgium are selected based on their significant historical ratings information and their specific characteristics related to data consistency. For each country, regional information is maximally used to estimate observational uncertainty. Based on this uncertainty, a BReach analysis is performed and, subsequently, results are validated against available knowledge about the history and behavior of the site. For all investigated cases, the methodology provides results that appear to be consistent with this knowledge of historical changes and thus facilitates a reliable assessment of (in)consistent periods in stage-discharge measurements. This assessment is not only useful for the analysis and determination of discharge time series, but also enhances applications based on these data (e.g., by informing hydrological and hydraulic model evaluation design about consistent time periods to analyze).
NASA Astrophysics Data System (ADS)
Hyer, E. J.; Schmidt, C. C.; Hoffman, J.; Giglio, L.; Peterson, D. A.
2013-12-01
Polar and geostationary satellites are used operationally for fire detection and smoke source estimation by many near-real-time operational users, including operational forecast centers around the globe. The input satellite radiance data are processed by data providers to produce Level-2 and Level-3 fire detection products, but processing these data into spatially and temporally consistent estimates of fire activity requires a substantial amount of additional processing. The most significant processing steps are correction for variable coverage of the satellite observations, and correction for conditions that affect the detection efficiency of the satellite sensors. We describe a system developed by the Naval Research Laboratory (NRL) that uses the full raster information from the entire constellation to diagnose detection opportunities, calculate corrections for factors such as angular dependence of detection efficiency, and generate global estimates of fire activity at spatial and temporal scales suitable for atmospheric modeling. By incorporating these improved fire observations, smoke emissions products, such as NRL's FLAMBE, are able to produce improved estimates of global emissions. This talk provides an overview of the system, demonstrates the achievable improvement over older methods, and describes challenges for near-real-time implementation.
Nagata, Tomohisa; Mori, Koji; Aratake, Yutaka; Ide, Hiroshi; Ishida, Hiromi; Nobori, Junichiro; Kojima, Reiko; Odagami, Kiminori; Kato, Anna; Tsutsumi, Akizumi; Matsuda, Shinya
2014-01-01
The aim of the present study was to develop standardized cost estimation tools that provide information to employers about occupational safety and health (OSH) activities for effective and efficient decision making in Japanese companies. We interviewed OSH staff members, including full-time professional occupational physicians, to list all OSH activities. Using activity-based costing, cost data were obtained from retrospective 1-year analyses of occupational safety and health costs in three manufacturing workplaces and of occupational health services costs in four manufacturing workplaces. We additionally verified the tools in four workplaces, including service businesses. We created the OSH and occupational health standardized cost estimation tools. OSH costs consisted of personnel costs, expenses, outsourcing costs and investments for 15 OSH activities. The tools provided accurate, relevant information on OSH activities and occupational health services. The standardized information obtained from our OSH and occupational health cost estimation tools can be used to manage OSH costs, make comparisons of OSH costs between companies and organizations, and help occupational health physicians and employers to determine the best course of action.
NASA Astrophysics Data System (ADS)
Newman, Andrew B.; Smith, Russell J.; Conroy, Charlie; Villaume, Alexa; van Dokkum, Pieter
2017-08-01
We present new observations of the three nearest early-type galaxy (ETG) strong lenses discovered in the SINFONI Nearby Elliptical Lens Locator Survey (SNELLS). Based on their lensing masses, these ETGs were inferred to have a stellar initial mass function (IMF) consistent with that of the Milky Way, not the bottom-heavy IMF that has been reported as typical for high-σ ETGs based on lensing, dynamical, and stellar population synthesis techniques. We use these unique systems to test the consistency of IMF estimates derived from different methods. We first estimate the stellar M*/L using lensing and stellar dynamics. We then fit high-quality optical spectra of the lenses using an updated version of the stellar population synthesis models developed by Conroy & van Dokkum. When examined individually, we find good agreement among these methods for one galaxy. The other two galaxies show 2-3σ tension with lensing estimates, depending on the dark matter contribution, when considering IMFs that extend to 0.08 M⊙. Allowing a variable low-mass cutoff or a nonparametric form of the IMF reduces the tension among the IMF estimates to <2σ. There is moderate evidence for a reduced number of low-mass stars in the SNELLS spectra, but no such evidence in a composite spectrum of matched-σ ETGs drawn from the SDSS. Such variation in the form of the IMF at low stellar masses (m ≲ 0.3 M⊙), if present, could reconcile lensing/dynamical and spectroscopic IMF estimates for the SNELLS lenses and account for their lighter M*/L relative to the mean matched-σ ETG. We provide the spectra used in this study to facilitate future comparisons.
Dose-volume histogram prediction using density estimation.
Skarpman Munter, Johanna; Sjölund, Jens
2015-09-07
Knowledge of what dose-volume histograms can be expected for a previously unseen patient could increase consistency and quality in radiotherapy treatment planning. We propose a machine learning method that uses previous treatment plans to predict such dose-volume histograms. The key to the approach is the framing of dose-volume histograms in a probabilistic setting. The training consists of estimating, from the patients in the training set, the joint probability distribution of some predictive features and the dose. The joint distribution immediately provides an estimate of the conditional probability of the dose given the values of the predictive features. The prediction consists of estimating, from the new patient, the distribution of the predictive features and marginalizing the conditional probability from the training over this. Integrating the resulting probability distribution for the dose yields an estimate of the dose-volume histogram. To illustrate how the proposed method relates to previously proposed methods, we use the signed distance to the target boundary as a single predictive feature. As a proof-of-concept, we predicted dose-volume histograms for the brainstems of 22 acoustic schwannoma patients treated with stereotactic radiosurgery, and for the lungs of 9 lung cancer patients treated with stereotactic body radiation therapy. Comparing with two previous attempts at dose-volume histogram prediction we find that, given the same input data, the predictions are similar. In summary, we propose a method for dose-volume histogram prediction that exploits the intrinsic probabilistic properties of dose-volume histograms. We argue that the proposed method makes up for some deficiencies in previously proposed methods, thereby potentially increasing ease of use, flexibility and ability to perform well with small amounts of training data.
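A minimal sketch of that pipeline, using Gaussian kernel density estimates as one possible density estimator (the paper's choice is not assumed): fit the joint density of (distance-to-target, dose) on training voxels, average the conditional dose density over the new patient's distance samples, and integrate to a cumulative DVH. All data are synthetic.

```python
# Hedged sketch: KDE-based conditional dose density, marginalized over a
# new patient's distance distribution, then integrated to a DVH.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)
dist = rng.uniform(0, 30, 1200)          # training voxels: distance to target
dose = np.clip(60 * np.exp(-dist / 8) + rng.normal(0, 2, 1200), 0, None)
joint = gaussian_kde(np.vstack([dist, dose]))   # p(distance, dose)
marg = gaussian_kde(dist)                       # p(distance)

new_dist = rng.uniform(0, 30, 300)              # new patient's organ voxels
q = marg(new_dist)
dose_grid = np.linspace(0, 65, 66)
# p(dose) for the new patient = mean over its voxels of p(dose | distance)
pdf = np.array([(joint(np.vstack([new_dist, np.full_like(new_dist, d)])) / q).mean()
                for d in dose_grid])
step = dose_grid[1] - dose_grid[0]
pdf /= pdf.sum() * step
dvh = np.clip(1 - np.cumsum(pdf) * step, 0, 1)  # volume fraction with dose >= d
print(np.round(dvh[::10], 3))
```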
Fernandes, Ricardo; Grootes, Pieter; Nadeau, Marie-Josée; Nehlich, Olaf
2015-07-14
The island cemetery site of Ostorf (Germany) consists of individual human graves containing Funnel Beaker ceramics dating to the Early or Middle Neolithic. However, previous isotope and radiocarbon analysis demonstrated that the Ostorf individuals had a diet rich in freshwater fish. The present study was undertaken to quantitatively reconstruct the diet of the Ostorf population and establish whether dietary habits are consistent with the traditional characterization of a Neolithic diet. Quantitative diet reconstruction was achieved through a novel approach consisting of the use of the Bayesian mixing model Food Reconstruction Using Isotopic Transferred Signals (FRUITS) to model isotope measurements from multiple dietary proxies (δ¹³C collagen, δ¹⁵N collagen, δ¹³C bioapatite, δ³⁴S methionine, ¹⁴C collagen). The accuracy of model estimates was verified by comparing the agreement between observed and estimated human dietary radiocarbon reservoir effects. Quantitative diet reconstruction estimates confirm that the Ostorf individuals had a high protein intake due to the consumption of fish and terrestrial animal products. However, FRUITS estimates also show that plant foods represented a significant source of calories. Observed and estimated human dietary radiocarbon reservoir effects are in good agreement provided that the aquatic reservoir effect at Lake Ostorf is taken as reference. The Ostorf population apparently adopted elements associated with a Neolithic culture but adapted to available local food resources and implemented a subsistence strategy that involved a large proportion of fish and terrestrial meat consumption. This case study exemplifies the diversity of subsistence strategies followed during the Neolithic.
Consumer product chemical weight fractions from ingredient lists.
Isaacs, Kristin K; Phillips, Katherine A; Biryol, Derya; Dionisio, Kathie L; Price, Paul S
2018-05-01
Assessing human exposures to chemicals in consumer products requires composition information. However, comprehensive composition data for products in commerce are not generally available. Many consumer products have reported ingredient lists that are constructed using specific guidelines. A probabilistic model was developed to estimate quantitative weight fraction (WF) values that are consistent with the rank of an ingredient in the list, the number of reported ingredients, and labeling rules. The model provides the mean, median, and 95% upper and lower confidence limit WFs for ingredients of any rank in lists of any length. WFs predicted by the model compared favorably with those reported on Material Safety Data Sheets. Predictions for chemicals known to provide specific functions in products were also found to reasonably agree with reported WFs. The model was applied to a selection of publicly available ingredient lists, thereby estimating WFs for 1293 unique ingredients in 1123 products in 81 product categories. Predicted WFs, although less precise than reported values, can be estimated for large numbers of product-chemical combinations and thus provide a useful source of data for high-throughput or screening-level exposure assessments.
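One simple reading of the probabilistic model can be sketched as follows, assuming only the descending-order labeling rule: draw compositions uniformly from the simplex, sort to impose the ordering, and summarize each rank. The actual model also encodes further labeling rules (e.g., thresholds below marker ingredients), which this sketch omits.

```python
# Hedged sketch: Monte Carlo weight-fraction statistics by ingredient rank,
# under a uniform-on-the-simplex assumption with descending order imposed.
import numpy as np

def wf_by_rank(n_ingredients, n_draws=100_000, seed=3):
    rng = np.random.default_rng(seed)
    w = rng.dirichlet(np.ones(n_ingredients), size=n_draws)
    w = np.sort(w, axis=1)[:, ::-1]          # enforce descending list order
    return {"mean": w.mean(axis=0),
            "median": np.median(w, axis=0),
            "lo95": np.percentile(w, 2.5, axis=0),
            "hi95": np.percentile(w, 97.5, axis=0)}

stats = wf_by_rank(8)                        # an 8-ingredient label
print(np.round(stats["median"], 3))          # median WF for ranks 1..8
```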
Capesius, Joseph P.; Arnold, L. Rick
2012-01-01
The Mass Balance results were so variable over time that they appeared inconsistent with the concept of groundwater flow as gradual and slow. The large degree of variability in the day-to-day and month-to-month Mass Balance results is likely the result of many factors. These factors could include ungaged stream inflows or outflows, short-term streamflow losses to and gains from temporary bank storage, and any lag in streamflow accounting owing to the lag time of flow within a reach. The Pilot Point time series results were much less variable than the Mass Balance results, and extreme values were effectively constrained. Less day-to-day variability, smaller-magnitude extreme values, and smoother transitions in base-flow estimates provided by the Pilot Point method are more consistent with a conceptual model of groundwater flow as gradual and slow. The Pilot Point method provided a better fit to the conceptual model of groundwater flow and appeared to provide reasonable estimates of base flow.
Estimates of fertility in Bangladesh.
D'souza, S; Rahman, S
1978-01-01
The attempt is made to estimate fertility levels in Bangladesh on the basis of data collected during the 1974 Census. In the 1st section attention is directed to providing an overall picture of the demographic situation in the country. Comparisons between the 1961 and 1974 data demonstrate that the 1974 Census data provide consistent results. Factors such as the degree of urbanization, literacy, and economic participation rates--considered as indicators of development--all seem to show little progress during the intercensal period. The use of child/woman ratios (CWRs) provides plausible evidence of the likelihood of a fertility decline. A decline in CWR values is small for "all areas" but a marked decline can be noted for "urban areas." The recorded mean number of children is less in 1974 than in 1961 for women under age 35, whereas for the older groups the 1974 Census shows higher mean numbers. The Bangladesh Fertility Survey (BFS) result for the total fertility rate of 6.58 is very close to that estimated from the 1974 Census--6.59. The reverse survival method also indicates that birthrates were lower during the 1969-1974 period.
Detecting Emotional Expression in Face-to-Face and Online Breast Cancer Support Groups
ERIC Educational Resources Information Center
Liess, Anna; Simon, Wendy; Yutsis, Maya; Owen, Jason E.; Piemme, Karen Altree; Golant, Mitch; Giese-Davis, Janine
2008-01-01
Accurately detecting emotional expression in women with primary breast cancer participating in support groups may be important for therapists and researchers. In 2 small studies (N = 20 and N = 16), the authors examined whether video coding, human text coding, and automated text analysis provided consistent estimates of the level of emotional…
ERIC Educational Resources Information Center
Laux, John M.; Perera-Diltz, Dilani; Smirnoff, Jennifer B.; Salyers, Kathleen M.
2005-01-01
The authors investigated the psychometric capabilities of the Face Valid Other Drugs (FVOD) scale of the Substance Abuse Subtle Screening Inventory-3 (SASSI-3; G. A. Miller, 1999). Internal consistency reliability estimates and construct validity factor analysis for 230 college students provided initial support for the psychometric properties of…
Wicke, Jason; Dumas, Genevieve A; Costigan, Patrick A
2009-01-05
Modeling of the body segments to estimate segment inertial parameters is required in the kinetic analysis of human motion. A new geometric model for the trunk has been developed that uses various cross-sectional shapes to estimate segment volume and adopts a non-uniform, gender-specific density function. The goal of this study was to test the accuracy of the new model for estimating the trunk's inertial parameters by comparing it to the models currently used in biomechanical research. Trunk inertial parameters estimated from dual X-ray absorptiometry (DXA) were used as the standard. Twenty-five female and 24 male college-aged participants were recruited for the study. The new model was compared to the accepted models by determining the error between each model's trunk inertial estimates and those from DXA. Results showed that the new model was more accurate across all inertial estimates than the other models. The new model had errors within 6.0% for both genders, whereas the other models had higher average errors ranging from 10% to over 50% and were much more inconsistent between the genders. In addition, there was little consistency in the level of accuracy for the other models when estimating the different inertial parameters. These results suggest that the new model provides more accurate and consistent trunk inertial estimates than the other models for both female and male college-aged individuals. However, similar studies need to be performed using other populations, such as the elderly or individuals of distinct morphology (e.g., obese individuals). In addition, the effect of using different models on the outcome of kinetic parameters, such as joint moments and forces, needs to be assessed.
2013-10-04
The World Health Organization (WHO)-coordinated Global Invasive Bacterial Vaccine-Preventable Diseases (IB-VPD) sentinel hospital surveillance network provides data for decision making regarding use of pneumococcal conjugate vaccine and Haemophilus influenzae type b (Hib) vaccine, both recommended for inclusion in routine childhood immunization programs worldwide. WHO recommends that countries conduct sentinel hospital surveillance for meningitis among children aged <5 years, including collection of cerebrospinal fluid (CSF) for laboratory detection of bacterial etiologies. Surveillance for pneumonia and sepsis is recommended at selected hospitals with well-functioning laboratories where meningitis surveillance consistently meets process indicators (e.g., surveillance performance indicators). To use sentinel hospital surveillance for meningitis to estimate meningitis hospitalization rates, WHO developed a rapid method to estimate the number of children at risk for meningitis in a sentinel hospital catchment area. Monitoring changes in denominators over time using consistent methods is essential for interpreting changes in sentinel surveillance incidence data and for assessing the effect of vaccine introduction on disease epidemiology. This report describes the method and its use in The Gambia and Senegal.
Petkewich, Matthew D.; Conrads, Paul
2013-01-01
The Everglades Depth Estimation Network is an integrated network of real-time water-level gaging stations, a ground-elevation model, and a water-surface elevation model designed to provide scientists, engineers, and water-resource managers with water-level and water-depth information (1991-2013) for the entire freshwater portion of the Greater Everglades. The U.S. Geological Survey Greater Everglades Priority Ecosystems Science provides support for the Everglades Depth Estimation Network in order for the Network to provide quality-assured monitoring data for the U.S. Army Corps of Engineers Comprehensive Everglades Restoration Plan. In a previous study, water-level estimation equations were developed to fill in missing data to increase the accuracy of the daily water-surface elevation model. During this study, those equations were updated because of the addition and removal of water-level gaging stations, the consistent use of water-level data relative to the North American Vertical Datum of 1988, and availability of recent data (March 1, 2006, to September 30, 2011). Up to three linear regression equations were developed for each station by using three different input stations to minimize the occurrences of missing data for an input station. Of the 667 water-level estimation equations developed to fill missing data at 223 stations, more than 72 percent of the equations have coefficients of determination greater than 0.90, and 97 percent have coefficients of determination greater than 0.70.
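The gap-filling scheme lends itself to a compact sketch: for each target station, fit up to three single-input regressions and fill missing days with the highest-priority available prediction. A hedged illustration assuming a pandas DataFrame of daily water levels (NAVD 88) with stations as columns; the column names and the priority ordering are hypothetical, and overlapping observed records are assumed to exist for each input.

```python
import numpy as np
import pandas as pd

def fill_station(df, target, inputs):
    """Fill gaps in `target` using up to three single-input OLS equations.

    df     : DataFrame of daily water levels, one station per column
    target : name of the station to fill (hypothetical column name)
    inputs : list of up to three input-station names, in priority order
    Returns the filled series and the R^2 of each fitted equation."""
    filled, r2 = df[target].copy(), {}
    for st in inputs:
        both = df[[target, st]].dropna()          # overlapping record
        slope, intercept = np.polyfit(both[st], both[target], 1)
        pred = slope * df[st] + intercept
        r2[st] = np.corrcoef(both[st], both[target])[0, 1] ** 2
        filled = filled.fillna(pred)  # earlier inputs take precedence
    return filled, r2
```

Because `fillna` only touches days still missing, the first equation with data available for a given day supplies the estimate, mirroring the "three input stations" fallback described above.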
NASA Astrophysics Data System (ADS)
Geiger, Tobias
2018-04-01
Gross domestic product (GDP) represents a widely used metric to compare economic development across time and space. GDP estimates have been routinely assembled only since the beginning of the second half of the 20th century, making comparisons with prior periods cumbersome or even impossible. In recent years various efforts have been put forward to re-estimate national GDP for specific years in the past centuries and even millennia, providing new insights into past economic development on a snapshot basis. In order to make this wealth of data utilizable across research disciplines, we here present a first continuous and consistent data set of GDP time series for 195 countries from 1850 to 2009, based mainly on data from the Maddison Project and other population and GDP sources. The GDP data are consistent with Penn World Tables v8.1 and future GDP projections from the Shared Socio-economic Pathways (SSPs), and are freely available at http://doi.org/10.5880/pik.2018.010 (Geiger and Frieler, 2018). To ease usability, we additionally provide GDP per capita data and further supplementary and data description files in the online archive. We utilize various methods to handle missing data and discuss the advantages and limitations of our methodology. Despite known shortcomings this data set provides valuable input, e.g., for climate impact research, in order to consistently analyze economic impacts from pre-industrial times to the future.
Riley, William; Briggs, Jill; McCullough, Mac
2011-01-01
This study presents a model for determining the total funding needed by individual local health departments. The aim is to determine the financial resources needed to provide services for statewide local public health departments in Minnesota, based on a gaps analysis estimating the funding needs. We used a multimethod analysis with 3 approaches to estimate gaps in local public health funding: (1) interviews of selected local public health leaders, (2) a Delphi panel, and (3) a Nominal Group Technique. On the basis of these 3 approaches, a consensus estimate of funding gaps was generated for statewide projections. The study includes an analysis of cost, performance, and outcomes from 2005 to 2007 for all 87 local governmental health departments in Minnesota. For each of the methods, we selected a panel to represent a profile of Minnesota health departments. The 2 main outcome measures were local-level gaps in financial resources and total resources needed to provide public health services at the local level. The total public health expenditure in Minnesota for local governmental public health departments was $302 million in 2007 ($58.92 per person). The consensus estimate of the financial gaps in local public health departments indicates that an additional $32.5 million (a 10.7% increase, or $6.32 per person) is needed to adequately serve public health needs in local communities. It is possible to make informed estimates of funding gaps for public health activities on the basis of a combination of quantitative methods. There is wide variation in public health expenditure at the local level, and methods are needed to establish minimum baseline expenditure levels to adequately serve a population. The gaps analysis can be used by stakeholders to inform policy makers of the need for improved funding of the public health system.
Precision estimate for Odin-OSIRIS limb scatter retrievals
NASA Astrophysics Data System (ADS)
Bourassa, A. E.; McLinden, C. A.; Bathgate, A. F.; Elash, B. J.; Degenstein, D. A.
2012-02-01
The limb scatter measurements made by the Optical Spectrograph and Infrared Imaging System (OSIRIS) instrument on the Odin spacecraft are used to routinely produce vertically resolved trace gas and aerosol extinction profiles. Version 5 of the ozone and stratospheric aerosol extinction retrievals, which are available for download, are performed using a multiplicative algebraic reconstruction technique (MART). The MART inversion is a type of relaxation method, and as such the covariance of the retrieved state is estimated numerically, which, if done directly, is a computationally heavy task. Here we provide a methodology for the derivation of a numerical estimate of the covariance matrix for the retrieved state using the MART inversion that is sufficiently efficient to perform for each OSIRIS measurement. The resulting precision is compared with the variability in a large set of pairs of OSIRIS measurements that are close in time and space in the tropical stratosphere where the natural atmospheric variability is weak. These results are found to be highly consistent and thus provide confidence in the numerical estimate of the precision in the retrieved profiles.
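A brute-force illustration of the idea, not the S2MPC production code (which uses a more efficient numerical scheme): run a simple MART inversion of y = Ax repeatedly on noise-perturbed measurements and take the sample covariance of the solutions. The relaxation rule, noise model, and all names below are assumptions.

```python
import numpy as np

def mart(A, y, n_iter=50, relax=1.0):
    """Simple multiplicative ART for y = A @ x with x > 0.
    Assumes nonnegative A with a positive entry in every row."""
    x = np.ones(A.shape[1])
    for _ in range(n_iter):
        for i in range(A.shape[0]):
            ratio = y[i] / (A[i] @ x)
            x = x * ratio ** (relax * A[i] / A[i].max())
    return x

def mart_covariance(A, y, noise_sd, n_samples=200, seed=0):
    """Sample covariance of MART solutions under measurement noise.
    noise_sd should be small enough that perturbed y stays positive."""
    rng = np.random.default_rng(seed)
    sols = [mart(A, y + rng.normal(0.0, noise_sd, y.size))
            for _ in range(n_samples)]
    return np.cov(np.array(sols), rowvar=False)

# Toy forward model: 20 line-of-sight weights onto a 10-layer profile.
rng = np.random.default_rng(1)
A = rng.random((20, 10))
x_true = np.linspace(1.0, 2.0, 10)
cov = mart_covariance(A, A @ x_true, noise_sd=0.05)
print(np.sqrt(np.diag(cov)))  # per-layer precision estimate
```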
NASA Technical Reports Server (NTRS)
McCurry, J. B.
1995-01-01
The purpose of the TA-2 contract was to provide advanced launch vehicle concept definition and analysis to assist NASA in the identification of future launch vehicle requirements. Contracted analysis activities included vehicle sizing and performance analysis, subsystem concept definition, propulsion subsystem definition (foreign and domestic), ground operations and facilities analysis, and life cycle cost estimation. The basic period of performance of the TA-2 contract was from May 1992 through May 1993. No-cost extensions were exercised on the contract from June 1993 through July 1995. This document is part of the final report for the TA-2 contract. The final report consists of three volumes: Volume 1 is the Executive Summary, Volume 2 is Technical Results, and Volume 3 is Program Cost Estimates. The document-at-hand, Volume 3, provides a work breakdown structure dictionary, user's guide for the parametric life cycle cost estimation tool, and final report developed by ECON, Inc., under subcontract to Lockheed Martin on TA-2 for the analysis of heavy lift launch vehicle concepts.
NASA Technical Reports Server (NTRS)
Smith, Phillip N.
1990-01-01
The automation of low-altitude rotorcraft flight depends on the ability to detect, locate, and navigate around obstacles lying in the rotorcraft's intended flightpath. Computer vision techniques provide a passive method of obstacle detection and range estimation, for obstacle avoidance. Several algorithms based on computer vision methods have been developed for this purpose using laboratory data; however, further development and validation of candidate algorithms require data collected from rotorcraft flight. A data base containing low-altitude imagery augmented with the rotorcraft and sensor parameters required for passive range estimation is not readily available. Here, the emphasis is on the methodology used to develop such a data base from flight-test data consisting of imagery, rotorcraft and sensor parameters, and ground-truth range measurements. As part of the data preparation, a technique for obtaining the sensor calibration parameters is described. The data base will enable the further development of algorithms for computer vision-based obstacle detection and passive range estimation, as well as provide a benchmark for verification of range estimates against ground-truth measurements.
Psychological impact of providing women with personalised 10-year breast cancer risk estimates.
French, David P; Southworth, Jake; Howell, Anthony; Harvie, Michelle; Stavrinos, Paula; Watterson, Donna; Sampson, Sarah; Evans, D Gareth; Donnelly, Louise S
2018-05-08
The Predicting Risk of Cancer at Screening (PROCAS) study estimated 10-year breast cancer risk for 53,596 women attending the NHS Breast Screening Programme. The present study, nested within the PROCAS study, aimed to assess the psychological impact of receiving breast cancer risk estimates based on: (a) the Tyrer-Cuzick (T-C) algorithm including breast density, or (b) T-C including breast density plus single-nucleotide polymorphisms (SNPs), versus (c) comparison women awaiting results. A sample of 2138 women from the PROCAS study was stratified by testing group: T-C only, T-C(+SNPs) and comparison women; and by 10-year risk estimate received: 'moderate' (5-7.99%), 'average' (2-4.99%) or 'below average' (<1.99%) risk. Postal questionnaires were returned by 765 (36%) women. Overall state anxiety and cancer worry were low, and similar for women in the T-C only and T-C(+SNPs) groups. Women in both the T-C only and T-C(+SNPs) groups showed lower state anxiety but slightly higher cancer worry than comparison women awaiting results. Risk information had no consistent effects on intentions to change behaviour. Most women were satisfied with the information provided. There was considerable variation in understanding. No major harms of providing women with 10-year breast cancer risk estimates were detected. Research to establish the feasibility of risk-stratified breast screening is warranted.
Discovering graphical Granger causality using the truncating lasso penalty
Shojaie, Ali; Michailidis, George
2010-01-01
Motivation: Components of biological systems interact with each other in order to carry out vital cell functions. Such information can be used to improve estimation and inference, and to obtain better insights into the underlying cellular mechanisms. Discovering regulatory interactions among genes is therefore an important problem in systems biology. Whole-genome expression data over time provides an opportunity to determine how the expression levels of genes are affected by changes in transcription levels of other genes, and can therefore be used to discover regulatory interactions among genes. Results: In this article, we propose a novel penalization method, called truncating lasso, for estimation of causal relationships from time-course gene expression data. The proposed penalty can correctly determine the order of the underlying time series, and improves the performance of the lasso-type estimators. Moreover, the resulting estimate provides information on the time lag between activation of transcription factors and their effects on regulated genes. We provide an efficient algorithm for estimation of model parameters, and show that the proposed method can consistently discover causal relationships in the large p, small n setting. The performance of the proposed model is evaluated favorably in simulated, as well as real, data examples. Availability: The proposed truncating lasso method is implemented in the R-package ‘grangerTlasso’ and is freely available at http://www.stat.lsa.umich.edu/∼shojaie/ Contact: shojaie@umich.edu PMID:20823316
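The lasso-based Granger step can be sketched with an ordinary lasso; the paper's truncating penalty, which also estimates the order of the series, is substituted here by a fixed maximum lag, and all names are illustrative.

```python
import numpy as np
from sklearn.linear_model import Lasso

def lasso_granger(X, max_lag, alpha=0.05):
    """Granger-causal links from time-course expression X of shape (T, p).
    Returns coefs of shape (p, max_lag, p): effect of gene j at lag k on
    gene g is coefs[g, k-1, j]; nonzero entries are candidate links."""
    T, p = X.shape
    # Lagged design: row for time t holds X[t-1], ..., X[t-max_lag].
    Z = np.array([np.concatenate([X[t - k] for k in range(1, max_lag + 1)])
                  for t in range(max_lag, T)])
    coefs = np.empty((p, max_lag, p))
    for g in range(p):
        fit = Lasso(alpha=alpha).fit(Z, X[max_lag:, g])
        coefs[g] = fit.coef_.reshape(max_lag, p)
    return coefs

# Toy series: gene 0 drives gene 1 at lag 1.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
for t in range(1, 200):
    X[t, 1] += 0.8 * X[t - 1, 0]
print(lasso_granger(X, max_lag=2)[1, 0, 0])  # ~0.75, shrunk by the penalty
```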
Systemic autoimmune rheumatic disease prevalence in Canada: updated analyses across 7 provinces.
Broten, Laurel; Aviña-Zubieta, J Antonio; Lacaille, Diane; Joseph, Lawrence; Hanly, John G; Lix, Lisa; O'Donnell, Siobhan; Barnabe, Cheryl; Fortin, Paul R; Hudson, Marie; Jean, Sonia; Peschken, Christine; Edworthy, Steven M; Svenson, Larry; Pineau, Christian A; Clarke, Ann E; Smith, Mark; Bélisle, Patrick; Badley, Elizabeth M; Bergeron, Louise; Bernatsky, Sasha
2014-04-01
To estimate systemic autoimmune rheumatic disease (SARD) prevalence across 7 Canadian provinces using population-based administrative data evaluating both regional variations and the effects of age and sex. Using provincial physician billing and hospitalization data, cases of SARD (systemic lupus erythematosus, scleroderma, primary Sjögren syndrome, polymyositis/dermatomyositis) were ascertained. Three case definitions (rheumatology billing, 2-code physician billing, and hospital diagnosis) were combined to derive a SARD prevalence estimate for each province, categorized by age, sex, and rural/urban status. A hierarchical Bayesian latent class regression model was fit to account for the imperfect sensitivity and specificity of each case definition. The model also provided sensitivity estimates of different case definition approaches. Prevalence estimates for overall SARD ranged between 2 and 5 cases per 1000 residents across provinces. Similar demographic trends were evident across provinces, with greater prevalence in women and in persons over 45 years old. SARD prevalence in women over 45 was close to 1%. Overall sensitivity was poor, but estimates for each of the 3 case definitions improved within older populations and were slightly higher for men compared to women. Our results are consistent with previous estimates and other North American findings, and provide results from coast to coast, as well as useful information about the degree of regional and demographic variations that can be seen within a single country. Our work demonstrates the usefulness of using multiple data sources, adjusting for the error in each, and providing estimates of the sensitivity of different case definition approaches.
Real-time data for estimating a forward-looking interest rate rule of the ECB.
Bletzinger, Tilman; Wieland, Volker
2017-12-01
The purpose of the data presented in this article is to use it in ex post estimations of interest rate decisions by the European Central Bank (ECB), as it is done by Bletzinger and Wieland (2017) [1]. The data is of quarterly frequency from 1999 Q1 until 2013 Q2 and consists of the ECB's policy rate, inflation rate, real output growth and potential output growth in the euro area. To account for forward-looking decision making in the interest rate rule, the data consists of expectations about future inflation and output dynamics. While potential output is constructed based on data from the European Commission's annual macro-economic database, inflation and real output growth are taken from two different sources both provided by the ECB: the Survey of Professional Forecasters and projections made by ECB staff. Careful attention was given to the publication date of the collected data to ensure a real-time dataset only consisting of information which was available to the decision makers at the time of the decision.
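A hedged sketch of the kind of ex post estimation this dataset supports, not the specification or method of Bletzinger and Wieland (2017): a partial-adjustment (interest-rate-smoothing) rule fit by OLS on synthetic stand-in series for the SPF/ECB-staff expectations.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical quarterly series standing in for the dataset's variables:
# policy rate i, expected inflation pi_e, expected output growth dy_e,
# potential output growth dy_pot.
rng = np.random.default_rng(1)
T = 58  # 1999 Q1 - 2013 Q2
pi_e = rng.normal(1.9, 0.4, T)
dy_e = rng.normal(1.5, 1.0, T)
dy_pot = np.full(T, 1.2)
i = 1.0 + 1.5 * pi_e + 0.5 * (dy_e - dy_pot) + rng.normal(0, 0.2, T)

# i_t = rho*i_{t-1} + (1-rho)*(a + b*E[pi] + c*(E[dy]-dy_pot)) + eps
X = sm.add_constant(np.column_stack([i[:-1], pi_e[1:], dy_e[1:] - dy_pot[1:]]))
fit = sm.OLS(i[1:], X).fit()
print(fit.params)  # const, rho, (1-rho)*b, (1-rho)*c
```

The structural response coefficients are recovered by dividing the last two estimates by (1 - rho).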
Measuring the Reliability of Picture Story Exercises like the TAT
Gruber, Nicole; Kreuzpointner, Ludwig
2013-01-01
As frequently reported, psychometric assessments of Picture Story Exercises, especially variations of the Thematic Apperception Test, mostly reveal inadequate scores for internal consistency. We demonstrate that the reason for this apparent shortcoming is not the coding system itself but the incorrect use of internal consistency coefficients, especially Cronbach’s α. This problem can be eliminated by using the category-scores as items instead of the picture-scores. In addition to a theoretical explanation, we prove mathematically why the use of category-scores produces an adequate internal consistency estimation and examine our idea empirically with the original data set of the Thematic Apperception Test by Heckhausen and two additional data sets. We found generally higher values when using the category-scores as items instead of picture-scores. From an empirical and theoretical point of view, the estimated reliability is also superior to treating each category within a picture as an item. When comparing our suggestion with a multifaceted Rasch model, we provide evidence that our procedure better fits the underlying principles of PSE. PMID:24348902
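The computation itself is small enough to sketch; whether the columns are picture-scores or category-scores is just a choice of item matrix, which is the paper's point. The data below are hypothetical Poisson counts.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_persons, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Category-scores as items: rows = test takers, columns = motive categories
# summed over all pictures (hypothetical data).
scores = np.random.default_rng(0).poisson(2.0, size=(40, 6))
print(cronbach_alpha(scores))
```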
Toward link predictability of complex networks
Lü, Linyuan; Pan, Liming; Zhou, Tao; Zhang, Yi-Cheng; Stanley, H. Eugene
2015-01-01
The organization of real networks usually embodies both regularities and irregularities, and, in principle, the former can be modeled. The extent to which the formation of a network can be explained coincides with our ability to predict missing links. To understand network organization, we should be able to estimate link predictability. We assume that the regularity of a network is reflected in the consistency of structural features before and after a random removal of a small set of links. Based on the perturbation of the adjacency matrix, we propose a universal structural consistency index that is free of prior knowledge of network organization. Extensive experiments on disparate real-world networks demonstrate that (i) structural consistency is a good estimation of link predictability and (ii) a derivative algorithm outperforms state-of-the-art link prediction methods in both accuracy and robustness. This analysis has further applications in evaluating link prediction algorithms and monitoring sudden changes in evolving network mechanisms. It will provide unique fundamental insights into the above-mentioned academic research fields, and will foster the development of advanced information filtering technologies of interest to information technology practitioners. PMID:25659742
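A compact sketch of the structural consistency index as described: remove a fraction of links, perturb the eigenvalues of the remainder to first order (eigenvectors held fixed, so degenerate spectra are not treated specially here), and score how many removed links are recovered among the top-ranked candidate pairs. Parameter names are illustrative.

```python
import numpy as np

def structural_consistency(A, frac=0.1, seed=0):
    """First-order eigen-perturbation estimate of link predictability.
    A: symmetric 0/1 adjacency matrix (float), no self-loops."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    iu, ju = np.triu_indices(n, k=1)
    link_idx = np.flatnonzero(A[iu, ju])
    drop = rng.choice(link_idx, size=max(1, int(frac * link_idx.size)),
                      replace=False)
    dA = np.zeros((n, n))
    dA[iu[drop], ju[drop]] = dA[ju[drop], iu[drop]] = 1.0
    lam, X = np.linalg.eigh(A - dA)            # spectrum of the remainder A_R
    dlam = np.einsum('ik,ij,jk->k', X, dA, X)  # x_k^T dA x_k
    A_tilde = (X * (lam + dlam)) @ X.T         # perturbed reconstruction
    cand = A[iu, ju] - dA[iu, ju] == 0         # pairs absent from A_R
    order = np.argsort(-A_tilde[iu, ju][cand])
    top = np.flatnonzero(cand)[order[:drop.size]]
    return np.isin(top, drop).mean()           # consistency index

# Sparse random graph demo.
rng = np.random.default_rng(1)
A = np.triu((rng.random((100, 100)) < 0.08).astype(float), 1)
A = A + A.T
print(structural_consistency(A))
```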
Influence of sectioning location on age estimates from common carp dorsal spines
Watkins, Carson J.; Klein, Zachary B.; Terrazas, Marc M.; Quist, Michael C.
2015-01-01
Dorsal spines have been shown to provide precise age estimates for Common CarpCyprinus carpio and are commonly used by management agencies to gain information on Common Carp populations. However, no previous studies have evaluated variation in the precision of age estimates obtained from different sectioning locations along Common Carp dorsal spines. We evaluated the precision, relative readability, and distribution of age estimates obtained from various sectioning locations along Common Carp dorsal spines. Dorsal spines from 192 Common Carp were sectioned at the base (section 1), immediately distal to the basal section (section 2), and at 25% (section 3), 50% (section 4), and 75% (section 5) of the total length of the dorsal spine. The exact agreement and within-1-year agreement among readers was highest and the coefficient of variation lowest for section 2. In general, age estimates derived from sections 2 and 3 had similar age distributions and displayed the highest concordance in age estimates with section 1. Our results indicate that sections taken at ≤ 25% of the total length of the dorsal spine can be easily interpreted and provide precise estimates of Common Carp age. The greater consistency in age estimates obtained from section 2 indicates that by using a standard sectioning location, fisheries scientists can expect age-based estimates of population metrics to be more comparable and thus more useful for understanding Common Carp population dynamics.
Mathematical methods in biological dosimetry: the 1996 Iranian accident.
Voisin, P; Assaei, R G; Heidary, A; Varzegar, R; Zakeri, F; Durand, V; Sorokine-Durm, I
2000-11-01
To report 18 months of cytogenetic follow-up for an Iranian worker accidentally overexposed to 192Ir, together with mathematical extrapolation and comparison with clinical data. Unstable chromosome aberrations were measured using conventional cytogenetic tests by French and Iranian biological dosimetry laboratories on five occasions after the exposure. The decrease in dicentrics over time was analysed mathematically. In addition, Dolphin and Qdr extrapolations were applied to the data to check the exposure estimates. FISH determination of translocation yields was performed twice by the French laboratory and the results compared with the Dolphin and Qdr corrected values. Dose estimates based on dicentrics decreased from 3.1 ± 0.4 Gy at 5 days after the accident to 0.8 ± 0.2 Gy at 529 days. This could be fitted by a double-exponential regression with an inflexion point between rapid and slow decrease of dicentrics after about 40 days. Dose estimates of 3.4 ± 0.4 Gy for the Qdr model and 3.6 ± 0.5 Gy for the Dolphin model were calculated during the post-exposure period and were remarkably stable. FISH translocation data at 26 and 61 days appeared consistent with the Dolphin and Qdr estimates. Dose correction by the Qdr and Dolphin models and translocation scoring appeared consistent with the clinical data and provided better information about the radiation injury than did crude estimates from dicentric scoring alone. Estimation by the Dolphin model of the irradiated fraction of the body seemed unreliable: it correlated better with the fraction of originally irradiated lymphocytes.
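The reported time course invites a small curve-fitting sketch: a double exponential fit by nonlinear least squares. Only the 5-day (3.1 Gy) and 529-day (0.8 Gy) points come from the abstract; the intermediate values and starting guesses below are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def dicentric_dose(t, a, tau1, b, tau2):
    """Dose estimate vs. time: fast + slow lymphocyte turnover
    gives a double-exponential decay."""
    return a * np.exp(-t / tau1) + b * np.exp(-t / tau2)

# Follow-up data (Gy); only the first and last points are from the abstract.
t = np.array([5.0, 40.0, 120.0, 250.0, 529.0])
dose = np.array([3.1, 2.0, 1.4, 1.0, 0.8])
p, _ = curve_fit(dicentric_dose, t, dose, p0=(2.0, 30.0, 1.0, 600.0),
                 maxfev=20000)
print(p)  # a, tau1 (fast), b, tau2 (slow); inflexion near ~40 days
```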
Investigating the detection of multi-homed devices independent of operating systems
2017-09-01
Timestamp data was used to estimate clock skews using linear regression and linear optimization methods. Analysis revealed that detection depends on the consistency of the estimated clock skew. Through vertical testing, it was also shown that clock skew consistency depends on the installed...
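The core estimation step can be sketched in a few lines: regress the offset between device timestamps and local capture time on local time; the slope is the skew. The data below are synthetic.

```python
import numpy as np

def clock_skew_ppm(local_times, remote_timestamps):
    """Clock skew of a remote device in parts per million.
    The slope of (remote offset) vs. (local time), fitted by
    least squares, is the skew."""
    offsets = np.asarray(remote_timestamps) - np.asarray(local_times)
    slope, _intercept = np.polyfit(local_times, offsets, 1)
    return slope * 1e6

# Hypothetical device whose clock runs 50 ppm fast, with jitter.
t = np.linspace(0, 3600, 500)
remote = t * (1 + 50e-6) + np.random.default_rng(0).normal(0, 1e-3, t.size)
print(round(clock_skew_ppm(t, remote), 1))  # ~50.0
```

Detecting a multi-homed host then amounts to checking whether skews estimated on different interfaces are consistent with a single underlying clock.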
Inferring Lower Boundary Driving Conditions Using Vector Magnetic Field Observations
NASA Technical Reports Server (NTRS)
Schuck, Peter W.; Linton, Mark; Leake, James; MacNeice, Peter; Allred, Joel
2012-01-01
Low-beta coronal MHD simulations of realistic CME events require the detailed specification of the magnetic fields, velocities, densities, temperatures, etc., in the low corona. Presently, the most accurate estimates of solar vector magnetic fields are made in the high-beta photosphere. Several techniques have been developed that provide accurate estimates of the associated photospheric plasma velocities such as the Differential Affine Velocity Estimator for Vector Magnetograms and the Poloidal/Toroidal Decomposition. Nominally, these velocities are consistent with the evolution of the radial magnetic field. To evolve the tangential magnetic field radial gradients must be specified. In addition to estimating the photospheric vector magnetic and velocity fields, a further challenge involves incorporating these fields into an MHD simulation. The simulation boundary must be driven, consistent with the numerical boundary equations, with the goal of accurately reproducing the observed magnetic fields and estimated velocities at some height within the simulation. Even if this goal is achieved, many unanswered questions remain. How can the photospheric magnetic fields and velocities be propagated to the low corona through the transition region? At what cadence must we observe the photosphere to realistically simulate the corona? How do we model the magnetic fields and plasma velocities in the quiet Sun? How sensitive are the solutions to other unknowns that must be specified, such as the global solar magnetic field, and the photospheric temperature and density?
Estimating age of sea otters with cementum layers in the first premolar
Bodkin, James L.; Ames, J.A.; Jameson, R.J.; Johnson, A.M.; Matson, G.M.
1997-01-01
We assessed sources of variation in the use of tooth cementum layers to determine age by comparing counts in premolar tooth sections to the known ages of 20 sea otters (Enhydra lutris). Three readers examined each sample 3 times, and the 3 readings of each sample were averaged by reader to provide the mean estimated age. The mean (SE) of the known-age sample was 5.2 years (1.0), and the 3 mean estimated ages were 7.0 (1.0), 5.9 (1.1), and 4.4 (0.8). The proportions of estimates accurate to within ±1 year were 0.25, 0.55, and 0.65, and to within ±2 years, 0.65, 0.80, and 0.70, by reader. The proportions of samples estimated with >3 years of error were 0.20, 0.10, and 0.05. Errors as large as 7, 6, and 5 years were made among readers. In few instances did all readers uniformly provide accurate (error ≤1 yr) counts; in most cases (0.85), 1 or 2 of the readers provided accurate counts. Coefficients of determination (R2) between known ages and mean estimated ages were 0.81, 0.87, and 0.87, by reader. The results of this study suggest that cementum layers within sea otter premolar teeth likely are deposited annually and can be used for age estimation. However, the criteria used in interpreting layers apparently varied by reader, occasionally resulting in large errors that were not consistent among readers. While large errors were evident for some individual otters, there were no differences between the known and estimated age-class distributions generated by each reader. Until accuracy can be improved, application of this ageing technique should be limited to sample sizes of at least 6-7 individuals within age classes of ≥1 year.
Baker, David R; Barron, Leon; Kasprzyk-Hordern, Barbara
2014-07-15
This paper presents, for the first time, community-wide estimation of drug and pharmaceutical consumption in England using wastewater analysis and a large number of compounds. Among the groups of compounds studied were stimulants, hallucinogens and their metabolites, opioids, morphine derivatives, benzodiazepines, antidepressants and others. The results showed the usefulness of wastewater analysis for providing estimates of local community drug consumption. Where target compounds could be compared to NHS prescription statistics, good agreement was apparent between the two sets of data. These compounds include oxycodone, dihydrocodeine, methadone, tramadol, temazepam and diazepam. In contrast, discrepancies were observed for propoxyphene, codeine, dosulepin and venlafaxine (overestimations in each case except codeine). Potential reasons for discrepancies include: sales of drugs sold without prescription and not included within NHS data, abuse of a drug trafficked through illegal sources, different consumption patterns in different areas, direct disposal leading to overestimations when the parent compound is used as the drug target residue, and excretion factors not being representative of the local community. Notably, using a metabolite (rather than the parent drug) as a biomarker leads to higher certainty in the obtained estimates. With regard to illicit drugs, consistent and logical results were reported. Monitoring of these compounds over a one-week period highlighted the expected recreational use of many of these drugs (e.g. cocaine and MDMA) and the more consistent use of others (e.g. methadone). Copyright © 2014 Elsevier B.V. All rights reserved.
Center of Mass Estimation for a Spinning Spacecraft Using Doppler Shift of the GPS Carrier Frequency
NASA Technical Reports Server (NTRS)
Sedlak, Joseph E.
2016-01-01
A sequential filter is presented for estimating the center of mass (CM) of a spinning spacecraft using Doppler shift data from a set of onboard Global Positioning System (GPS) receivers. The advantage of the proposed method is that it is passive and can be run continuously in the background without using commanded thruster firings to excite spacecraft dynamical motion for observability. The NASA Magnetospheric Multiscale (MMS) mission is used as a test case for the CM estimator. The four MMS spacecraft carry star cameras for accurate attitude and spin rate estimation. The angle between the spacecraft nominal spin axis (for MMS this is the geometric body Z-axis) and the major principal axis of inertia is called the coning angle. The transverse components of the estimated rate provide a direct measure of the coning angle. The coning angle has been seen to shift slightly after every orbit and attitude maneuver. This change is attributed to a small asymmetry in the fuel distribution that changes with each burn. This paper shows a correlation between the apparent mass asymmetry deduced from the variations in the coning angle and the CM estimates made using the GPS Doppler data. The consistency between the changes in the coning angle and the CM provides validation of the proposed GPS Doppler method for estimation of the CM on spinning spacecraft.
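A hedged sketch of the underlying observability, not the sequential filter itself: a lateral CM offset rho on a spinner imprints a spin-synchronous sinusoid of velocity amplitude omega*rho on the line-of-sight Doppler residuals, so a linear least-squares fit of cosine and sine terms recovers the offset. All numbers below are invented.

```python
import numpy as np

def cm_offset_from_doppler(t, v_resid, omega):
    """Fit v_resid ~ omega*rho*cos(omega*t + phi) by linear least squares.
    Returns (rho, phi): lateral CM offset and spin phase."""
    H = np.column_stack([np.cos(omega * t), np.sin(omega * t)])
    a, b = np.linalg.lstsq(H, v_resid, rcond=None)[0]
    rho = np.hypot(a, b) / omega   # velocity amplitude / spin rate
    phi = np.arctan2(-b, a)
    return rho, phi

# Hypothetical spinner: 3 rpm, 5 mm lateral CM offset, noisy line-of-sight
# velocity residuals (m/s).
omega = 2 * np.pi * 3 / 60
t = np.linspace(0, 120, 2000)
v = omega * 0.005 * np.cos(omega * t + 0.7)
v += np.random.default_rng(0).normal(0, 1e-4, t.size)
rho, phi = cm_offset_from_doppler(t, v, omega)
print(rho)  # ~0.005 m
```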
Statistical processing of large image sequences.
Khellah, F; Fieguth, P; Murray, M J; Allen, M
2005-01-01
The dynamic estimation of large-scale stochastic image sequences, as frequently encountered in remote sensing, is important in a variety of scientific applications. However, the size of such images makes conventional dynamic estimation methods, for example, the Kalman and related filters, impractical. In this paper, we present an approach that emulates the Kalman filter, but with considerably reduced computational and storage requirements. Our approach is illustrated in the context of a 512 × 512 image sequence of ocean surface temperature. The static estimation step, the primary contribution here, uses a mixture of stationary models to accurately mimic the effect of a nonstationary prior, reducing computational complexity and simplifying modeling. Our approach provides an efficient, stable, positive-definite model which is consistent with the given correlation structure. Thus, the methods of this paper may find application in modeling and single-frame estimation.
Value of neonicotinoid seed treatments to US soybean farmers.
Hurley, Terrance; Mitchell, Paul
2017-01-01
The benefits of neonicotinoid seed treatment to soybean farmers have received increased scrutiny. Rather than use data from small-plot experiments, this research uses survey data from 500 US farmers to estimate the benefit of neonicotinoid seed treatments to them. As seed treatment users, farmers are familiar with their benefits in the field and have economic incentives to only use them if they provide value. Of the surveyed farmers, 51% used insecticide seed treatments, averaging 87% of their soybean area. Farmers indicated that human and environmental safety is an important consideration affecting their pest management decisions and reported aphids as the most managed and important soybean pest. Asking farmers who used seed treatments to state how much value they provided gives an estimate of $US 28.04 ha⁻¹ treated in 2013, net of seed treatment costs. Farmer-reported average yields provided an estimated average yield gain of 128.0 kg ha⁻¹ treated in 2013, or about $US 42.20 ha⁻¹ treated, net of seed treatment costs. These estimates using different data and methods are consistent and suggest the value of insecticide seed treatments to the US soybean farmers who used them in 2013 was around $US 28-42 ha⁻¹ treated, net of seed treatment costs. © 2016 Society of Chemical Industry.
Kappa statistic for clustered matched-pair data.
Yang, Zhao; Zhou, Ming
2014-07-10
Kappa statistic is widely used to assess the agreement between two procedures in the independent matched-pair data. For matched-pair data collected in clusters, on the basis of the delta method and sampling techniques, we propose a nonparametric variance estimator for the kappa statistic without within-cluster correlation structure or distributional assumptions. The results of an extensive Monte Carlo simulation study demonstrate that the proposed kappa statistic provides consistent estimation and the proposed variance estimator behaves reasonably well for at least a moderately large number of clusters (e.g., K ≥50). Compared with the variance estimator ignoring dependence within a cluster, the proposed variance estimator performs better in maintaining the nominal coverage probability when the intra-cluster correlation is fair (ρ ≥0.3), with more pronounced improvement when ρ is further increased. To illustrate the practical application of the proposed estimator, we analyze two real data examples of clustered matched-pair data. Copyright © 2014 John Wiley & Sons, Ltd.
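A sketch of the statistic with a cluster-respecting variance: Cohen's kappa from pooled 2x2 tables plus a leave-one-cluster-out jackknife standard error. The jackknife is a simple stand-in for the authors' delta-method estimator, chosen here because it likewise avoids within-cluster correlation assumptions; the data are hypothetical.

```python
import numpy as np

def kappa(tables):
    """Cohen's kappa from pooled 2x2 tables (one table per cluster)."""
    n = np.sum(tables, axis=0).astype(float)
    N = n.sum()
    po = np.trace(n) / N                      # observed agreement
    pe = (n.sum(0) @ n.sum(1)) / N**2         # chance agreement
    return (po - pe) / (1 - pe)

def kappa_jackknife_se(tables):
    """Leave-one-cluster-out jackknife standard error of kappa."""
    K = len(tables)
    loo = np.array([kappa(np.delete(tables, k, axis=0)) for k in range(K)])
    return np.sqrt((K - 1) / K * np.sum((loo - loo.mean()) ** 2))

# Hypothetical clustered matched-pair data: 60 clusters, each a 2x2 table
# of (procedure A outcome) x (procedure B outcome) counts.
rng = np.random.default_rng(0)
tables = rng.multinomial(12, [0.45, 0.1, 0.1, 0.35], size=60).reshape(60, 2, 2)
print(kappa(tables), kappa_jackknife_se(tables))
```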
Robust Methods for Moderation Analysis with a Two-Level Regression Model.
Yang, Miao; Yuan, Ke-Hai
2016-01-01
Moderation analysis has many applications in social sciences. Most widely used estimation methods for moderation analysis assume that errors are normally distributed and homoscedastic. When these assumptions are not met, the results from a classical moderation analysis can be misleading. For more reliable moderation analysis, this article proposes two robust methods with a two-level regression model when the predictors do not contain measurement error. One method is based on maximum likelihood with Student's t distribution and the other is based on M-estimators with Huber-type weights. An algorithm for obtaining the robust estimators is developed. Consistent estimates of standard errors of the robust estimators are provided. The robust approaches are compared against normal-distribution-based maximum likelihood (NML) with respect to power and accuracy of parameter estimates through a simulation study. Results show that the robust approaches outperform NML under various distributional conditions. Application of the robust methods is illustrated through a real data example. An R program is developed and documented to facilitate the application of the robust methods.
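The Huber-weight variant maps directly onto statsmodels' M-estimation routine. A hedged single-level sketch (the paper's model is two-level, so this illustrates the estimator, not the method); the coefficients and heavy-tailed errors are invented, and the x*m coefficient carries the moderation effect.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical moderation data: y = b0 + b1*x + b2*m + b3*x*m + error,
# with heavy-tailed (Student's t) errors violating normality.
rng = np.random.default_rng(0)
n = 300
x, m = rng.normal(size=n), rng.normal(size=n)
y = 0.5 + 0.4 * x + 0.3 * m + 0.25 * x * m + rng.standard_t(df=3, size=n)

X = sm.add_constant(np.column_stack([x, m, x * m]))
robust = sm.RLM(y, X, M=sm.robust.norms.HuberT()).fit()
print(robust.params)  # last entry: moderation (interaction) effect
print(robust.bse)     # standard errors of the robust estimates
```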
Sentinel 2 products and data quality status
NASA Astrophysics Data System (ADS)
Clerc, Sebastien; Gascon, Ferran; Bouzinac, Catherine; Touli-Lebreton, Dimitra; Francesconi, Benjamin; Lafrance, Bruno; Louis, Jerome; Alhammoud, Bahjat; Massera, Stephane; Pflug, Bringfried; Viallefont, Francoise; Pessiot, Laetitia
2017-04-01
Since July 2015, Sentinel-2A provides high-quality multi-spectral images with 10 m spatial resolution. With the launch of Sentinel-2B scheduled for early March 2017, the mission will create a consistent time series with a revisit time of 5 days. The consistency of the time series is ensured by some specific performance requirements such as multi-temporal spatial co-registration and radiometric stability, routinely monitored by the Sentinel-2 Mission Performance Centre (S2MPC). The products also provide a rich set of metadata and auxiliary data to support higher-level processing. This presentation will focus on the current status of the Sentinel-2 L1C and L2A products, including dissemination and product format aspects. Up-to-date mission performance estimations will be presented. Finally we will provide an outlook on the future evolutions: commissioning tasks for Sentinel-2B, geometric refinement, product format and processing improvements.
Huang, Jian; Zhang, Cun-Hui
2013-01-01
The ℓ1-penalized method, or the Lasso, has emerged as an important tool for the analysis of large data sets. Many important results have been obtained for the Lasso in linear regression which have led to a deeper understanding of high-dimensional statistical problems. In this article, we consider a class of weighted ℓ1-penalized estimators for convex loss functions of a general form, including the generalized linear models. We study the estimation, prediction, selection and sparsity properties of the weighted ℓ1-penalized estimator in sparse, high-dimensional settings where the number of predictors p can be much larger than the sample size n. Adaptive Lasso is considered as a special case. A multistage method is developed to approximate concave regularized estimation by applying an adaptive Lasso recursively. We provide prediction and estimation oracle inequalities for single- and multi-stage estimators, a general selection consistency theorem, and an upper bound for the dimension of the Lasso estimator. Important models including the linear regression, logistic regression and log-linear models are used throughout to illustrate the applications of the general results. PMID:24348100
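The adaptive Lasso special case can be sketched as a one-step weighted ℓ1 fit: derive weights from an initial estimate, rescale the columns, run a standard lasso, and undo the scaling; the multistage method applies this recursively. The ridge initializer and constants below are illustrative choices.

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

def adaptive_lasso(X, y, alpha=0.1, gamma=1.0):
    """One-step adaptive Lasso via column rescaling."""
    init = Ridge(alpha=1.0).fit(X, y).coef_     # initial estimator
    w = 1.0 / (np.abs(init) ** gamma + 1e-8)    # penalty weights
    fit = Lasso(alpha=alpha).fit(X / w, y)      # weighted l1 problem
    return fit.coef_ / w                        # back to original scale

# Sparse toy problem: only 3 of 50 predictors are active.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
beta = np.zeros(50)
beta[:3] = [2.0, -1.5, 1.0]
y = X @ beta + rng.normal(scale=0.5, size=200)
print(np.flatnonzero(np.abs(adaptive_lasso(X, y)) > 1e-6))  # -> [0 1 2]
```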
Meng, Qing-Hao; Yang, Wei-Xing; Wang, Yang; Zeng, Ming
2011-01-01
This paper addresses the collective odor source localization (OSL) problem in a time-varying airflow environment using mobile robots. A novel OSL methodology which combines odor-source probability estimation and multiple robots’ search is proposed. The estimation phase consists of two steps: firstly, the separate probability-distribution map of odor source is estimated via Bayesian rules and fuzzy inference based on a single robot’s detection events; secondly, the separate maps estimated by different robots at different times are fused into a combined map by way of distance based superposition. The multi-robot search behaviors are coordinated via a particle swarm optimization algorithm, where the estimated odor-source probability distribution is used to express the fitness functions. In the process of OSL, the estimation phase provides the prior knowledge for the searching while the searching verifies the estimation results, and both phases are implemented iteratively. The results of simulations for large-scale advection–diffusion plume environments and experiments using real robots in an indoor airflow environment validate the feasibility and robustness of the proposed OSL method. PMID:22346650
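The first estimation step can be sketched as a Bayesian occupancy-grid update from binary detection events. The Gaussian "plume" likelihood below is a crude stand-in for the paper's fuzzy-inference model, and the map fusion across robots and the PSO-coordinated search are omitted; all names and constants are assumptions.

```python
import numpy as np

def update_source_map(prob, positions, detections, plume_radius=3.0):
    """Bayesian update of an odor-source probability grid.

    prob       : (H, W) prior probability map of the source location
    positions  : list of (row, col) robot positions
    detections : list of booleans (odor detected at each position)"""
    H, W = prob.shape
    rr, cc = np.mgrid[0:H, 0:W]
    for (r, c), hit in zip(positions, detections):
        d = np.hypot(rr - r, cc - c)
        # Detection more likely the closer the source cell is to the robot.
        p_detect = 0.05 + 0.9 * np.exp(-(d / plume_radius) ** 2)
        prob = prob * (p_detect if hit else 1.0 - p_detect)
        prob /= prob.sum()  # renormalize the posterior
    return prob

prior = np.full((30, 30), 1 / 900)
post = update_source_map(prior, [(5, 5), (10, 20)], [False, True])
print(np.unravel_index(post.argmax(), post.shape))  # most probable cell
```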
Physics of ultra-high bioproductivity in algal photobioreactors
NASA Astrophysics Data System (ADS)
Greenwald, Efrat; Gordon, Jeffrey M.; Zarmi, Yair
2012-04-01
Cultivating algae at high densities in thin photobioreactors engenders time scales for random cell motion that approach photosynthetic rate-limiting time scales. This synchronization allows bioproductivity above that achieved with conventional strategies. We show that a diffusion model for cell motion (1) accounts for high bioproductivity at irradiance values previously deemed restricted by photoinhibition, (2) predicts the existence of optimal culture densities and their dependence on irradiance, consistent with available data, (3) accounts for the observed degree to which mixing improves bioproductivity, and (4) provides an estimate of effective cell diffusion coefficients, in accord with independent hydrodynamic estimates.
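The timescale argument is easy to make concrete with assumed numbers (they are not taken from the paper): the diffusion time L^2/(2D) across the light path of a thin reactor approaches millisecond-scale photosynthetic turnover only for millimeter-scale paths.

```python
# Order-of-magnitude check (values assumed for illustration).
D = 1e-7  # m^2/s, assumed effective cell diffusivity under mixing
for L_mm in (1, 5, 10):
    L = L_mm * 1e-3
    tau = L**2 / (2 * D)  # characteristic diffusion time across the path
    print(f"L = {L_mm:2d} mm -> tau = {tau * 1e3:8.1f} ms")
# Photosynthetic rate-limiting (dark-reaction) times are of order 1-10 ms,
# so only millimeter light paths bring the two timescales together.
```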
Digital detection and processing of laser beacon signals for aircraft collision hazard warning
NASA Technical Reports Server (NTRS)
Sweet, L. M.; Miles, R. B.; Russell, G. F.; Tomeh, M. G.; Webb, S. G.; Wong, E. Y.
1981-01-01
A low-cost collision hazard warning system suitable for implementation in both general and commercial aviation is presented. Laser beacon systems are used as sources of accurate relative position information that are not dependent on communication between aircraft or with the ground. The beacon system consists of a rotating low-power laser beacon, detector arrays with special optics for wide angle acceptance and filtering of solar background light, microprocessors for proximity and relative trajectory computation, and pilot displays of potential hazards. The laser beacon system provides direct measurements of relative aircraft positions; using optimal nonlinear estimation theory, the measurements resulting from the current beacon sweep are combined with previous data to provide the best estimate of aircraft proximity, heading, minimum passing distance, and time to closest approach.
Royer, Lucas; Krupa, Alexandre; Dardenne, Guillaume; Le Bras, Anthony; Marchand, Eric; Marchal, Maud
2017-01-01
In this paper, we present a real-time approach that allows tracking of deformable structures in 3D ultrasound sequences. Our method consists of obtaining the target displacements by combining robust dense motion estimation and mechanical model simulation. We evaluate our method on simulated data, phantom data, and real data. Results demonstrate that this novel approach has the advantage of providing correct motion estimation with respect to different ultrasound shortcomings, including speckle noise, large shadows and ultrasound gain variation. Furthermore, we show the good performance of our method with respect to state-of-the-art techniques by testing on the 3D databases provided by the MICCAI CLUST'14 and CLUST'15 challenges. Copyright © 2016 Elsevier B.V. All rights reserved.
Detailed gravity anomalies from GEOS-3 satellite altimetry data
NASA Technical Reports Server (NTRS)
Gopalapillai, G. S.; Mourad, A. G.
1978-01-01
A technique for deriving mean gravity anomalies from dense altimetry data was developed, using a combination of deterministic and statistical methods. The basic mathematical model was based on the Stokes equation, which describes the analytical relationship between mean gravity anomalies and the geoid undulation at a point; this undulation is a linear function of the altimetry data at that point. The overdetermined problem resulting from the excessive altimetry data available was solved using least-squares principles. These principles enable the simultaneous estimation of the associated standard deviations, reflecting the internal consistency based on the accuracy estimates provided for the altimetry data as well as for the terrestrial anomaly data. Several test computations of the anomalies and their accuracy estimates were made using GEOS-3 data.
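For reference, the Stokes integral that such a model builds on, in its standard textbook form (stated from general knowledge, not quoted from the report):

```latex
N(P) = \frac{R}{4\pi\gamma} \iint_{\sigma} \Delta g \; S(\psi) \, d\sigma
```

Here N(P) is the geoid undulation at point P, R the mean Earth radius, gamma normal gravity, Delta g the mean gravity anomaly, and S(psi) Stokes' function of the spherical distance psi; inverting this relation for Delta g from altimetry-derived undulations gives the overdetermined least-squares problem described above.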
Pan-European household and industrial water demand: regional relevant estimations
NASA Astrophysics Data System (ADS)
Bernhard, Jeroen; Reynaud, Arnaud; de Roo, Ad
2016-04-01
Sustainable water management is of high importance for providing adequate quality and quantity of water to European households, industries and agriculture, especially since demographic, economic and climate changes are expected to increase competition for water between these sectors in the future. A shortage of water implies a reduction in the welfare of households or damage to economic sectors. This socio-economic component should be incorporated into the decision-making process when developing water allocation schemes, requiring detailed water use information and cost/benefit functions. We present the results of our study, which is focused on providing regionally relevant pan-European water demand and cost-benefit estimations for the household and industry sectors. We gathered consistent data on water consumption, water prices and other relevant variables at the highest spatial detail available from national statistical offices and other organizational bodies. This database provides the most detailed up-to-date picture of present water use and water prices across Europe. The use of homogeneous data allowed us to compare regions and analyze spatial patterns. We applied econometric methods to determine the main determinants of water demand and to make a monetary valuation of water for both the domestic and industry sectors. This monetary valuation is important to allow water allocation based on economic damage estimates. We also attempted to estimate how population growth, as well as socio-economic and climatic changes, impacts future water demand up to 2050, using a homogeneous method for all countries. European projections for the identified major drivers of water demand were used to simulate future conditions. Subsequently, water demand functions were applied to estimate future water use and the potential economic damage caused by water shortages. We present our results together with an estimation of the uncertainty of our predictions.
Epipolar Consistency in Transmission Imaging.
Aichert, André; Berger, Martin; Wang, Jian; Maass, Nicole; Doerfler, Arnd; Hornegger, Joachim; Maier, Andreas K
2015-11-01
This paper presents the derivation of the Epipolar Consistency Conditions (ECC) between two X-ray images from the Beer-Lambert law of X-ray attenuation and the Epipolar Geometry of two pinhole cameras, using Grangeat's theorem. We motivate the use of Oriented Projective Geometry to express redundant line integrals in projection images and define a consistency metric, which can be used, for instance, to estimate patient motion directly from a set of X-ray images. We describe in detail the mathematical tools to implement an algorithm to compute the Epipolar Consistency Metric and investigate its properties with detailed random studies on both artificial and real FD-CT data. A set of six reference projections of the CT scan of a fish were used to evaluate accuracy and precision of compensating for random disturbances of the ground truth projection matrix using an optimization of the consistency metric. In addition, we use three X-ray images of a pumpkin to prove applicability to real data. We conclude that the metric might have potential in applications related to the estimation of projection geometry. By expressing redundancy between two arbitrary projection views, we in fact support any device or acquisition trajectory that uses a cone-beam geometry. We discuss certain geometric situations, where the ECC provide the ability to correct 3D motion, without the need for 3D reconstruction.
Bounding uncertainty in volumetric geometric models for terrestrial lidar observations of ecosystems
Paynter, Ian; Genest, Daniel; Peri, Francesco; Schaaf, Crystal
2018-04-06
Volumetric models with known biases are shown to provide bounds for the uncertainty in estimations of volume for ecologically interesting objects, observed with a terrestrial laser scanner (TLS) instrument. Bounding cuboids, three-dimensional convex hull polygons, voxels, the Outer Hull Model and Square Based Columns (SBCs) are considered for their ability to estimate the volume of temperate and tropical trees, as well as geomorphological features such as bluffs and saltmarsh creeks. For temperate trees, supplementary geometric models are evaluated for their ability to bound the uncertainty in cylinder-based reconstructions, finding that coarser volumetric methods do not currently constrain volume meaningfully, but may be helpful with further refinement, or in hybridized models. Three-dimensional convex hull polygons consistently overestimate object volume, and SBCs consistently underestimate volume. Voxel estimations vary in their bias, due to the point density of the TLS data, and occlusion, particularly in trees. The response of the models to parametrization is analysed, observing unexpected trends in the SBC estimates for the drumlin dataset. Establishing that this result is due to the resolution of the TLS observations being insufficient to support the resolution of the geometric model, it is suggested that geometric models with predictable outcomes can also highlight data quality issues when they produce illogical results. PMID:29503722
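Of the models compared, the voxel estimator is the simplest to sketch: floor the point coordinates to a grid of edge length h and count occupied cells. The demo data are synthetic; as the abstract notes, the sign and size of the bias depend on point density and occlusion.

```python
import numpy as np

def voxel_volume(points, h):
    """Volume estimate from occupied voxels of edge length h (same units
    as the point coordinates). Coarse voxels inflate the estimate;
    unsampled interiors (occlusion) deflate it."""
    occupied = np.unique(np.floor(points / h).astype(int), axis=0)
    return occupied.shape[0] * h**3

# Densely sampled unit cube: estimates approach 1.0 and grow with h.
pts = np.random.default_rng(0).random((100_000, 3))
for h in (0.05, 0.1, 0.2):
    print(h, voxel_volume(pts, h))
```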
Boutin, Claude; Geindreau, Christian
2010-09-01
This paper presents a study of transport parameters (diffusion, dynamic permeability, thermal permeability, trapping constant) of porous media by combining the homogenization of periodic media (HPM) and the self-consistent scheme (SCM) based on a bicomposite spherical pattern. The link between the HPM and SCM approaches is first established by using a systematic argument independent of the problem under consideration. It is shown that the periodicity condition can be replaced by zero flux and energy through the whole surface of the representative elementary volume. Consequently the SCM solution can be considered as a geometrical approximation of the local problem derived through HPM for materials such that the morphology of the period is "close" to the SCM pattern. These results are then applied to derive the estimates of the effective diffusion, the dynamic permeability, the thermal permeability and the trapping constant of porous media. These SCM estimates are compared with numerical HPM results obtained on periodic arrays of spheres and polyhedrons. It is shown that SCM estimates provide good analytical approximations of the effective parameters for periodic packings of spheres at porosities larger than 0.6, while the agreement is excellent for periodic packings of polyhedrons in the whole range of porosity.
Xian, George Z.; Homer, Collin G.; Rigge, Matthew B.; Shi, Hua; Meyer, Debbie
2015-01-01
Accurate and consistent estimates of shrubland ecosystem components are crucial to a better understanding of ecosystem conditions in arid and semiarid lands. An innovative approach was developed by integrating multiple sources of information to quantify shrubland components as continuous field products within the National Land Cover Database (NLCD). The approach consists of several procedures including field sample collections, high-resolution mapping of shrubland components using WorldView-2 imagery and regression tree models, Landsat 8 radiometric balancing and phenological mosaicking, medium resolution estimates of shrubland components following different climate zones using Landsat 8 phenological mosaics and regression tree models, and product validation. Fractional covers of nine shrubland components were estimated: annual herbaceous, bare ground, big sagebrush, herbaceous, litter, sagebrush, shrub, sagebrush height, and shrub height. Our study area included the footprint of six Landsat 8 scenes in the northwestern United States. Results show that most components have relatively significant correlations with validation data, have small normalized root mean square errors, and correspond well with expected ecological gradients. While some uncertainties remain with height estimates, the model formulated in this study provides a cross-validated, unbiased, and cost effective approach to quantify shrubland components at a regional scale and advances knowledge of horizontal and vertical variability of these components.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rodriguez-Garcia, G., E-mail: gonzalo.rodriguez.garcia@usc.es; Hospido, A., E-mail: almudena.hospido@usc.es; Bagley, D.M., E-mail: bagley@uwyo.edu
2012-11-15
The main objective of this paper is to present the Direct Emissions Estimation Model (DEEM), a model for the estimation of CO2 and N2O emissions from a wastewater treatment plant (WWTP). This model is consistent with non-specific but widely used models such as AS/AD and ASM no. 1 and presents the benefits of simplicity and application over a common WWTP simulation platform, BioWin®, making it suitable for Life Cycle Assessment and Carbon Footprint studies. Its application in a Spanish WWTP indicates direct N2O emissions to be 8 times larger than those associated with electricity use and thus relevant for LCA. CO2 emissions can be of similar importance to electricity-associated ones provided that 20% of them are of non-biogenic origin. Highlights: a model has been developed for the estimation of GHG emissions in WWTPs; the model is consistent with both ASM no. 1 and AS/AD; N2O emissions are 8 times more relevant than those associated with electricity; CO2 emissions are as important as electricity-associated ones if 20% of them are non-biogenic.
Komatsu, Misako; Namikawa, Jun; Chao, Zenas C; Nagasaka, Yasuo; Fujii, Naotaka; Nakamura, Kiyohiko; Tani, Jun
2014-01-01
Many previous studies have proposed methods for quantifying neuronal interactions. However, these methods evaluated the interactions between recorded signals in an isolated network. In this study, we present a novel approach for estimating interactions between observed neuronal signals by theorizing that those signals are observed from only a part of the network that also includes unobserved structures. We propose a variant of the recurrent network model that consists of both observable and unobservable units. The observable units represent recorded neuronal activity, and the unobservable units are introduced to represent activity from unobserved structures in the network. The network structures are characterized by connective weights, i.e., the interaction intensities between individual units, which are estimated from recorded signals. We applied this model to multi-channel brain signals recorded from monkeys, and obtained robust network structures with physiological relevance. Furthermore, the network exhibited common features that portrayed cortical dynamics as inversely correlated interactions between excitatory and inhibitory populations of neurons, which are consistent with the previous view of cortical local circuits. Our results suggest that the novel concept of incorporating an unobserved structure into network estimations has theoretical advantages and could provide insights into brain dynamics beyond what can be directly observed. Copyright © 2014 Elsevier Ireland Ltd and the Japan Neuroscience Society. All rights reserved.
NASA Astrophysics Data System (ADS)
Ichii, Kazuhito; Ueyama, Masahito; Kondo, Masayuki; Saigusa, Nobuko; Kim, Joon; Alberto, Ma. Carmelita; Ardö, Jonas; Euskirchen, Eugénie S.; Kang, Minseok; Hirano, Takashi; Joiner, Joanna; Kobayashi, Hideki; Marchesini, Luca Belelli; Merbold, Lutz; Miyata, Akira; Saitoh, Taku M.; Takagi, Kentaro; Varlagin, Andrej; Bret-Harte, M. Syndonia; Kitamura, Kenzo; Kosugi, Yoshiko; Kotani, Ayumi; Kumar, Kireet; Li, Sheng-Gong; Machimura, Takashi; Matsuura, Yojiro; Mizoguchi, Yasuko; Ohta, Takeshi; Mukherjee, Sandipan; Yanagi, Yuji; Yasuda, Yukio; Zhang, Yiping; Zhao, Fenghua
2017-04-01
The lack of a standardized database of eddy covariance observations has been an obstacle for data-driven estimation of terrestrial CO2 fluxes in Asia. In this study, we developed such a standardized database using 54 sites from various databases by applying consistent postprocessing for data-driven estimation of gross primary productivity (GPP) and net ecosystem CO2 exchange (NEE). Data-driven estimation was conducted using a machine learning algorithm, support vector regression (SVR), with remote sensing data for the 2000 to 2015 period. Site-level evaluation of the estimated CO2 fluxes shows that although performance varies across vegetation and climate classifications, 8-day GPP and NEE are reproduced (r2 = 0.73 and 0.42, respectively). Evaluation of spatially estimated GPP against Global Ozone Monitoring Experiment 2 sensor-based Sun-induced chlorophyll fluorescence shows that monthly GPP variations at subcontinental scale were reproduced by SVR (r2 = 1.00, 0.94, 0.91, and 0.89 for Siberia, East Asia, South Asia, and Southeast Asia, respectively). Evaluation of spatially estimated NEE against net atmosphere-land CO2 fluxes from the Greenhouse Gases Observing Satellite (GOSAT) Level 4A product shows that monthly variations were consistent in Siberia and East Asia, whereas inconsistency was found in South Asia and Southeast Asia. Furthermore, differences in the land CO2 fluxes from SVR-NEE and GOSAT Level 4A were partially explained by accounting for the differences in the definition of land CO2 fluxes. These data-driven estimates provide a new opportunity to assess CO2 fluxes in Asia and to evaluate and constrain terrestrial ecosystem models.
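To make the upscaling step concrete, a minimal sketch of fitting an SVR to tower fluxes and cross-validating it is given below; the predictor set, the synthetic data, and all hyperparameters are illustrative assumptions rather than the study's actual inputs.

```python
# Minimal sketch of the data-driven upscaling step: fit a support vector
# regression (SVR) that maps remote-sensing predictors to tower-observed GPP.
# Variable names and the synthetic data are illustrative, not the study's.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((500, 3))            # e.g., NDVI, land surface temperature, shortwave radiation
gpp = 10 * X[:, 0] + 2 * X[:, 1] + rng.normal(0, 0.5, 500)  # synthetic 8-day GPP

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
r2 = cross_val_score(model, X, gpp, cv=5, scoring="r2")
print(f"cross-validated r2: {r2.mean():.2f}")
model.fit(X, gpp)                   # the fitted model is then applied to gridded predictors
```

Once trained at the site level, the same fitted model is evaluated spatially against independent satellite products, as the abstract describes.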
Heritability estimates on resting state fMRI data using ENIGMA analysis pipeline.
Adhikari, Bhim M; Jahanshad, Neda; Shukla, Dinesh; Glahn, David C; Blangero, John; Reynolds, Richard C; Cox, Robert W; Fieremans, Els; Veraart, Jelle; Novikov, Dmitry S; Nichols, Thomas E; Hong, L Elliot; Thompson, Paul M; Kochunov, Peter
2018-01-01
Big data initiatives such as the Enhancing NeuroImaging Genetics through Meta-Analysis consortium (ENIGMA) combine data collected by independent studies worldwide to achieve more generalizable estimates of effect sizes and more reliable and reproducible outcomes. Such efforts require harmonized image analysis protocols to extract phenotypes consistently. This harmonization is particularly challenging for resting state fMRI due to the wide variability of acquisition protocols and scanner platforms; this leads to site-to-site variance in quality, resolution, and temporal signal-to-noise ratio (tSNR). An effective harmonization should provide optimal measures for data of different qualities. We developed a multi-site rsfMRI analysis pipeline to allow research groups around the world to process rsfMRI scans in a harmonized way, to extract consistent and quantitative measurements of connectivity, and to perform coordinated statistical tests. We used the single-modality ENIGMA rsfMRI preprocessing pipeline, based on model-free Marchenko-Pastur PCA denoising, to verify and replicate resting state network heritability estimates. We analyzed two independent cohorts, GOBS (Genetics of Brain Structure) and HCP (the Human Connectome Project), which collected data using conventional and connectomics-oriented fMRI protocols, respectively. We used seed-based connectivity and dual-regression approaches to show that the rsfMRI signal is consistently heritable across twenty major functional network measures. Heritability values of 20-40% were observed across both cohorts.
NASA Astrophysics Data System (ADS)
Yang, Fan; Lu, Hui; Yang, Kun; He, Jie; Wang, Wei; Wright, Jonathon S.; Li, Chengwei; Han, Menglei; Li, Yishan
2017-11-01
Precipitation and shortwave radiation play important roles in climatic, hydrological and biogeochemical cycles. Several global and regional forcing data sets currently provide historical estimates of these two variables over China, including the Global Land Data Assimilation System (GLDAS), the China Meteorological Administration (CMA) Land Data Assimilation System (CLDAS) and the China Meteorological Forcing Dataset (CMFD). The CN05.1 precipitation data set, a gridded analysis based on CMA gauge observations, also provides high-resolution historical precipitation data for China. In this study, we present an intercomparison of precipitation and shortwave radiation data from CN05.1, CMFD, CLDAS and GLDAS during 2008-2014. We also validate all four data sets against independent ground station observations. All four forcing data sets capture the spatial distribution of precipitation over major land areas of China, although CLDAS indicates smaller annual-mean precipitation amounts than CN05.1, CMFD or GLDAS. Time series of precipitation anomalies are largely consistent among the data sets, except for a sudden decrease in CMFD after August 2014. All forcing data indicate greater temporal variations relative to the mean in dry regions than in wet regions. Validation against independent precipitation observations provided by the Ministry of Water Resources (MWR) in the middle and lower reaches of the Yangtze River indicates that CLDAS provides the most realistic estimates of spatiotemporal variability in precipitation in this region. CMFD also performs well with respect to annual mean precipitation, while GLDAS fails to accurately capture much of the spatiotemporal variability and CN05.1 contains significant high biases relative to the MWR observations. Estimates of shortwave radiation from CMFD are largely consistent with station observations, while CLDAS and GLDAS greatly overestimate shortwave radiation. All three forcing data sets capture the key features of the spatial distribution, but estimates from CLDAS and GLDAS are systematically higher than those from CMFD over most of mainland China. Based on our evaluation metrics, CLDAS slightly outperforms GLDAS. CLDAS is also closer than GLDAS to CMFD with respect to temporal variations in shortwave radiation anomalies, with substantial differences among the time series. Differences in temporal variations are especially pronounced south of 34° N. Our findings provide valuable guidance for a variety of stakeholders, including land-surface modelers and data providers.
NASA Astrophysics Data System (ADS)
D'Ambrosio, Raffaele; Moccaldi, Martina; Paternoster, Beatrice
2018-05-01
In this paper, an adapted numerical scheme for reaction-diffusion problems generating periodic wavefronts is introduced. Adapted numerical methods for such evolutionary problems are specially tuned to follow prescribed qualitative behaviors of the solutions, making the numerical scheme more accurate and efficient than traditional schemes known in the literature. Adaptation through the so-called exponential fitting technique leads to methods whose coefficients depend on unknown parameters related to the dynamics, which must be estimated numerically. Here we propose a strategy for a cheap and accurate estimation of such parameters, which consists essentially in minimizing the leading term of the local truncation error, whose expression is provided by a rigorous accuracy analysis. In particular, the presented estimation technique has been applied to a numerical scheme based on combining an adapted finite difference discretization in space with an implicit-explicit time discretization. Numerical experiments confirming the effectiveness of the approach are also provided.
A Computer Code for Gas Turbine Engine Weight And Disk Life Estimation
NASA Technical Reports Server (NTRS)
Tong, Michael T.; Ghosn, Louis J.; Halliwell, Ian; Wickenheiser, Tim (Technical Monitor)
2002-01-01
Reliable engine-weight estimation at the conceptual design stage is critical to the development of new aircraft engines. It helps to identify the best engine concept amongst several candidates. In this paper, the major enhancements to NASA's engine-weight estimate computer code (WATE) are described. These enhancements include the incorporation of improved weight-calculation routines for the compressor and turbine disks using the finite-difference technique. Furthermore, the stress distribution for various disk geometries was also incorporated, for a life-prediction module to calculate disk life. A material database, consisting of the material data of most of the commonly-used aerospace materials, has also been incorporated into WATE. Collectively, these enhancements provide a more realistic and systematic way to calculate the engine weight. They also provide additional insight into the design trade-off between engine life and engine weight. To demonstrate the new capabilities, the enhanced WATE code is used to perform an engine weight/life trade-off assessment on a production aircraft engine.
Discriminative parameter estimation for random walks segmentation.
Baudin, Pierre-Yves; Goodman, Danny; Kumar, Puneet; Azzabou, Noura; Carlier, Pierre G; Paragios, Nikos; Kumar, M Pawan
2013-01-01
The Random Walks (RW) algorithm is one of the most efficient and easy-to-use probabilistic segmentation methods. By combining contrast terms with prior terms, it provides accurate segmentations of medical images in a fully automated manner. However, one of the main drawbacks of using the RW algorithm is that its parameters have to be hand-tuned. In this work, we propose a novel discriminative learning framework that estimates the parameters using a training dataset. The main challenge we face is that the training samples are not fully supervised. Specifically, they provide a hard segmentation of the images instead of a probabilistic segmentation. We overcome this challenge by treating the optimal probabilistic segmentation that is compatible with the given hard segmentation as a latent variable. This allows us to employ the latent support vector machine formulation for parameter estimation. We show that our approach significantly outperforms the baseline methods on a challenging dataset consisting of real clinical 3D MRI volumes of skeletal muscles.
Large historical growth in global terrestrial gross primary production
Campbell, J. E.; Berry, J. A.; Seibt, U.; ...
2017-04-05
Growth in terrestrial gross primary production (GPP) may provide a negative feedback for climate change. It remains uncertain, however, to what extent biogeochemical processes can suppress global GPP growth. In consequence, model estimates of terrestrial carbon storage and carbon cycle-climate feedbacks remain poorly constrained. Here we present a global, measurement-based estimate of GPP growth during the twentieth century based on long-term atmospheric carbonyl sulphide (COS) records derived from ice core, firn, and ambient air samples. We interpret these records using a model that simulates changes in COS concentration due to changes in its sources and sinks, including a large sink that is related to GPP. We find that the COS record is most consistent with climate-carbon cycle model simulations that assume large GPP growth during the twentieth century (31% ± 5%; mean ± 95% confidence interval). Finally, while this COS analysis does not directly constrain estimates of future GPP growth, it provides a global-scale benchmark for historical carbon cycle simulations.
NASA Astrophysics Data System (ADS)
Zaccheo, T. S.; Pernini, T.; Botos, C.; Dobler, J. T.; Blume, N.; Braun, M.; Levine, Z. H.; Pintar, A. L.
2014-12-01
This work presents a methodology for constructing 2D estimates of CO2 field concentrations from integrated open path measurements of CO2 concentrations. It provides a description of the methodology, an assessment based on simulated data and results from preliminary field trials. The Greenhouse gas Laser Imaging Tomography Experiment (GreenLITE) system, currently under development by Exelis and AER, consists of a set of laser-based transceivers and a number of retro-reflectors coupled with a cloud-based compute environment to enable real-time monitoring of integrated CO2 path concentrations, and provides 2D maps of estimated concentrations over an extended area of interest. The GreenLITE transceiver-reflector pairs provide laser absorption spectroscopy (LAS) measurements of differential absorption due to CO2 along intersecting chords within the field of interest. These differential absorption values for the intersecting chords of horizontal path are not only used to construct estimated values of integrated concentration, but also employed in an optimal estimation technique to derive 2D maps of underlying concentration fields. This optimal estimation technique combines these sparse data with in situ measurements of wind speed/direction and an analytic plume model to provide tomographic-like reconstruction of the field of interest. This work provides an assessment of this reconstruction method and preliminary results from the Fall 2014 testing at the Zero Emissions Research and Technology (ZERT) site in Bozeman, Montana. This work is funded in part under the GreenLITE program developed under a cooperative agreement between Exelis and the National Energy and Technology Laboratory (NETL) under the Department of Energy (DOE), contract # DE-FE0012574. Atmospheric and Environmental Research, Inc. is a major partner in this development.
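The reconstruction step can be illustrated with a toy version of the underlying inverse problem: each transceiver-reflector chord measures a line integral of the concentration field over the grid cells it crosses, and a regularized least-squares inversion recovers a 2D field. The grid, chord geometry, and Tikhonov regularization below are assumptions for illustration, not the GreenLITE algorithm, which additionally folds in wind data and an analytic plume model.

```python
# Illustrative reconstruction of a 2D concentration field from path-integrated
# chord measurements, in the spirit of the tomographic approach described above.
# The geometry, grid, and regularization here are assumptions, not GreenLITE's.
import numpy as np

n = 10                                   # n x n grid of unknown concentrations
rng = np.random.default_rng(1)
true_field = np.outer(np.hanning(n), np.hanning(n)).ravel()  # synthetic plume

# Build a path matrix: each row marks the cells intersected by one chord (toy geometry).
n_chords = 40
A = (rng.random((n_chords, n * n)) < 0.1).astype(float)

y = A @ true_field + rng.normal(0, 0.01, n_chords)   # integrated measurements

# Tikhonov-regularized least squares: argmin ||A x - y||^2 + lam ||x||^2
lam = 0.1
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n * n), A.T @ y)
print("reconstruction RMSE:", np.sqrt(np.mean((x_hat - true_field) ** 2)))
```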
Structural concepts for large solar concentrators
NASA Technical Reports Server (NTRS)
Hedgepeth, J. M.; Miller, R. K.
1986-01-01
Solar collectors for space use are examined, including both early designs and current concepts. In particular, attention is given to stiff sandwich panels and aluminum dishes as well as inflated and umbrella-type membrane configurations. The Sunflower concentrator is described as an example of a high-efficiency collector. It is concluded that stiff reflector panels are most likely to provide the long-term consistent accuracy necessary for low-orbit operation. A new configuration consisting of a Pactruss backup structure, with identical panels installed after deployment in space, is presented. It is estimated that concentration ratios in excess of 2000 can be achieved with this concept.
Quark matter droplets in neutron stars
NASA Technical Reports Server (NTRS)
Heiselberg, H.; Pethick, C. J.; Staubo, E. F.
1993-01-01
We show that, for physically reasonable bulk and surface properties, the lowest energy state of dense matter consists of quark matter coexisting with nuclear matter in the presence of an essentially uniform background of electrons. We estimate the size and nature of spatial structure in this phase, and show that at the lowest densities the quark matter forms droplets embedded in nuclear matter, whereas at higher densities it can exhibit a variety of different topologies. A finite fraction of the interior of neutron stars could consist of matter in this new phase, which would provide new mechanisms for glitches and cooling.
Income convergence in a rural, majority African American region
Buddhi Gyawali; Rory Fraser; James Bukenya; John Schelhas
2008-01-01
This paper revisits the issue of income convergence by examining the question of whether poorer Census Block Groups have been catching up with wealthier Census Block Groups over the 1980-2000 period. The dataset consists of 161 Census Block Groups in Alabama's west-central Black Belt region. Estimates of a spatial lag model provide support for the conditional...
Reanalysis of Water, Land Use, and Production Data for Assessing China's Agricultural Resources
NASA Astrophysics Data System (ADS)
Smith, T.; Pan, J.; McLaughlin, D.
2016-12-01
Quantitative data about water availability, crop evapotranspiration (ET), agricultural land use, and production are needed at high temporal and spatial resolutions to develop sustainable water and agricultural plans and policies. However, large-scale high-resolution measured data can be susceptible to errors, physically inconsistent, or incomplete. Reanalysis provides a way to develop improved, physically consistent estimates of both measured and hidden variables. The reanalysis approach described here uses a least-squares technique constrained by water balances and crop water requirements to assimilate many possibly redundant data sources, yielding estimates of water, land use, and food production variables that are physically consistent while minimizing differences from measured data. As an example, this methodology is applied in China, where food demand is expected to increase but land and water resources could constrain further increases in food production. Hydrologic fluxes, crop ET, agricultural land use, yields, and food production are characterized at 0.5° by 0.5° resolution for a nominal year around 2000 for 22 different crop groups. The reanalysis approach provides useful information for resource management and policy, both in China and around the world.
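The core reconciliation step can be sketched as a small worked example: adjust measured values as little as possible, in the least-squares sense, while forcing an exact water balance. The toy balance and numbers below are illustrative assumptions, not values from the China application.

```python
# Minimal sketch of the reanalysis idea: adjust measured variables as little as
# possible (least squares) while forcing them to satisfy a physical balance.
# The toy balance (precipitation = ET + runoff + storage change) and the
# numbers are illustrative assumptions.
import numpy as np

m = np.array([100.0, 55.0, 30.0, 10.0])   # measured: P, ET, runoff, dS (mm/yr)
C = np.array([[1.0, -1.0, -1.0, -1.0]])   # water balance: P - ET - Q - dS = 0
b = np.array([0.0])

# Closed-form solution of min ||x - m||^2 subject to C x = b (Lagrange multipliers)
x = m + C.T @ np.linalg.solve(C @ C.T, b - C @ m)
print("reconciled values:", x)            # balance now holds exactly
print("balance residual:", C @ x - b)
```

The same idea extends to weighted least squares, where better-measured variables receive larger penalties against adjustment.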
Water Quality Sensing and Spatio-Temporal Monitoring Structure with Autocorrelation Kernel Methods.
Vizcaíno, Iván P; Carrera, Enrique V; Muñoz-Romero, Sergio; Cumbal, Luis H; Rojo-Álvarez, José Luis
2017-10-16
Pollution of water resources is usually analyzed with monitoring campaigns, which consist of programmed sampling, measurement, and recording of the most representative water quality parameters. These campaign measurements yield a non-uniformly sampled spatio-temporal data structure for characterizing complex dynamic phenomena. In this work, we propose an enhanced statistical interpolation method to provide water quality managers with statistically interpolated representations of spatial-temporal dynamics. Specifically, our proposal makes efficient use of the a priori available information on the quality parameter measurements through Support Vector Regression (SVR) based on Mercer's kernels. The methods are benchmarked against previously proposed methods in three segments of the Machángara River and one segment of the San Pedro River in Ecuador, and their different dynamics are shown by statistically interpolated spatial-temporal maps. The best interpolation performance in terms of mean absolute error was obtained by the SVR with Mercer's kernel given by either the Mahalanobis spatial-temporal covariance matrix or by the bivariate estimated autocorrelation function. In particular, the autocorrelation kernel provides a significant improvement in estimation quality, consistently for all six water quality variables, which points out the relevance of including a priori knowledge of the problem.
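A minimal sketch of the modelling idea, fitting an SVR whose Mercer kernel encodes spatio-temporal correlation structure, might look as follows; the separable Gaussian kernel stands in for the estimated autocorrelation function, and all data and length scales are invented for illustration.

```python
# Sketch of SVR with a data-driven kernel, loosely following the idea of using
# an estimated spatio-temporal autocorrelation as the Mercer kernel. The
# Gaussian form and length scales below are stand-ins for the estimated
# autocorrelation function, not the paper's exact kernel.
import numpy as np
from sklearn.svm import SVR

def autocorr_kernel(X, Y, length_scales=(1000.0, 7.0)):
    """Separable space-time kernel: columns of X are (distance_m, time_days)."""
    d = X[:, None, :] - Y[None, :, :]
    return np.exp(-0.5 * np.sum((d / np.asarray(length_scales)) ** 2, axis=-1))

rng = np.random.default_rng(2)
X = np.column_stack([rng.uniform(0, 5000, 200),    # position along river (m)
                     rng.uniform(0, 60, 200)])     # sampling day
y = np.sin(X[:, 0] / 800) + 0.1 * rng.normal(size=200)  # synthetic quality parameter

svr = SVR(kernel=autocorr_kernel, C=10.0, epsilon=0.05).fit(X, y)
grid = np.column_stack([np.linspace(0, 5000, 50), np.full(50, 30.0)])
print(svr.predict(grid)[:5])   # interpolated map values along the river at day 30
```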
A Portuguese value set for the SF-6D.
Ferreira, Lara N; Ferreira, Pedro L; Pereira, Luis N; Brazier, John; Rowen, Donna
2010-08-01
The SF-6D is a preference-based measure of health derived from the SF-36 that can be used for cost-effectiveness analysis using cost-per-quality-adjusted-life-year analysis. This study seeks to estimate a set of preference weights for the SF-6D for Portugal and to compare the results with the UK weights. A sample of 55 health states defined by the SF-6D was valued by a representative random sample of the Portuguese population, stratified by sex and age (n = 140), using the Standard Gamble (SG). Several models are estimated at both the individual and aggregate levels for predicting health-state valuations. Models with main effects, with interaction effects, and with the constant forced to unity are presented. Random effects (RE) models are estimated using generalized least squares (GLS) regressions. Generalized estimating equations (GEE) are used to estimate RE models with the constant forced to unity. Estimations at the individual level were performed using 630 health-state valuations. Alternative functional forms are considered to account for the skewed distribution of health-state valuations. The models are analyzed in terms of their coefficients, overall fit, and ability to predict the SG values. The RE models estimated using GLS and through GEE produce significant coefficients, which are robust across model specifications. However, there are concerns regarding some inconsistent estimates, and so parsimonious consistent models were estimated. There is evidence of underprediction in some states assigned to poor health. The results are consistent with the UK results. The models estimated provide preference-based quality-of-life weights for the Portuguese population when health status data have been collected using the SF-36. Although the sample was randomly drawn, findings should be treated with caution given the small sample size, even though models were estimated at the individual level.
Raebel, Marsha A; Schmittdiel, Julie; Karter, Andrew J; Konieczny, Jennifer L; Steiner, John F
2013-08-01
To propose a unifying set of definitions for prescription adherence research utilizing electronic health record prescribing databases, prescription dispensing databases, and pharmacy claims databases and to provide a conceptual framework to operationalize these definitions consistently across studies. We reviewed recent literature to identify definitions in electronic database studies of prescription-filling patterns for chronic oral medications. We then develop a conceptual model and propose standardized terminology and definitions to describe prescription-filling behavior from electronic databases. The conceptual model we propose defines 2 separate constructs: medication adherence and persistence. We define primary and secondary adherence as distinct subtypes of adherence. Metrics for estimating secondary adherence are discussed and critiqued, including a newer metric (New Prescription Medication Gap measure) that enables estimation of both primary and secondary adherence. Terminology currently used in prescription adherence research employing electronic databases lacks consistency. We propose a clear, consistent, broadly applicable conceptual model and terminology for such studies. The model and definitions facilitate research utilizing electronic medication prescribing, dispensing, and/or claims databases and encompasses the entire continuum of prescription-filling behavior. Employing conceptually clear and consistent terminology to define medication adherence and persistence will facilitate future comparative effectiveness research and meta-analytic studies that utilize electronic prescription and dispensing records.
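As a concrete (and deliberately simplified) illustration of operationalizing one secondary-adherence metric from dispensing records, the sketch below computes a proportion of days covered (PDC) for a single patient and drug; the record layout and dates are hypothetical, and the New Prescription Medication Gap measure discussed above is more involved than this.

```python
# Toy sketch of computing secondary adherence from dispensing records via the
# proportion of days covered (PDC). The records and the observation window are
# hypothetical; real implementations must also handle overlapping fills,
# switches, and censoring.
from datetime import date

fills = [  # (dispense_date, days_supply) for one patient and one drug
    (date(2023, 1, 1), 30),
    (date(2023, 2, 5), 30),
    (date(2023, 3, 20), 30),
]

start, end = fills[0][0], date(2023, 4, 30)
observation_days = (end - start).days + 1

covered = set()
for dispensed, supply in fills:
    for offset in range(supply):
        day = dispensed.toordinal() + offset
        if start.toordinal() <= day <= end.toordinal():
            covered.add(day)          # count each calendar day at most once

pdc = len(covered) / observation_days
print(f"PDC over {observation_days} days: {pdc:.2f}")   # e.g., adherent if PDC >= 0.80
```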
Schriger, David L; Menchine, Michael; Wiechmann, Warren; Carmelli, Guy
2018-04-20
We conducted this study to better understand how emergency physicians estimate risk and make admission decisions for patients with low-risk chest pain. We created a Web-based survey consisting of 5 chest pain scenarios that included history, physical examination, ECG findings, and basic laboratory studies, including a negative initial troponin-level result. We administered the scenarios in random order to emergency medicine residents and faculty at 11 US emergency medicine residency programs. We randomized respondents to receive questions about 1 of 2 endpoints, acute coronary syndrome or serious complication (death, dysrhythmia, or congestive heart failure within 30 days). For each scenario, the respondent provided a quantitative estimate of the probability of the endpoint, a qualitative estimate of the risk of the endpoint (very low, low, moderate, high, or very high), and an admission decision. Respondents also provided demographic information and completed a 3-item Fear of Malpractice scale. Two hundred eight (65%) of 320 eligible physicians completed the survey, 73% of whom were residents. Ninety-five percent of respondents were wholly consistent (no admitted patient was assigned a lower probability than a discharged patient). For individual scenarios, probability estimates covered at least 4 orders of magnitude; admission rates for scenarios varied from 16% to 99%. The majority of respondents (>72%) had admission thresholds at or below a 1% probability of acute coronary syndrome. Respondents did not fully differentiate the probability of acute coronary syndrome and serious outcome; for each scenario, estimates for the two were quite similar despite a serious outcome being far less likely. Raters used the terms "very low risk" and "low risk" only when their probability estimates were less than 1%. The majority of respondents considered any probability greater than 1% for acute coronary syndrome or serious outcome to be at least moderate risk and warranting admission. Physicians used qualitative terms in ways fundamentally different from how they are used in ordinary conversation, which may lead to miscommunication during shared decisionmaking processes. These data suggest that probability or utility models are inadequate to describe physician decisionmaking for patients with chest pain. Copyright © 2018 American College of Emergency Physicians. Published by Elsevier Inc. All rights reserved.
Personalized recommendation based on unbiased consistence
NASA Astrophysics Data System (ADS)
Zhu, Xuzhen; Tian, Hui; Zhang, Ping; Hu, Zheng; Zhou, Tao
2015-08-01
Recently, in physical dynamics, mass-diffusion-based recommendation algorithms on bipartite network provide an efficient solution by automatically pushing possible relevant items to users according to their past preferences. However, traditional mass-diffusion-based algorithms just focus on unidirectional mass diffusion from objects having been collected to those which should be recommended, resulting in a biased causal similarity estimation and not-so-good performance. In this letter, we argue that in many cases, a user's interests are stable, and thus bidirectional mass diffusion abilities, no matter originated from objects having been collected or from those which should be recommended, should be consistently powerful, showing unbiased consistence. We further propose a consistence-based mass diffusion algorithm via bidirectional diffusion against biased causality, outperforming the state-of-the-art recommendation algorithms in disparate real data sets, including Netflix, MovieLens, Amazon and Rate Your Music.
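For reference, a minimal sketch of the unidirectional baseline, mass diffusion (ProbS) on a toy user-item bipartite network, is given below; the adjacency matrix is invented, and the bidirectional, consistence-based variant proposed in the letter is not reproduced here.

```python
# Minimal sketch of unidirectional mass diffusion (ProbS) on a user-item
# bipartite network, the baseline that the letter argues should be made
# bidirectional. The adjacency matrix is a toy example.
import numpy as np

A = np.array([[1, 1, 0, 0],      # rows: users, cols: items (1 = collected)
              [1, 0, 1, 0],
              [0, 1, 1, 1]], dtype=float)

k_item = A.sum(axis=0)           # item degrees
k_user = A.sum(axis=1)           # user degrees

# Step 1: items -> users (each item splits its unit resource among its users)
# Step 2: users -> items (each user splits the received resource among items)
W = (A / k_user[:, None]).T @ (A / k_item[None, :])   # item-to-item transfer matrix

target_user = 0
scores = W @ A[target_user]      # diffuse resource from the user's collected items
scores[A[target_user] > 0] = -np.inf                  # mask already-collected items
print("recommended item:", int(np.argmax(scores)))
```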
A non-parametric consistency test of the ΛCDM model with Planck CMB data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aghamousa, Amir; Shafieloo, Arman; Hamann, Jan, E-mail: amir@aghamousa.com, E-mail: jan.hamann@unsw.edu.au, E-mail: shafieloo@kasi.re.kr
Non-parametric reconstruction methods, such as Gaussian process (GP) regression, provide a model-independent way of estimating an underlying function and its uncertainty from noisy data. We demonstrate how GP reconstruction can be used as a consistency test between a given data set and a specific model by looking for structures in the residuals of the data with respect to the model's best fit. Applying this formalism to the Planck temperature and polarisation power spectrum measurements, we test their global consistency with the predictions of the base ΛCDM model. Our results do not show any serious inconsistencies, lending further support to the interpretation of the base ΛCDM model as cosmology's gold standard.
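A toy version of the test can be sketched as follows: fit a GP with a smooth kernel plus white noise to the residuals about a model's best fit, and inspect whether the reconstructed mean shows structure beyond the GP's own uncertainty. The data and kernel choices are illustrative assumptions, not the Planck analysis.

```python
# Sketch of the residual-based consistency test: fit a Gaussian process to the
# residuals of data about a model's best fit and check whether the GP prefers
# nonzero structure. Data and kernel are toy assumptions, not the Planck spectra.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(3)
x = np.linspace(0, 10, 200)[:, None]
residuals = rng.normal(0, 1.0, 200)       # data minus best-fit model (consistent case)

kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=1.0)
gp = GaussianProcessRegressor(kernel=kernel).fit(x, residuals)

mean = gp.predict(x)
print(gp.kernel_)                          # fitted hyperparameters
print("max |reconstructed structure|:", np.abs(mean).max())
# Reconstructed structure significantly exceeding the GP's own uncertainty band
# would indicate tension between the data and the model.
```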
Devleesschauwer, Brecht; Aspinall, Willy; Cooke, Roger; Corrigan, Tim; Havelaar, Arie; Angulo, Frederick; Gibb, Herman; Kirk, Martyn; Lake, Robin; Speybroeck, Niko; Torgerson, Paul; Hald, Tine
2017-01-01
Background Recently the World Health Organization, Foodborne Disease Burden Epidemiology Reference Group (FERG) estimated that 31 foodborne diseases (FBDs) resulted in over 600 million illnesses and 420,000 deaths worldwide in 2010. Knowing the relative importance of different foods as exposure routes for key hazards is critical to preventing illness. This study reports the findings of a structured expert elicitation providing globally comparable food source attribution estimates for 11 major FBDs in each of 14 world subregions. Methods and findings We used Cooke's Classical Model to elicit and aggregate judgments of 73 international experts. Judgments were elicited from each expert individually and aggregated using both equal and performance weights. Performance-weighted results are reported, as they increased the informativeness of estimates while retaining accuracy. We report measures of central tendency and uncertainty bounds on food source attribution estimates. For some pathogens we see relatively consistent food source attribution estimates across subregions of the world; for others there is substantial regional variation. For example, for non-typhoidal salmonellosis, pork was of minor importance compared to eggs and poultry meat in the American and African subregions, whereas in the European and Western Pacific subregions the importance of these three food sources was quite similar. Our regional results broadly agree with estimates from earlier European and North American food source attribution research. As in prior food source attribution research, we find relatively wide uncertainty bounds around our median estimates. Conclusions We present the first worldwide estimates of the proportion of specific foodborne diseases attributable to specific food exposure routes. While we find substantial uncertainty around central tendency estimates, we believe these estimates provide the best currently available basis on which to link FBDs and specific foods in many parts of the world, providing guidance for policy actions to control FBDs. PMID:28910293
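The aggregation idea behind Cooke's Classical Model can be caricatured in a few lines: each expert's distribution is pooled with a weight derived from performance on calibration questions. The sketch below is a loose illustration with invented weights and distributions; the actual model scores experts by statistical calibration and informativeness.

```python
# Toy sketch of performance-weighted aggregation in the spirit of Cooke's
# Classical Model. The weights and attribution distributions are invented for
# illustration; the real method derives weights from calibration-question scores.
import numpy as np

# Experts' attribution estimates for one pathogen: P(eggs), P(poultry), P(pork)
experts = np.array([[0.5, 0.3, 0.2],
                    [0.4, 0.4, 0.2],
                    [0.6, 0.3, 0.1]])
perf_weights = np.array([0.7, 0.1, 0.2])   # from calibration-question scoring

pooled = perf_weights @ experts            # linear opinion pool
print("performance-weighted attribution:", pooled / pooled.sum())
```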
A Bayesian Machine Learning Model for Estimating Building Occupancy from Open Source Data
Stewart, Robert N.; Urban, Marie L.; Duchscherer, Samantha E.; ...
2016-01-01
Understanding building occupancy is critical to a wide array of applications including natural hazards loss analysis, green building technologies, and population distribution modeling. Due to the expense of directly monitoring buildings, scientists rely in addition on a wide and disparate array of ancillary and open source information including subject matter expertise, survey data, and remote sensing information. These data are fused using data harmonization methods, which refer to a loose collection of formal and informal techniques for fusing data together to create viable content for building occupancy estimation. In this paper, we add to the current state of the art by introducing the Population Data Tables (PDT), a Bayesian model and informatics system for systematically arranging data and harmonization techniques into a consistent, transparent, knowledge learning framework that retains in the final estimation the uncertainty emerging from data, expert judgment, and model parameterization. PDT probabilistically estimates ambient occupancy in units of people/1000 ft2 for over 50 building types at the national and sub-national level with the goal of providing global coverage. The challenge of global coverage led to the development of an interdisciplinary geospatial informatics system tool that provides the framework for capturing, storing, and managing open source data, handling subject matter expertise, carrying out Bayesian analytics, as well as visualizing and exporting occupancy estimation results. We present the PDT project, situate the work within the larger community, and report on the progress of this multi-year project.
An atlas of ShakeMaps for selected global earthquakes
Allen, Trevor I.; Wald, David J.; Hotovec, Alicia J.; Lin, Kuo-Wan; Earle, Paul S.; Marano, Kristin D.
2008-01-01
An atlas of maps of peak ground motions and intensity 'ShakeMaps' has been developed for almost 5,000 recent and historical global earthquakes. These maps are produced using established ShakeMap methodology (Wald and others, 1999c; Wald and others, 2005) and constraints from macroseismic intensity data, instrumental ground motions, regional topographically-based site amplifications, and published earthquake-rupture models. Applying the ShakeMap methodology allows a consistent approach to combine point observations with ground-motion predictions to produce descriptions of peak ground motions and intensity for each event. We also calculate an estimated ground-motion uncertainty grid for each earthquake. The Atlas of ShakeMaps provides a consistent and quantitative description of the distribution and intensity of shaking for recent global earthquakes (1973-2007) as well as selected historic events. As such, the Atlas was developed specifically for calibrating global earthquake loss estimation methodologies to be used in the U.S. Geological Survey Prompt Assessment of Global Earthquakes for Response (PAGER) Project. PAGER will employ these loss models to rapidly estimate the impact of global earthquakes as part of the USGS National Earthquake Information Center's earthquake-response protocol. The development of the Atlas of ShakeMaps has also led to several key improvements to the Global ShakeMap system. The key upgrades include: addition of uncertainties in the ground motion mapping, introduction of modern ground-motion prediction equations, improved estimates of global seismic-site conditions (VS30), and improved definition of stable continental region polygons. Finally, we have merged all of the ShakeMaps in the Atlas to provide a global perspective of earthquake ground shaking for the past 35 years, allowing comparison with probabilistic hazard maps. The online Atlas and supporting databases can be found at http://earthquake.usgs.gov/eqcenter/shakemap/atlas.php/.
An update to the analysis of the Canadian Spatial Reference System
NASA Astrophysics Data System (ADS)
Ferland, R.; Piraszewski, M.; Craymer, M.
2015-12-01
The primary objective of the Canadian Spatial Reference System (CSRS) is to provide users access to a consistent geo-referencing infrastructure over the Canadian landmass. Global Navigation Satellite System (GNSS) positioning accuracy requirements range from the meter level to the mm level (e.g., crustal deformation). The highest level of the Canadian infrastructure consists of a network of continuously operating GPS and GNSS receivers, referred to as active control stations. The network includes all Canadian public active control stations, some bordering US CORS and Alaska stations, Greenland active control stations, as well as a selection of IGS reference frame stations. The Bernese analysis software is used for the daily processing and for the combination into weekly solutions, which form the basis for this analysis. IGS weekly final orbits, Earth rotation parameters (ERPs), and coordinate products are used in the processing. For the more demanding users, the time-dependent change of station coordinates is often more important. All station coordinate estimates and the related covariance information are used in this analysis. For each input solution, a variance factor, translation, rotation, and scale (and, if needed, their rates), or subsets of these, are estimated. In the combination of these weekly solutions, station positions and velocities are estimated. Since the time series from the stations in these networks often experience changes in behavior, new (or reused) parameters are generally introduced in these situations. As is often the case with real data, unrealistic coordinates may occur; automatic detection and removal of outliers is used in these cases. Loose a priori estimates and uncertainties are provided for the transformation, position, and velocity parameters. Alignment to the latest IGb08 realization of ITRF, using the usual Helmert transformation, is also performed during the adjustment.
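The alignment step can be illustrated with a small linearized seven-parameter Helmert fit (three translations, three small rotations, one scale) between two synthetic coordinate sets; the sign convention for the rotation terms varies between standards, and the data and parameter values below are invented.

```python
# Sketch of a linearized seven-parameter Helmert transformation fit
# (3 translations, 3 small rotations, 1 scale) between two coordinate sets,
# as used when aligning a solution to an ITRF realization. Synthetic data;
# rotation sign conventions differ between standards.
import numpy as np

rng = np.random.default_rng(4)
X = rng.uniform(-6.4e6, 6.4e6, (20, 3))       # station coordinates, frame A (m)

def design(X):
    rows = []
    for x, y, z in X:
        rows += [[1, 0, 0,  0,  z, -y, x],    # dX: translation + rotation + scale terms
                 [0, 1, 0, -z,  0,  x, y],    # dY
                 [0, 0, 1,  y, -x,  0, z]]    # dZ
    return np.asarray(rows)

true_p = np.array([0.01, -0.02, 0.005, 2e-9, -1e-9, 3e-9, 1.5e-9])  # m, rad, unitless
Y = X + (design(X) @ true_p).reshape(-1, 3) + rng.normal(0, 1e-3, X.shape)

p_hat, *_ = np.linalg.lstsq(design(X), (Y - X).ravel(), rcond=None)
print("estimated [tx ty tz rx ry rz scale]:", p_hat)
```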
Planck 2015 results. XXIV. Cosmology from Sunyaev-Zeldovich cluster counts
NASA Astrophysics Data System (ADS)
Planck Collaboration; Ade, P. A. R.; Aghanim, N.; Arnaud, M.; Ashdown, M.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Barreiro, R. B.; Bartlett, J. G.; Bartolo, N.; Battaner, E.; Battye, R.; Benabed, K.; Benoît, A.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bock, J. J.; Bonaldi, A.; Bonavera, L.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Bucher, M.; Burigana, C.; Butler, R. C.; Calabrese, E.; Cardoso, J.-F.; Catalano, A.; Challinor, A.; Chamballu, A.; Chary, R.-R.; Chiang, H. C.; Christensen, P. R.; Church, S.; Clements, D. L.; Colombi, S.; Colombo, L. P. L.; Combet, C.; Comis, B.; Couchot, F.; Coulais, A.; Crill, B. P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R. D.; Davis, R. J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Désert, F.-X.; Diego, J. M.; Dolag, K.; Dole, H.; Donzelli, S.; Doré, O.; Douspis, M.; Ducout, A.; Dupac, X.; Efstathiou, G.; Elsner, F.; Enßlin, T. A.; Eriksen, H. K.; Falgarone, E.; Fergusson, J.; Finelli, F.; Forni, O.; Frailis, M.; Fraisse, A. A.; Franceschi, E.; Frejsel, A.; Galeotta, S.; Galli, S.; Ganga, K.; Giard, M.; Giraud-Héraud, Y.; Gjerløw, E.; González-Nuevo, J.; Górski, K. M.; Gratton, S.; Gregorio, A.; Gruppuso, A.; Gudmundsson, J. E.; Hansen, F. K.; Hanson, D.; Harrison, D. L.; Henrot-Versillé, S.; Hernández-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Holmes, W. A.; Hornstrup, A.; Hovest, W.; Huffenberger, K. M.; Hurier, G.; Jaffe, A. H.; Jaffe, T. R.; Jones, W. C.; Juvela, M.; Keihänen, E.; Keskitalo, R.; Kisner, T. S.; Kneissl, R.; Knoche, J.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lähteenmäki, A.; Lamarre, J.-M.; Lasenby, A.; Lattanzi, M.; Lawrence, C. R.; Leonardi, R.; Lesgourgues, J.; Levrier, F.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; López-Caniego, M.; Lubin, P. M.; Macías-Pérez, J. F.; Maggio, G.; Maino, D.; Mandolesi, N.; Mangilli, A.; Maris, M.; Martin, P. G.; Martínez-González, E.; Masi, S.; Matarrese, S.; McGehee, P.; Meinhold, P. R.; Melchiorri, A.; Melin, J.-B.; Mendes, L.; Mennella, A.; Migliaccio, M.; Mitra, S.; Miville-Deschênes, M.-A.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Moss, A.; Munshi, D.; Murphy, J. A.; Naselsky, P.; Nati, F.; Natoli, P.; Netterfield, C. B.; Nørgaard-Nielsen, H. U.; Noviello, F.; Novikov, D.; Novikov, I.; Oxborrow, C. A.; Paci, F.; Pagano, L.; Pajot, F.; Paoletti, D.; Partridge, B.; Pasian, F.; Patanchon, G.; Pearson, T. J.; Perdereau, O.; Perotto, L.; Perrotta, F.; Pettorino, V.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Popa, L.; Pratt, G. W.; Prézeau, G.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Rebolo, R.; Reinecke, M.; Remazeilles, M.; Renault, C.; Renzi, A.; Ristorcelli, I.; Rocha, G.; Roman, M.; Rosset, C.; Rossetti, M.; Roudier, G.; Rubiño-Martín, J. A.; Rusholme, B.; Sandri, M.; Santos, D.; Savelainen, M.; Savini, G.; Scott, D.; Seiffert, M. D.; Shellard, E. P. S.; Spencer, L. D.; Stolyarov, V.; Stompor, R.; Sudiwala, R.; Sunyaev, R.; Sutton, D.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Tuovinen, J.; Türler, M.; Umana, G.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Vielva, P.; Villa, F.; Wade, L. A.; Wandelt, B. D.; Wehus, I. K.; Weller, J.; White, S. D. M.; Yvon, D.; Zacchei, A.; Zonca, A.
2016-09-01
We present cluster counts and corresponding cosmological constraints from the Planck full mission data set. Our catalogue consists of 439 clusters detected via their Sunyaev-Zeldovich (SZ) signal down to a signal-to-noise ratio of 6, and is more than a factor of 2 larger than the 2013 Planck cluster cosmology sample. The counts are consistent with those from 2013 and yield compatible constraints under the same modelling assumptions. Taking advantage of the larger catalogue, we extend our analysis to the two-dimensional distribution in redshift and signal-to-noise. We use mass estimates from two recent studies of gravitational lensing of background galaxies by Planck clusters to provide priors on the hydrostatic bias parameter, (1-b). In addition, we use lensing of cosmic microwave background (CMB) temperature fluctuations by Planck clusters as an independent constraint on this parameter. These various calibrations imply constraints on the present-day amplitude of matter fluctuations in varying degrees of tension with those from the Planck analysis of primary fluctuations in the CMB; for the lowest estimated values of (1-b) the tension is mild, only a little over one standard deviation, while it remains substantial (3.7σ) for the largest estimated value. We also examine constraints on extensions to the base flat ΛCDM model by combining the cluster and CMB constraints. The combination appears to favour non-minimal neutrino masses, but this possibility does little to relieve the overall tension because it simultaneously lowers the implied value of the Hubble parameter, thereby exacerbating the discrepancy with most current astrophysical estimates. Improving the precision of cluster mass calibrations from the current 10%-level to 1% would significantly strengthen these combined analyses and provide a stringent test of the base ΛCDM model.
Olexa, Edward M.; Lawrence, Rick L
2014-01-01
Federal land management agencies provide stewardship over much of the rangelands in the arid and semi-arid western United States, but they often lack data of the proper spatiotemporal resolution and extent needed to assess range conditions and monitor trends. Recent advances in the blending of complementary, remotely sensed data could provide public lands managers with the needed information. We applied the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM) to five Landsat TM and concurrent Terra MODIS scenes, and used pixel-based regression and difference image analyses to evaluate the quality of synthetic reflectance and NDVI products associated with semi-arid rangeland. Predicted red reflectance data consistently demonstrated higher accuracy, less bias, and stronger correlation with observed data than did analogous near-infrared (NIR) data. The accuracy of both bands tended to decline as the lag between base and prediction dates increased; however, mean absolute errors (MAE) were typically ≤10%. The quality of area-wide NDVI estimates was less consistent than that of either spectral band, although the MAE of estimates predicted using early season base pairs was ≤10% throughout the growing season. Correlation between known and predicted NDVI values and agreement with the 1:1 regression line tended to decline as the prediction lag increased. Further analyses of NDVI predictions, based on a 22 June base pair and stratified by land cover/land use (LCLU), revealed accurate estimates through the growing season; however, inter-class performance varied. This work demonstrates the successful application of the STARFM algorithm to semi-arid rangeland; however, we encourage evaluation of STARFM's performance on a per-product basis, stratified by LCLU, with attention given to the influence of base pair selection and the impact of the time lag.
Fast maximum likelihood estimation of mutation rates using a birth-death process.
Wu, Xiaowei; Zhu, Hongxiao
2015-02-07
Since fluctuation analysis was first introduced by Luria and Delbrück in 1943, it has been widely used to make inferences about spontaneous mutation rates in cultured cells. Under certain model assumptions, the probability distribution of the number of mutants that appear in a fluctuation experiment can be derived explicitly, which provides the basis of mutation rate estimation. It has been shown that, among various existing estimators, the maximum likelihood estimator usually demonstrates desirable properties such as consistency and lower mean squared error. However, its application to real experimental data is often hindered by slow computation of the likelihood, due to the recursive form of the mutant-count distribution. We propose a fast maximum likelihood estimator of mutation rates, MLE-BD, based on a birth-death process model with a non-differential growth assumption. Simulation studies demonstrate that, compared with the conventional maximum likelihood estimator derived from the Luria-Delbrück distribution, MLE-BD achieves substantial improvement in computational speed and is applicable to arbitrarily large numbers of mutants. In addition, it retains good accuracy in point estimation. Published by Elsevier Ltd.
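For context, a sketch of the conventional estimator the paper benchmarks against is given below: maximize the Luria-Delbrück likelihood, with the mutant-count distribution evaluated by the Ma-Sandri-Sarkar recursion (whose quadratic cost in the largest count is exactly the bottleneck MLE-BD addresses). The data are synthetic, and the birth-death variant itself is not reproduced.

```python
# Sketch of the conventional approach: maximum likelihood estimation of the
# expected number of mutations m per culture, using the Lea-Coulson form of
# the Luria-Delbruck distribution evaluated by the Ma-Sandri-Sarkar recursion.
# Synthetic data; the paper's birth-death estimator (MLE-BD) is not shown.
import numpy as np
from scipy.optimize import minimize_scalar

def ld_pmf(m, n_max):
    """P(0..n_max mutants) under the Lea-Coulson/LD model, by recursion."""
    p = np.zeros(n_max + 1)
    p[0] = np.exp(-m)
    for n in range(1, n_max + 1):
        p[n] = (m / n) * sum(p[j] / (n - j + 1) for j in range(n))
    return p

counts = np.array([0, 1, 0, 3, 12, 0, 2, 5, 0, 1])   # mutants per culture (toy data)

def neg_log_lik(m):
    p = ld_pmf(m, counts.max())
    return -np.log(p[counts]).sum()

res = minimize_scalar(neg_log_lik, bounds=(1e-3, 20), method="bounded")
print(f"MLE of expected mutations per culture: {res.x:.3f}")
```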
Accuracy and variability of tumor burden measurement on multi-parametric MRI
NASA Astrophysics Data System (ADS)
Salarian, Mehrnoush; Gibson, Eli; Shahedi, Maysam; Gaed, Mena; Gómez, José A.; Moussa, Madeleine; Romagnoli, Cesare; Cool, Derek W.; Bastian-Jordan, Matthew; Chin, Joseph L.; Pautler, Stephen; Bauman, Glenn S.; Ward, Aaron D.
2014-03-01
Measurement of prostate tumour volume can inform prognosis and treatment selection, including an assessment of the suitability and feasibility of focal therapy, which can potentially spare patients the deleterious side effects of radical treatment. Prostate biopsy is the clinical standard for diagnosis but provides limited information regarding tumour volume due to sparse tissue sampling. A non-invasive means for accurate determination of tumour burden could be of clinical value and an important step toward reduction of overtreatment. Multi-parametric magnetic resonance imaging (MPMRI) is showing promise for prostate cancer diagnosis. However, the accuracy and inter-observer variability of prostate tumour volume estimation based on separate expert contouring of T2-weighted (T2W), dynamic contrast-enhanced (DCE), and diffusion-weighted (DW) MRI sequences acquired using an endorectal coil at 3T is currently unknown. We investigated this question using a histologic reference standard based on a highly accurate MPMRI-histology image registration and a smooth interpolation of planimetric tumour measurements on histology. Our results showed that prostate tumour volumes estimated based on MPMRI consistently overestimated histological reference tumour volumes. The variability of tumour volume estimates across the different pulse sequences exceeded inter-observer variability within any sequence. Tumour volume estimates on DCE MRI provided the lowest inter-observer variability and the highest correlation with histology tumour volumes, whereas the apparent diffusion coefficient (ADC) maps provided the lowest volume estimation error. If validated on a larger data set, the observed correlations could support the development of automated prostate tumour volume segmentation algorithms as well as correction schemes for tumour burden estimation on MPMRI.
Developing Daily Quantitative Damage Estimates From Geospatial Layers To Support Post Event Recovery
NASA Astrophysics Data System (ADS)
Woods, B. K.; Wei, L. H.; Connor, T. C.
2014-12-01
With the growth of natural hazard data available in near real-time, it is increasingly feasible to deliver damage estimates caused by natural disasters. These estimates can be used in disaster management settings or by commercial entities to optimize the deployment of resources and/or routing of goods and materials. This work outlines an end-to-end, modular process to generate estimates of damage caused by severe weather. The processing stream consists of five generic components: 1) Hazard modules that provide quantitative data layers for each peril. 2) Standardized methods to map the hazard data to an exposure layer based on atomic geospatial blocks. 3) Peril-specific damage functions that compute damage metrics at the atomic geospatial block level. 4) Standardized data aggregators, which map damage to user-specific geometries. 5) Data dissemination modules, which provide resulting damage estimates in a variety of output forms. This presentation provides a description of this generic tool set, and an illustrated example using HWRF-based hazard data for Hurricane Arthur (2014). In this example, the Python-based real-time processing ingests GRIB2 output from the HWRF numerical model and dynamically downscales it, in conjunction with a land cover database, using a multiprocessing pool and a just-in-time (JIT) compiler. The resulting wind fields are contoured and ingested into a PostGIS database using OGR. Finally, the damage estimates are calculated at the atomic block level and aggregated to user-defined regions using PostgreSQL queries to construct application-specific tabular and graphics output.
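A toy sketch of pipeline steps 2-4 under stated assumptions; the block table, the sigmoidal damage curve, and all column names are invented for illustration and are not the authors' calibrated damage functions.

```python
import numpy as np
import pandas as pd

# Step 2: hazard values sampled onto atomic geospatial blocks with exposure.
blocks = pd.DataFrame({
    "block_id": [1, 2, 3, 4],
    "region":   ["A", "A", "B", "B"],
    "exposure_usd": [2e6, 5e5, 1e6, 3e6],
    "wind_ms":  [18.0, 34.0, 42.0, 25.0],
})

def wind_damage_fraction(v, v0=20.0, v_half=60.0):
    """Toy sigmoidal vulnerability curve: no damage below v0, 50% damage
    at v_half. Not a calibrated HWRF-derived damage function."""
    return 1.0 / (1.0 + np.exp(-(v - v_half) / 8.0)) * (v > v0)

# Step 3: peril-specific damage metric per block.
blocks["damage_usd"] = blocks["exposure_usd"] * wind_damage_fraction(blocks["wind_ms"])

# Step 4: aggregate block damage to user-defined regions.
print(blocks.groupby("region")["damage_usd"].sum())
```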
A double-gaussian, percentile-based method for estimating maximum blood flow velocity.
Marzban, Caren; Illian, Paul R; Morison, David; Mourad, Pierre D
2013-11-01
Transcranial Doppler sonography allows for the estimation of blood flow velocity, whose maximum value, especially at systole, is often of clinical interest. Given that observed values of flow velocity are subject to noise, a useful notion of "maximum" requires a criterion for separating the signal from the noise. All commonly used criteria produce a point estimate (i.e., a single value) of maximum flow velocity at any time and therefore convey no information on the distribution or uncertainty of flow velocity. This limitation has clinical consequences especially for patients in vasospasm, whose largest flow velocities can be difficult to measure. Therefore, a method for estimating flow velocity and its uncertainty is desirable. A gaussian mixture model is used to separate the noise from the signal distribution. The time series of a given percentile of the latter, then, provides a flow velocity envelope. This means of estimating the flow velocity envelope naturally allows for displaying several percentiles (e.g., 95th and 99th), thereby conveying uncertainty in the highest flow velocity. Such envelopes were computed for 59 patients and were shown to provide reasonable and useful estimates of the largest flow velocities compared to a standard algorithm. Moreover, we found that the commonly used envelope was generally consistent with the 90th percentile of the signal distribution derived via the gaussian mixture model. Separating the observed distribution of flow velocity into a noise component and a signal component, using a double-gaussian mixture model, allows for the percentiles of the latter to provide meaningful measures of the largest flow velocities and their uncertainty.
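The core computation lends itself to a brief sketch: fit a two-component mixture to a window of velocity samples, treat the higher-mean component as signal, and read envelope percentiles off it. The toy distributions and parameter choices below are ours, not the paper's.

```python
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
noise  = rng.normal(10.0, 5.0, 2000)     # toy noise distribution (cm/s)
signal = rng.normal(80.0, 15.0, 1000)    # toy flow-velocity distribution
v = np.concatenate([noise, signal]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(v)
sig = int(np.argmax(gmm.means_.ravel()))            # signal = higher-mean component
mu, sd = gmm.means_[sig, 0], np.sqrt(gmm.covariances_[sig].ravel()[0])

for q in (0.90, 0.95, 0.99):                        # several envelope percentiles
    print(f"{q:.0%} envelope: {norm.ppf(q, mu, sd):.1f} cm/s")
```

Displaying several percentiles at once is what conveys the uncertainty in the largest flow velocities, rather than a single point estimate.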
Contact Force Compensated Thermal Stimulators for Holistic Haptic Interfaces.
Sim, Jai Kyoung; Cho, Young-Ho
2016-05-01
We present a contact force compensated thermal stimulator that can provide a consistent temperature sensation on the human skin independent of the contact force between the thermal stimulator and the skin. Previous passive thermal stimulators were not capable of providing a consistent temperature on the human skin even when using an identical heat source voltage, due to an inconsistency in heat conduction, which changes with the force-dependent thermal contact resistance. We propose a force-based feedback method that monitors the contact force and controls the heat source voltage according to this contact force, thus providing a consistent temperature on the skin. We composed a heat circuit model equivalent to the skin heat-transfer rate as it is changed by the contact forces; we obtained the optimal voltage condition for a constant skin heat-transfer rate independent of the contact force using a numerical estimation simulation tool. Then, in the experiment, we heated real human skin at the obtained heat source voltage condition and investigated the skin heat-transfer rate by measuring the skin temperature at various times at different levels of contact force. In the numerical estimation results, the skin heat-transfer rate for the contact forces showed a linear profile in the contact force range of 1-3 N; from this profile we obtained the voltage equation for heat source control. In the experimental study, we adjusted the heat source voltage according to the contact force based on the obtained equation. As a result, without heat source voltage control for the contact forces, the coefficient of variation (CV) of the skin heat-transfer rate in the contact force range of 1-3 N was found to be 11.9%. On the other hand, with heat source voltage control, the CV of the skin heat-transfer rate in the same range was barely 2.0%, which indicates an 83.2% improvement in consistency compared to the skin heat-transfer rate without voltage control. The present technique provides a consistent temperature sensation on the human skin independent of body movement; therefore, it has high potential for use in holistic haptic interfaces that have thermal displays.
Using isotopic dilution to assess chemical extraction of labile Ni, Cu, Zn, Cd and Pb in soils.
Garforth, J M; Bailey, E H; Tye, A M; Young, S D; Lofts, S
2016-07-01
Chemical extractants used to measure labile soil metal must ideally select for and solubilise the labile fraction, with minimal solubilisation of non-labile metal. We assessed four extractants (0.43 M HNO3, 0.43 M CH3COOH, 0.05 M Na2H2EDTA and 1 M CaCl2) against these requirements. For soils contaminated by contrasting sources, we compared isotopically exchangeable Ni, Cu, Zn, Cd and Pb (EValue, mg kg(-1)), with the concentrations of metal solubilised by the chemical extractants (MExt, mg kg(-1)). Crucially, we also determined isotopically exchangeable metal in the soil-extractant systems (EExt, mg kg(-1)). Thus 'EExt - EValue' quantifies the concentration of mobilised non-labile metal, while 'EExt - MExt' represents adsorbed labile metal in the presence of the extractant. Extraction with CaCl2 consistently underestimated EValue for Ni, Cu, Zn and Pb, while providing a reasonable estimate of EValue for Cd. In contrast, extraction with HNO3 both consistently mobilised non-labile metal and overestimated the EValue. Extraction with CH3COOH appeared to provide a good estimate of EValue for Cd; however, this was the net outcome of incomplete solubilisation of labile metal, and concurrent mobilisation of non-labile metal by the extractant (MExt
The size of the irregular migrant population in the European Union – counting the uncountable?
Vogel, Dita; Kovacheva, Vesela; Prescott, Hannah
2011-01-01
It is difficult to estimate the size of the irregular migrant population in a specific city or country, and even more difficult to arrive at estimates at the European level. A review of past attempts at European-level estimates reveals that they rely on rough and outdated rules-of-thumb. In this paper, we present our own European level estimates for 2002, 2005, and 2008. We aggregate country-specific information, aiming at approximate comparability by consistent use of minimum and maximum estimates and by adjusting for obvious differences in definition and timescale. While the aggregated estimates are not considered highly reliable, they do -- for the first time -- provide transparency. The provision of more systematic medium quality estimates is shown to be the most promising way for improvement. The presented estimate indicates a minimum of 1.9 million and a maximum of 3.8 million irregular foreign residents in the 27 member states of the European Union (2008). Unlike rules-of-thumb, the aggregated EU estimates indicate a decline in the number of irregular foreign residents between 2002 and 2008. This decline has been influenced by the EU enlargement and legalisation programmes.
On estimation of time-dependent attributable fraction from population-based case-control studies.
Zhao, Wei; Chen, Ying Qing; Hsu, Li
2017-09-01
Population attributable fraction (PAF) is widely used to quantify the disease burden associated with a modifiable exposure in a population. It has been extended to a time-varying measure that provides additional information on when and how the exposure's impact varies over time for cohort studies. However, there is no estimation procedure for PAF using data that are collected from population-based case-control studies, which, because of time and cost efficiency, are commonly used for studying genetic and environmental risk factors of disease incidences. In this article, we show that time-varying PAF is identifiable from a case-control study and develop a novel estimator of PAF. Our estimator combines odds ratio estimates from logistic regression models and density estimates of the risk factor distribution conditional on failure times in cases from a kernel smoother. The proposed estimator is shown to be consistent and asymptotically normal with asymptotic variance that can be estimated empirically from the data. Simulation studies demonstrate that the proposed estimator performs well in finite sample sizes. Finally, the method is illustrated by a population-based case-control study of colorectal cancer. © 2017, The International Biometric Society.
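For intuition, a hedged sketch of a standard, non-time-varying case-based PAF estimator of the kind this paper generalizes: the odds ratio comes from a logistic model, and PAF is estimated by averaging exp(-beta'Z) over cases only. The simulated data and variable names are placeholders.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 5000
z = rng.binomial(1, 0.3, n)                       # modifiable exposure
p = 1 / (1 + np.exp(-(-2.0 + 0.8 * z)))           # true log-odds ratio = 0.8
d = rng.binomial(1, p)                            # case (1) / control (0)

fit = sm.Logit(d, sm.add_constant(z)).fit(disp=0) # odds ratio from logistic model
beta = fit.params[1]
paf = 1.0 - np.mean(np.exp(-beta * z[d == 1]))    # average over cases only
print(f"estimated PAF = {paf:.3f}")
```

The paper's contribution replaces this single number with a time-varying curve by conditioning the exposure distribution in cases on failure time via kernel smoothing.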
Physical Validation of TRMM TMI and PR Monthly Rain Products Over Oklahoma
NASA Technical Reports Server (NTRS)
Fisher, Brad L.
2004-01-01
The Tropical Rainfall Measuring Mission (TRMM) provides monthly rainfall estimates using data collected by the TRMM satellite. These estimates cover a substantial fraction of the earth's surface. The physical validation of TRMM estimates involves corroborating the accuracy of spaceborne estimates of areal rainfall by inferring errors and biases from ground-based rain estimates. The TRMM error budget consists of two major sources of error: retrieval and sampling. Sampling errors are intrinsic to the process of estimating monthly rainfall and occur because the satellite extrapolates monthly rainfall from a small subset of measurements collected only during satellite overpasses. Retrieval errors, on the other hand, are related to the process of collecting measurements while the satellite is overhead. One of the big challenges confronting the TRMM validation effort is how to best estimate these two main components of the TRMM error budget, which are not easily decoupled. This four-year study computed bulk sampling and retrieval errors for the TRMM microwave imager (TMI) and the precipitation radar (PR) by applying a technique that sub-samples gauge data at TRMM overpass times. Gridded monthly rain estimates are then computed from the monthly bulk statistics of the collected samples, providing a sensor-dependent gauge rain estimate that is assumed to include a TRMM equivalent sampling error. The sub-sampled gauge rain estimates are then used in conjunction with the monthly satellite and gauge (without sub-sampling) estimates to decouple retrieval and sampling errors. The computed mean sampling errors for the TMI and PR were 5.9% and 7.7%, respectively, in good agreement with theoretical predictions. The PR year-to-year retrieval biases exceeded corresponding TMI biases, but it was found that these differences were partially due to negative TMI biases during cold months and positive TMI biases during warm months.
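The decoupling step can be sketched in a few lines. Sub-sampling the gauges at overpass times yields a gauge estimate that shares TRMM's sampling error, so differencing isolates the two error components; all numbers below are placeholders.

```python
import numpy as np

gauge_full = np.array([120.0, 85.0, 60.0])   # monthly gauge rain, full record (mm)
gauge_sub  = np.array([112.0, 90.0, 55.0])   # gauge sub-sampled at overpass times
satellite  = np.array([105.0, 96.0, 53.0])   # TMI or PR monthly estimate (mm)

sampling_err  = gauge_sub - gauge_full       # error from sparse temporal sampling
retrieval_err = satellite - gauge_sub        # error from the retrieval itself
print(np.mean(sampling_err / gauge_full), np.mean(retrieval_err / gauge_sub))
```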
Data-Adaptive Bias-Reduced Doubly Robust Estimation.
Vermeulen, Karel; Vansteelandt, Stijn
2016-05-01
Doubly robust estimators have now been proposed for a variety of target parameters in the causal inference and missing data literature. These consistently estimate the parameter of interest under a semiparametric model when one of two nuisance working models is correctly specified, regardless of which. The recently proposed bias-reduced doubly robust estimation procedure aims to partially retain this robustness in more realistic settings where both working models are misspecified. These so-called bias-reduced doubly robust estimators make use of special (finite-dimensional) nuisance parameter estimators that are designed to locally minimize the squared asymptotic bias of the doubly robust estimator in certain directions of these finite-dimensional nuisance parameters under misspecification of both parametric working models. In this article, we extend this idea to incorporate the use of data-adaptive estimators (infinite-dimensional nuisance parameters), by exploiting the bias reduction estimation principle in the direction of only one nuisance parameter. We additionally provide an asymptotic linearity theorem which gives the influence function of the proposed doubly robust estimator under correct specification of a parametric nuisance working model for the missingness mechanism/propensity score but a possibly misspecified (finite- or infinite-dimensional) outcome working model. Simulation studies confirm the desirable finite-sample performance of the proposed estimators relative to a variety of other doubly robust estimators.
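As background, here is a minimal sketch of the generic augmented-IPW (doubly robust) estimator that these bias-reduced variants build on, shown for a population mean with data missing at random. The simulated data and the choice of working models are ours, not the authors' procedure.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(2)
n = 10000
x = rng.normal(size=(n, 2))
y = 1.0 + x @ np.array([2.0, -1.0]) + rng.normal(size=n)
pi = 1 / (1 + np.exp(-(0.5 + x[:, 0])))            # true missingness mechanism
r = rng.binomial(1, pi)                            # r = 1: Y observed

ps = LogisticRegression().fit(x, r).predict_proba(x)[:, 1]    # propensity model
om = LinearRegression().fit(x[r == 1], y[r == 1]).predict(x)  # outcome model

# AIPW: consistent if either nuisance working model is correctly specified.
aipw = np.mean(om + r * (np.where(r == 1, y, 0.0) - om) / ps)
print(f"AIPW estimate of E[Y] = {aipw:.3f} (truth = 1.0)")
```

The bias-reduced idea concerns how the nuisance parameters feeding `ps` and `om` are estimated when both working models are misspecified; this sketch only shows the estimator they plug into.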
Estimation of dynamic stability parameters from drop model flight tests
NASA Technical Reports Server (NTRS)
Chambers, J. R.; Iliff, K. W.
1981-01-01
A recent NASA application of a remotely-piloted drop model to studies of the high angle-of-attack and spinning characteristics of a fighter configuration has provided an opportunity to evaluate and develop parameter estimation methods for the complex aerodynamic environment associated with high angles of attack. The paper discusses the overall drop model operation including descriptions of the model, instrumentation, launch and recovery operations, piloting concept, and parameter identification methods used. Static and dynamic stability derivatives were obtained for an angle-of-attack range from -20 deg to 53 deg. The results of the study indicated that the variations of the estimates with angle of attack were consistent for most of the static derivatives, and the effects of configuration modifications to the model (such as nose strakes) were apparent in the static derivative estimates. The dynamic derivatives exhibited greater uncertainty levels than the static derivatives, possibly due to nonlinear aerodynamics, model response characteristics, or additional derivatives.
Limitations and opportunities for the social cost of carbon (Invited)
NASA Astrophysics Data System (ADS)
Rose, S. K.
2010-12-01
Estimates of the marginal value of carbon dioxide, the social cost of carbon (SCC), were recently adopted by the U.S. Government in order to satisfy requirements to value estimated GHG changes of new federal regulations. However, the development and use of SCC estimates of avoided climate change impacts come with significant challenges and controversial decisions. Fortunately, economics can provide some guidance for conceptually appropriate estimates. At the same time, economics defaults to a benefit-cost decision framework to identify socially optimal policies. However, not all current policy decisions are benefit-cost based, depend on monetized information, or even have the same threshold for information. While a conceptually appropriate SCC is a useful metric, how far can we take it? This talk discusses potential applications of the SCC, limitations based on the state of research and methods, as well as opportunities for, among other things, consistency with climate risk management and with research and decision-making tools.
Kalman Filters for Time Delay of Arrival-Based Source Localization
NASA Astrophysics Data System (ADS)
Klee, Ulrich; Gehrig, Tobias; McDonough, John
2006-12-01
In this work, we propose an algorithm for acoustic source localization based on time delay of arrival (TDOA) estimation. In earlier work by other authors, an initial closed-form approximation was first used to estimate the true position of the speaker followed by a Kalman filtering stage to smooth the time series of estimates. In the proposed algorithm, this closed-form approximation is eliminated by employing a Kalman filter to directly update the speaker's position estimate based on the observed TDOAs. In particular, the TDOAs comprise the observation associated with an extended Kalman filter whose state corresponds to the speaker's position. We tested our algorithm on a data set consisting of seminars held by actual speakers. Our experiments revealed that the proposed algorithm provides source localization accuracy superior to the standard spherical and linear intersection techniques. Moreover, the proposed algorithm, although relying on an iterative optimization scheme, proved efficient enough for real-time operation.
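A hedged sketch of the paper's central idea: an extended Kalman filter whose state is the speaker position and whose observation vector is the microphone-pair TDOAs. The array geometry, noise covariances, and the random-walk motion model below are illustrative assumptions, not the authors' configuration.

```python
import numpy as np

C = 343.0                                   # speed of sound, m/s
mics = np.array([[0, 0, 0], [4, 0, 0], [0, 4, 0], [4, 4, 0]], float)
pairs = [(0, 1), (0, 2), (0, 3)]

def h(x):                                   # predicted TDOAs for position x
    d = np.linalg.norm(mics - x, axis=1)
    return np.array([(d[i] - d[j]) / C for i, j in pairs])

def H(x):                                   # Jacobian of h at x
    u = (x - mics) / np.linalg.norm(mics - x, axis=1)[:, None]
    return np.array([(u[i] - u[j]) / C for i, j in pairs])

x, P = np.array([2.0, 2.0, 1.5]), np.eye(3)          # initial state / covariance
Q, R = 1e-4 * np.eye(3), (50e-6) ** 2 * np.eye(len(pairs))

def ekf_step(x, P, tdoa_obs):
    P_pred = P + Q                                   # random-walk motion model
    Hk = H(x)
    S = Hk @ P_pred @ Hk.T + R
    K = P_pred @ Hk.T @ np.linalg.inv(S)
    x_new = x + K @ (tdoa_obs - h(x))                # update directly from TDOAs
    return x_new, (np.eye(3) - K @ Hk) @ P_pred

x, P = ekf_step(x, P, h(np.array([1.0, 3.0, 1.7])))  # one update, noiseless obs
print(x)
```

The point of the construction is visible in `ekf_step`: the TDOAs update the position estimate directly, with no intermediate closed-form localization stage.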
A hierarchical model for estimating change in American Woodcock populations
Sauer, J.R.; Link, W.A.; Kendall, W.L.; Kelley, J.R.; Niven, D.K.
2008-01-01
The Singing-Ground Survey (SGS) is a primary source of information on population change for American woodcock (Scolopax minor). We analyzed the SGS using a hierarchical log-linear model and compared the estimates of change and annual indices of abundance to a route regression analysis of SGS data. We also grouped SGS routes into Bird Conservation Regions (BCRs) and estimated population change and annual indices using BCRs within states and provinces as strata. Based on the hierarchical model-based estimates, we concluded that woodcock populations were declining in North America between 1968 and 2006 (trend = -0.9%/yr, 95% credible interval: -1.2, -0.5). Singing-Ground Survey results are generally similar between analytical approaches, but the hierarchical model has several important advantages over the route regression. Hierarchical models better accommodate changes in survey efficiency over time and space by treating strata, years, and observers as random effects in the context of a log-linear model, providing trend estimates that are derived directly from the annual indices. We also conducted a hierarchical model analysis of woodcock data from the Christmas Bird Count and the North American Breeding Bird Survey. All surveys showed general consistency in patterns of population change, but the SGS had the shortest credible intervals. We suggest that population management and conservation planning for woodcock involving interpretation of the SGS use estimates provided by the hierarchical model.
L'her, Erwan; Martin-Babau, Jérôme; Lellouche, François
2016-12-01
Knowledge of patients' height is essential for daily practice in the intensive care unit. However, actual height measurements are not routinely available in the ICU, and measured height in the supine position and/or visual estimates may lack consistency. Clinicians need simple and rapid methods to estimate patients' height, especially in short and/or obese patients. The objectives of the study were to evaluate several anthropometric formulas for height estimation on healthy volunteers and to test whether several of these estimates would help with tidal volume setting in ICU patients. This was a prospective, observational study in a medical intensive care unit of a university hospital. During the first phase of the study, eight limb measurements were performed on 60 healthy volunteers and 18 height estimation formulas were tested. During the second phase, four height estimates were performed on 60 consecutive ICU patients under mechanical ventilation. In the 60 healthy volunteers, actual height was well correlated with the gold standard, measured height in the erect position. Correlation was low between actual and calculated height using the hand's length and width, the index, or the foot equations. The Chumlea method and its simplified version, performed in the supine position, provided adequate estimates. In the 60 ICU patients, calculated height using the simplified Chumlea method was well correlated with measured height (r = 0.78; ∂ < 1 %). Ulna and tibia estimates also provided valuable estimates. All these height estimates allowed calculating ideal body weight (IBW) or predicted body weight (PBW), which were significantly different from the patients' actual weight on admission. In most cases, tidal volume set according to these estimates was lower than what would have been set using the actual weight. When actual height is unavailable in ICU patients undergoing mechanical ventilation, alternative anthropometric methods to obtain patient's height based on lower leg and forearm measurements could be useful to facilitate the application of protective mechanical ventilation in a Caucasian ICU population. The simplified Chumlea method is easy to achieve in a bed-ridden patient and provides accurate height estimates, with a low bias.
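For reference, a sketch of knee-height-based estimation in the spirit of the Chumlea method discussed above, chained to a tidal volume setting. The coefficients are the commonly cited Chumlea (1985) equations for white adults and the ARDSNet predicted-body-weight formula; they are included for illustration only, the study's simplified variant differs, and the values should be verified against the original sources before any clinical use.

```python
def chumlea_height_cm(knee_height_cm: float, age_yr: float, sex: str) -> float:
    # Commonly cited Chumlea (1985) knee-height equations (white adults);
    # illustrative only; verify coefficients before any clinical use.
    if sex == "M":
        return 64.19 - 0.04 * age_yr + 2.02 * knee_height_cm
    return 84.88 - 0.24 * age_yr + 1.83 * knee_height_cm

h = chumlea_height_cm(knee_height_cm=52.0, age_yr=67.0, sex="M")
pbw = 50.0 + 0.91 * (h - 152.4)          # ARDSNet predicted body weight, male
print(f"height ~ {h:.1f} cm, tidal volume at 6 mL/kg PBW ~ {6 * pbw:.0f} mL")
```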
Fast 5DOF needle tracking in iOCT.
Weiss, Jakob; Rieke, Nicola; Nasseri, Mohammad Ali; Maier, Mathias; Eslami, Abouzar; Navab, Nassir
2018-06-01
Intraoperative optical coherence tomography (iOCT) is an increasingly available imaging technique for ophthalmic microsurgery that provides high-resolution cross-sectional information of the surgical scene. We propose to build on its desirable qualities and present a method for tracking the orientation and location of a surgical needle. Thereby, we enable the analysis of instrument-tissue interaction directly in OCT space without the complex multimodal calibration that would be required with traditional instrument tracking methods. The intersection of the needle with the iOCT scan is detected by a dedicated multistep ellipse fitting that takes advantage of the directionality of the modality. The geometric modeling allows us to use the ellipse parameters and feed them into a latency-aware estimator to infer the 5DOF pose during needle movement. Experiments on phantom data and ex vivo porcine eyes indicate that the algorithm retains angular precision especially during lateral needle movement and provides a more robust and consistent estimation than baseline methods. Using solely cross-sectional iOCT information, we are able to successfully and robustly estimate a 5DOF pose of the instrument in less than 5.4 ms on a CPU.
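One geometric ingredient can be sketched simply: a cylindrical needle cut by the B-scan plane appears as an ellipse whose axis ratio encodes the tilt (minor/major = cos(tilt) for a cylinder of constant radius). The paper's multistep fitting and latency-aware 5DOF estimation go well beyond this toy, and the points below are synthetic.

```python
import numpy as np
import cv2

def needle_tilt_deg(boundary_pts: np.ndarray) -> float:
    """boundary_pts: (N, 1, 2) float32 points on the needle's elliptical
    cross-section in one B-scan; returns tilt (deg) from the scan normal."""
    (_, _), (d1, d2), _ = cv2.fitEllipse(boundary_pts)
    d_minor, d_major = sorted((d1, d2))
    return float(np.degrees(np.arccos(d_minor / d_major)))

t = np.linspace(0, 2 * np.pi, 200, dtype=np.float32)
pts = np.stack([100 + 40 * np.cos(t), 80 + 20 * np.sin(t)], axis=1)  # toy ellipse
print(needle_tilt_deg(pts.reshape(-1, 1, 2)))   # ~60 deg for a 2:1 axis ratio
```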
NASA Astrophysics Data System (ADS)
Yilmaz, M.; Anderson, M. C.; Zaitchik, B. F.; Crow, W. T.; Hain, C.; Ozdogan, M.; Chun, J. A.
2012-12-01
Actual evapotranspiration (ET) can be estimated using both prognostic and diagnostic modeling approaches, providing independent yet complementary information for hydrologic applications. Both approaches have advantages and disadvantages. When provided with temporally continuous atmospheric forcing data, prognostic models offer continuous sub-daily ET information together with the full set of water and energy balance fluxes and states (i.e., soil moisture, runoff, sensible and latent heat). On the other hand, the diagnostic modeling approach provides ET estimates over regions where reliable information about available soil water is lacking (e.g., due to irrigation practices or shallow groundwater levels not included in the prognostic model structure, unknown soil texture or plant rooting depth, etc.). Prognostic model-based ET estimates are of great interest whenever consistent and complete water budget information is required or when there is a need to project ET for climate or land use change scenarios. Diagnostic models establish a stronger link to remote sensing observations, can be applied in regions with limited or questionable atmospheric forcing data, and provide valuable observation-derived information about the current land-surface state. Analysis of independently obtained ET estimates is particularly important in data-poor regions. Such comparisons can help to reduce the uncertainty in the modeled ET estimates and to exclude outliers based on physical considerations. The Nile river basin is home to tens of millions of people whose daily life depends on water extracted from the Nile. Yet the complete basin-scale water balance of the Nile has been studied only a few times, and the temporal and spatial distribution of hydrological fluxes (particularly ET) is still a subject of active research. This is due in part to a scarcity of ground-based station data for validation. In such regions, comparison between prognostic and diagnostic model output may be a valuable model evaluation tool. Motivated by the complementary information that exists in prognostic and diagnostic energy balance modeling, as well as the need for evaluation of water consumption estimates over the Nile basin, the purpose of this study is to 1) better describe the conceptual differences between prognostic and diagnostic modeling, 2) present the potential for diagnostic models to capture important hydrologic features that are not explicitly represented in the prognostic model, and 3) explore the differences in these two approaches over the Nile Basin, where ground data are sparse and transnational data sharing is unreliable. More specifically, we will compare output from the Noah prognostic model and the Atmosphere-Land Exchange Inverse (ALEXI) diagnostic model generated over the ground-truth-poor Nile basin. Preliminary results indicate flux estimates that are consistent in space, time, and magnitude for ALEXI and Noah over the irrigated Nile Delta region, while there are differences over river-fed wetlands.
Handling of thermal paper: Implications for dermal exposure to bisphenol A and its alternatives.
Bernier, Meghan R; Vandenberg, Laura N
2017-01-01
Bisphenol A (BPA) is an endocrine disrupting chemical used in a wide range of consumer products including photoactive dyes used in thermal paper. Recent studies have shown that dermal absorption of BPA can occur when handling these papers. Yet, regulatory agencies have largely dismissed thermal paper as a major source of BPA exposure. Exposure estimates provided by agencies such as the European Food Safety Authority (EFSA) are based on assumptions about how humans interact with this material, stating that 'typical' exposures for adults involve only one handling per day for short periods of time (<1 minute), with limited exposure surfaces (three fingertips). The objective of this study was to determine how individuals handle thermal paper in one common setting: a cafeteria providing short-order meals. We observed thermal paper handling in a college-aged population (n = 698 subjects) at the University of Massachusetts' dining facility. We find that in this setting, individuals handle receipts for an average of 11.5 min, that >30% of individuals hold thermal paper with more than three fingertips, and >60% allow the paper to touch their palm. Only 11% of the participants we observed were consistent with the EFSA model for time of contact and dermal surface area. Mathematical modeling based on handling times we measured and previously published transfer coefficients, concentrations of BPA in paper, and absorption factors indicate the most conservative estimated intake from handling thermal paper in this population is 51.1 ng/kg/day, similar to EFSA's estimates of 59 ng/kg/day from dermal exposures. Less conservative estimates, using published data on concentrations in thermal paper and transfer rates to skin, indicate that exposures are likely significantly higher. Based on our observational data, we propose that the current models for estimating dermal BPA exposures are not consistent with normal human behavior and should be reevaluated.
Bodart, Catherine; Brink, Andreas B; Donnay, François; Lupi, Andrea; Mayaux, Philippe; Achard, Frédéric
2013-01-01
Aim: This study provides regional estimates of forest cover in dry African ecoregions and the changes in forest cover that occurred there between 1990 and 2000, using a systematic sample of medium-resolution satellite imagery which was processed consistently across the continent. Location: The study area corresponds to the dry forests and woodlands of Africa between the humid forests and the semi-arid regions. This area covers the Sudanian and Zambezian ecoregions. Methods: A systematic sample of 1600 Landsat satellite imagery subsets, each 20 km × 20 km in size, was analysed for two reference years: 1990 and 2000. At each sample site and for both years, dense tree cover, open tree cover, other wooded land and other vegetation cover were identified from the analysis of satellite imagery, which comprised multidate segmentation and automatic classification steps followed by visual control by national forestry experts. Results: Land cover and land-cover changes were estimated at continental and ecoregion scales and compared with existing pan-continental, regional and local studies. The overall accuracy of our land-cover maps was estimated at 87%. Between 1990 and 2000, 3.3 million hectares (Mha) of dense tree cover, 5.8 Mha of open tree cover and 8.9 Mha of other wooded land were lost, with a further 3.9 Mha degraded from dense to open tree cover. These results are substantially lower than the 34 Mha of forest loss reported in the FAO's 2010 Global Forest Resources Assessment for the same period and area. Main conclusions: Our method generates the first consistent and robust estimates of forest cover and change in dry Africa with known statistical precision at continental and ecoregion scales. These results reduce the uncertainty regarding vegetation cover and its dynamics in these previously poorly studied ecosystems and provide crucial information for both science and environmental policies. PMID:23935237
Pailian, Hrag; Halberda, Justin
2015-04-01
We investigated the psychometric properties of the one-shot change detection task for estimating visual working memory (VWM) storage capacity, and also introduced and tested an alternative flicker change detection task for estimating these limits. In three experiments, we found that the one-shot whole-display task returns estimates of VWM storage capacity (K) that are unreliable across set sizes, suggesting that the whole-display task is measuring different things at different set sizes. In two additional experiments, we found that the one-shot single-probe variant shows improvements in the reliability and consistency of K estimates. In another additional experiment, we found that a one-shot whole-display-with-click task (requiring target localization) also showed improvements in reliability and consistency. The latter results suggest that the one-shot task can return reliable and consistent estimates of VWM storage capacity (K), and they highlight the possibility that the requirement to localize the changed target is what engenders this enhancement. Through a final series of four experiments, we introduced and tested an alternative flicker change detection method that also requires the observer to localize the changing target and that generates, from response times, an estimate of VWM storage capacity (K). We found that estimates of K from the flicker task correlated with estimates from the traditional one-shot task and also had high reliability and consistency. We highlight the flicker method's ability to estimate executive functions as well as VWM storage capacity, and discuss the potential for measuring multiple abilities with the one-shot and flicker tasks.
Localization of transient gravitational wave sources: beyond triangulation
NASA Astrophysics Data System (ADS)
Fairhurst, Stephen
2018-05-01
Rapid, accurate localization of gravitational wave transient events has proved critical to successful electromagnetic followup. In previous papers we have shown that localization estimates can be obtained through triangulation based on timing information at the detector sites. In practice, detailed parameter estimation routines use additional information and provide better localization than is possible based on timing information alone. In this paper, we extend the timing based localization approximation to incorporate consistency of observed signals with two gravitational wave polarizations, and an astrophysically motivated distribution of sources. Both of these provide significant improvements to source localization, allowing many sources to be restricted to a single sky region, with an area 40% smaller than predicted by timing information alone. Furthermore, we show that the vast majority of sources will be reconstructed to be circularly polarized or, equivalently, indistinguishable from face-on.
Paretti, Nicholas V.; Kennedy, Jeffrey R.; Turney, Lovina A.; Veilleux, Andrea G.
2014-01-01
The regional regression equations were integrated into the U.S. Geological Survey's StreamStats program. The StreamStats program is a national map-based web application that allows the public to easily access published flood-frequency and basin characteristic statistics. The interactive web application allows a user to select a point within a watershed (gaged or ungaged) and retrieve flood-frequency estimates derived from the current regional regression equations and geographic information system data within the selected basin. StreamStats provides users with an efficient and accurate means for retrieving the most up-to-date flood-frequency and basin characteristic data. StreamStats is intended to provide consistent statistics, minimize user error, and reduce the need for large datasets and costly geographic information system software.
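A hypothetical illustration of evaluating a regional regression equation of the kind StreamStats serves: the functional form is the usual power law of basin characteristics, but the coefficients and variables below are made up, not the published Arizona equations.

```python
def q100_cfs(drainage_area_mi2: float, mean_annual_precip_in: float) -> float:
    # Illustrative power-law regional regression for the 100-year peak flow;
    # coefficients a, b, c are invented placeholders, not published values.
    a, b, c = 150.0, 0.60, 0.35
    return a * drainage_area_mi2 ** b * mean_annual_precip_in ** c

print(f"Q100 ~ {q100_cfs(25.0, 18.0):,.0f} cfs")
```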
Global gridded anthropogenic emissions inventory of carbonyl sulfide
NASA Astrophysics Data System (ADS)
Zumkehr, Andrew; Hilton, Tim W.; Whelan, Mary; Smith, Steve; Kuai, Le; Worden, John; Campbell, J. Elliott
2018-06-01
Atmospheric carbonyl sulfide (COS or OCS) is the most abundant sulfur containing gas in the troposphere and is an atmospheric tracer for the carbon cycle. Gridded inventories of global anthropogenic COS are used for interpreting global COS measurements. However, previous gridded anthropogenic data are a climatological estimate based on input data that is over three decades old and are not representative of current conditions. Here we develop a new gridded data set of global anthropogenic COS sources that includes more source sectors than previously available and uses the most current emissions factors and industry activity data as input. Additionally, the inventory is provided as annually varying estimates from years 1980-2012 and employs a source specific spatial scaling procedure. We estimate a global source in year 2012 of 406 Gg S y-1 (range of 223-586 Gg S y-1), which is highly concentrated in China and is twice as large as the previous gridded inventory. Our large upward revision in the bottom-up estimate of the source is consistent with a recent top-down estimate based on air-monitoring and Antarctic firn data. Furthermore, our inventory time trends, including a decline in the 1990's and growth after the year 2000, are qualitatively consistent with trends in atmospheric data. Finally, similarities between the spatial distribution in this inventory and remote sensing data suggest that the anthropogenic source could potentially play a role in explaining a missing source in the global COS budget.
Shallow Water Reverberation Measurement and Prediction
1994-06-01
The temporal signal processing consisted of a short-time Fourier transform spectral estimation method applied to data from a single hydrophone. The three-dimensional Hamiltonian Acoustic Ray-tracing Program for the Ocean (HARPO) was used as the primary propagation modeling tool. The report summarizes the work completed, discusses lessons learned, and provides advice regarding future work to refine the present study.
Borgen, Nicolai T
2014-11-01
This paper addresses the recent discussion on confounding in the returns to college quality literature using the Norwegian case. The main advantage of studying Norway is the quality of the data. Norwegian administrative data provide information on college applications, family relations and a rich set of control variables for all Norwegian citizens applying to college between 1997 and 2004 (N = 141,319) and their succeeding wages between 2003 and 2010 (676,079 person-year observations). With these data, this paper uses a subset of the models that have rendered mixed findings in the literature in order to investigate to what extent confounding biases the returns to college quality. I compare estimates obtained using standard regression models to estimates obtained using the self-revelation model of Dale and Krueger (2002), a sibling fixed effects model and the instrumental variable model used by Long (2008). Using these methods, I consistently find that returns to college quality increase over the course of students' work careers, with positive returns appearing only later in the career. I conclude that the standard regression estimate provides a reasonable estimate of the returns to college quality. Copyright © 2014 Elsevier Inc. All rights reserved.
The volume and mean depth of Earth's lakes
NASA Astrophysics Data System (ADS)
Cael, B. B.; Heathcote, A. J.; Seekell, D. A.
2017-01-01
Global lake volume estimates are scarce, highly variable, and poorly documented. We developed a rigorous method for estimating global lake depth and volume based on the Hurst coefficient of Earth's surface, which provides a mechanistic connection between lake area and volume. Volume-area scaling based on the Hurst coefficient is accurate and consistent when applied to lake data sets spanning diverse regions. We applied these relationships to a global lake area census to estimate global lake volume and depth. The volume of Earth's lakes is 199,000 km3 (95% confidence interval 196,000-202,000 km3). This volume is in the range of historical estimates (166,000-280,000 km3), but the overall mean depth of 41.8 m (95% CI 41.2-42.4 m) is significantly lower than previous estimates (62-151 m). These results highlight and constrain the relative scarcity of lake waters in the hydrosphere and have implications for the role of lakes in global biogeochemical cycles.
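The general approach can be sketched as a log-log regression of volume on area for surveyed lakes, then applied to an area census; the exponent here is fit to toy data rather than derived from the Hurst coefficient as in the paper, and all values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
area = 10 ** rng.uniform(-2, 3, 500)                            # km^2, surveyed lakes
volume = 0.03 * area ** 1.2 * np.exp(rng.normal(0, 0.3, 500))   # toy truth, km^3

b, log_c = np.polyfit(np.log(area), np.log(volume), 1)          # power-law fit
c = np.exp(log_c)

census_areas = 10 ** rng.uniform(-3, 4, 10_000)                 # global area census
total_volume = np.sum(c * census_areas ** b)                    # km^3
mean_depth_m = 1000 * total_volume / np.sum(census_areas)       # V/A, km -> m
print(f"exponent={b:.2f}, volume={total_volume:.0f} km^3, mean depth={mean_depth_m:.1f} m")
```

Note the overall mean depth is the area-weighted ratio of totals, which is why it can come out much shallower than the typical depth of individual surveyed lakes.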
Robust w-Estimators for Cryo-EM Class Means.
Huang, Chenxi; Tagare, Hemant D
2016-02-01
A critical step in cryogenic electron microscopy (cryo-EM) image analysis is to calculate the average of all images aligned to a projection direction. This average, called the class mean, improves the signal-to-noise ratio in single-particle reconstruction. The averaging step is often compromised because of the outlier images of ice, contaminants, and particle fragments. Outlier detection and rejection in the majority of current cryo-EM methods are done using cross-correlation with a manually determined threshold. Empirical assessment shows that the performance of these methods is very sensitive to the threshold. This paper proposes an alternative: a w-estimator of the average image, which is robust to outliers and which does not use a threshold. Various properties of the estimator, such as consistency and the influence function, are investigated. An extension of the estimator to images with different contrast transfer functions is also provided. Experiments with simulated and real cryo-EM images show that the proposed estimator performs quite well in the presence of outliers.
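A hedged sketch of a w-estimator for a class mean: iteratively reweighted averaging in which each image's weight decays with its distance from the current mean, so outliers are down-weighted smoothly without any hard threshold. The Huber-type weight function and MAD scale rule are generic robust-statistics choices, not the paper's specific estimator.

```python
import numpy as np

def robust_class_mean(images: np.ndarray, n_iter: int = 20) -> np.ndarray:
    """images: (N, H, W) aligned particle images; returns a robust mean."""
    mean = images.mean(axis=0)
    for _ in range(n_iter):
        resid = np.linalg.norm((images - mean).reshape(len(images), -1), axis=1)
        med = np.median(resid)
        s = 1.4826 * np.median(np.abs(resid - med)) + 1e-12   # MAD scale
        r = (resid - med) / s
        w = np.where(r <= 1.345, 1.0, 1.345 / np.maximum(r, 1e-12))  # Huber weights
        mean = np.tensordot(w, images, axes=1) / w.sum()      # weighted average
    return mean

imgs = np.random.default_rng(4).normal(0, 1, (100, 32, 32))
imgs[:5] += 20.0                        # five gross outliers (e.g., contaminants)
print(robust_class_mean(imgs).std())    # outliers barely perturb the mean
```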
Input-output model for MACCS nuclear accident impacts estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Outkin, Alexander V.; Bixler, Nathan E.; Vargas, Vanessa N
Since the original economic model for MACCS was developed, better quality economic data (as well as the tools to gather and process it) and better computational capabilities have become available. The update of the economic impacts component of the MACCS legacy model will provide improved estimates of business disruptions through the use of Input-Output based economic impact estimation. This paper presents an updated MACCS model, based on Input-Output methodology, in which economic impacts are calculated using the Regional Economic Accounting analysis tool (REAcct) created at Sandia National Laboratories. This new GDP-based model allows quick and consistent estimation of gross domestic product (GDP) losses due to nuclear power plant accidents. This paper outlines the steps taken to combine the REAcct Input-Output-based model with the MACCS code, describes the GDP loss calculation, and discusses the parameters and modeling assumptions necessary for the estimation of long-term effects of nuclear power plant accidents.
An estimate of the number of tropical tree species.
Slik, J W Ferry; Arroyo-Rodríguez, Víctor; Aiba, Shin-Ichiro; Alvarez-Loayza, Patricia; Alves, Luciana F; Ashton, Peter; Balvanera, Patricia; Bastian, Meredith L; Bellingham, Peter J; van den Berg, Eduardo; Bernacci, Luis; da Conceição Bispo, Polyanna; Blanc, Lilian; Böhning-Gaese, Katrin; Boeckx, Pascal; Bongers, Frans; Boyle, Brad; Bradford, Matt; Brearley, Francis Q; Breuer-Ndoundou Hockemba, Mireille; Bunyavejchewin, Sarayudh; Calderado Leal Matos, Darley; Castillo-Santiago, Miguel; Catharino, Eduardo L M; Chai, Shauna-Lee; Chen, Yukai; Colwell, Robert K; Chazdon, Robin L; Clark, Connie; Clark, David B; Clark, Deborah A; Culmsee, Heike; Damas, Kipiro; Dattaraja, Handanakere S; Dauby, Gilles; Davidar, Priya; DeWalt, Saara J; Doucet, Jean-Louis; Duque, Alvaro; Durigan, Giselda; Eichhorn, Karl A O; Eisenlohr, Pedro V; Eler, Eduardo; Ewango, Corneille; Farwig, Nina; Feeley, Kenneth J; Ferreira, Leandro; Field, Richard; de Oliveira Filho, Ary T; Fletcher, Christine; Forshed, Olle; Franco, Geraldo; Fredriksson, Gabriella; Gillespie, Thomas; Gillet, Jean-François; Amarnath, Giriraj; Griffith, Daniel M; Grogan, James; Gunatilleke, Nimal; Harris, David; Harrison, Rhett; Hector, Andy; Homeier, Jürgen; Imai, Nobuo; Itoh, Akira; Jansen, Patrick A; Joly, Carlos A; de Jong, Bernardus H J; Kartawinata, Kuswata; Kearsley, Elizabeth; Kelly, Daniel L; Kenfack, David; Kessler, Michael; Kitayama, Kanehiro; Kooyman, Robert; Larney, Eileen; Laumonier, Yves; Laurance, Susan; Laurance, William F; Lawes, Michael J; Amaral, Ieda Leao do; Letcher, Susan G; Lindsell, Jeremy; Lu, Xinghui; Mansor, Asyraf; Marjokorpi, Antti; Martin, Emanuel H; Meilby, Henrik; Melo, Felipe P L; Metcalfe, Daniel J; Medjibe, Vincent P; Metzger, Jean Paul; Millet, Jerome; Mohandass, D; Montero, Juan C; de Morisson Valeriano, Márcio; Mugerwa, Badru; Nagamasu, Hidetoshi; Nilus, Reuben; Ochoa-Gaona, Susana; Onrizal; Page, Navendu; Parolin, Pia; Parren, Marc; Parthasarathy, Narayanaswamy; Paudel, Ekananda; Permana, Andrea; Piedade, Maria T F; Pitman, Nigel C A; Poorter, Lourens; Poulsen, Axel D; Poulsen, John; Powers, Jennifer; Prasad, Rama C; Puyravaud, Jean-Philippe; Razafimahaimodison, Jean-Claude; Reitsma, Jan; Dos Santos, João Roberto; Roberto Spironello, Wilson; Romero-Saltos, Hugo; Rovero, Francesco; Rozak, Andes Hamuraby; Ruokolainen, Kalle; Rutishauser, Ervan; Saiter, Felipe; Saner, Philippe; Santos, Braulio A; Santos, Fernanda; Sarker, Swapan K; Satdichanh, Manichanh; Schmitt, Christine B; Schöngart, Jochen; Schulze, Mark; Suganuma, Marcio S; Sheil, Douglas; da Silva Pinheiro, Eduardo; Sist, Plinio; Stevart, Tariq; Sukumar, Raman; Sun, I-Fang; Sunderland, Terry; Suresh, H S; Suzuki, Eizi; Tabarelli, Marcelo; Tang, Jangwei; Targhetta, Natália; Theilade, Ida; Thomas, Duncan W; Tchouto, Peguy; Hurtado, Johanna; Valencia, Renato; van Valkenburg, Johan L C H; Van Do, Tran; Vasquez, Rodolfo; Verbeeck, Hans; Adekunle, Victor; Vieira, Simone A; Webb, Campbell O; Whitfeld, Timothy; Wich, Serge A; Williams, John; Wittmann, Florian; Wöll, Hannsjoerg; Yang, Xiaobo; Adou Yao, C Yves; Yap, Sandra L; Yoneda, Tsuyoshi; Zahawi, Rakan A; Zakaria, Rahmad; Zang, Runguo; de Assis, Rafael L; Garcia Luize, Bruno; Venticinque, Eduardo M
2015-06-16
The high species richness of tropical forests has long been recognized, yet there remains substantial uncertainty regarding the actual number of tropical tree species. Using a pantropical tree inventory database from closed canopy forests, consisting of 657,630 trees belonging to 11,371 species, we use a fitted value of Fisher's alpha and an approximate pantropical stem total to estimate the minimum number of tropical forest tree species to fall between ∼ 40,000 and ∼ 53,000, i.e., at the high end of previous estimates. Contrary to common assumption, the Indo-Pacific region was found to be as species-rich as the Neotropics, with both regions having a minimum of ∼ 19,000-25,000 tree species. Continental Africa is relatively depauperate with a minimum of ∼ 4,500-6,000 tree species. Very few species are shared among the African, American, and the Indo-Pacific regions. We provide a methodological framework for estimating species richness in trees that may help refine species richness estimates of tree-dependent taxa.
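The log-series step admits a one-line formula: given a fitted Fisher's alpha and a pantropical stem total N, expected richness is S = alpha * ln(1 + N/alpha). The alpha and N below are round illustrative values for demonstration, not the paper's fitted quantities.

```python
import numpy as np

def fisher_richness(alpha: float, n_stems: float) -> float:
    # Fisher's log-series: expected species count among n_stems individuals.
    return alpha * np.log(1.0 + n_stems / alpha)

alpha = 2500.0     # illustrative fitted Fisher's alpha (not the paper's value)
n_stems = 3e11     # illustrative pantropical stem total (not the paper's value)
print(f"{fisher_richness(alpha, n_stems):,.0f} species")
```

Because S grows only logarithmically in N, the estimate is far more sensitive to the fitted alpha than to the assumed stem total.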
NASA Technical Reports Server (NTRS)
Hall, W. E., Jr.; Gupta, N. K.; Hansen, R. S.
1978-01-01
An integrated approach to rotorcraft system identification is described. This approach consists of sequential application of (1) data filtering to estimate states of the system and sensor errors, (2) model structure estimation to isolate significant model effects, and (3) parameter identification to quantify the coefficients of the model. An input design algorithm is described which can be used to design control inputs which maximize parameter estimation accuracy. Details of each aspect of the rotorcraft identification approach are given. Examples of both simulated and actual flight data processing are given to illustrate each phase of processing. The procedure is shown to provide a means of calibrating sensor errors in flight data, quantifying high-order state-variable models from the flight data, and consequently computing related stability and control design models.
NASA Astrophysics Data System (ADS)
Camacho, Fernando; Sánchez, Jorge; Lacaze, Roselyne; Weiss, Marie; Baret, Frédéric; Verger, Aleixandre; Smets, Bruno; Latorre, Consuelo
2016-04-01
The Copernicus Global Land Service (http://land.copernicus.eu/global/) is delivering surface biophysical products derived from satellite observations at global scale. Fifteen years of LAI, FAPAR, and vegetation cover (FCOVER) products among other indicators have been generated from SPOT/VGT observations at 1 km spatial resolution (named GEOV1, GEOV2). The continuity of the service since the end of the SPOT/VGT mission (May 2014) is achieved thanks to PROBA-V, which offers observations at a finer spatial resolution (1/3 km). In the context of the FP7 ImagineS project (http://fp7-imagines.eu/), a new algorithm (Weiss et al., this conference), adapted to PROBA-V spectral and spatial characteristics, was designed to provide vegetation products (named GEOV3) as consistent as possible with GEOV1 and GEOV2 whilst providing the near real-time estimates required by some users. It is based on neural network techniques completed with a data filtering and smoothing process. The near real-time estimates are improved through a consolidation period of six dekads during which observations are accumulated every new dekad. The validation of these products is mandatory to provide associated uncertainties for efficient use of this source of information. This work presents an early validation over Europe of the GEOV3 LAI, FAPAR and FCOVER products derived from PROBA-V observations at 333 m and 10-day frequency during the year 2014. The validation has been conducted in agreement with the CEOS LPV best practices for global LAI products. Several performance criteria were investigated for the several GEOV3 modes (near real-time, and successive consolidated estimates) including completeness, spatial and temporal consistency, precision and accuracy. The spatial and temporal consistency was evaluated using as reference PROBA-V GEOV1 and MODC5 1 km similar products over a network of 153 validation sites in Europe (EUVAL). The accuracy was assessed with concomitant data collected in the ImagineS project over six cropland sites located in Spain, Italy, Ukraine and Tunisia, and non-concomitant data over forest sites made available through the CEOS OLIVE cal/val tool. The ground data were estimated from digital hemispherical photography following a well-established protocol over a sampling unit, and then sampling unit values were up-scaled using Landsat-8 imagery and a robust linear regression algorithm. The accuracy was estimated at 333 m over regions of 20x20 km2, and at 1 km over areas of 3x3 km2 in order to compare with GEOV1 and MODIS satellite products. Our results show that GEOV3 presents good quality in most of the examined criteria, although the near real-time estimates show much lower precision and temporal stability in some biomes. However, after only two dekads the GEOV3 estimate becomes very stable. We observed a slight positive bias at the start of the season, mainly in croplands and deciduous forest, which could be introduced by the smoothing process. The comparison with ground measurements showed that, overall, the accuracy was good for LAI (RMSE=0.7) and FAPAR (RMSE=0.05) with no bias in the estimates, whilst FCOVER shows a systematic overestimation of about 0.12 units.
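The accuracy criteria mentioned above (bias, RMSE, and correlation between product retrievals and upscaled ground values at matched sites) reduce to a few lines; the arrays below are placeholders, not ImagineS measurements.

```python
import numpy as np

sat_lai = np.array([1.2, 2.8, 3.5, 0.9, 4.1])   # product values at sites
gnd_lai = np.array([1.0, 2.5, 3.8, 1.1, 3.6])   # upscaled ground reference

bias = np.mean(sat_lai - gnd_lai)
rmse = np.sqrt(np.mean((sat_lai - gnd_lai) ** 2))
r = np.corrcoef(sat_lai, gnd_lai)[0, 1]
print(f"bias={bias:.2f}, RMSE={rmse:.2f}, r={r:.2f}")
```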
A Novel Estimator for the Rate of Information Transfer by Continuous Signals
Takalo, Jouni; Ignatova, Irina; Weckström, Matti; Vähäsöyrinki, Mikko
2011-01-01
The information transfer rate provides an objective and rigorous way to quantify how much information is being transmitted through a communications channel whose input and output consist of time-varying signals. However, current estimators of information content in continuous signals are typically based on assumptions about the system's linearity and signal statistics, or they require prohibitive amounts of data. Here we present a novel information rate estimator without these limitations that is also optimized for computational efficiency. We validate the method with a simulated Gaussian information channel and demonstrate its performance with two example applications. Information transfer between the input and output signals of a nonlinear system is analyzed using a sensory receptor neuron as the model system. Then, a climate data set is analyzed to demonstrate that the method can be applied to a system based on two outputs generated by interrelated random processes. These analyses also demonstrate that the new method offers consistent performance in situations where classical methods fail. In addition to these examples, the method is applicable to a wide range of continuous time series commonly observed in the natural sciences, economics and engineering. PMID:21494562
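For context, the classical Gaussian-channel benchmark such estimators are validated against can be sketched via spectral coherence, R = -∫ log2(1 - γ²(f)) df, which is exact for jointly Gaussian signals and only a bound otherwise (one motivation for a less assumption-laden estimator). The signals below are synthetic.

```python
import numpy as np
from scipy.signal import coherence

fs = 1000.0
rng = np.random.default_rng(5)
x = rng.normal(size=100_000)                                   # channel input
y = np.convolve(x, np.ones(5) / 5, mode="same") + 0.5 * rng.normal(size=x.size)

f, gamma2 = coherence(x, y, fs=fs, nperseg=1024)               # coherence spectrum
rate = -np.trapz(np.log2(1.0 - np.minimum(gamma2, 0.999999)), f)  # bits/s
print(f"information rate ~ {rate:.0f} bits/s")
```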
Rapid and accurate estimation of release conditions in the javelin throw.
Hubbard, M; Alaways, L W
1989-01-01
We have developed a system to measure initial conditions in the javelin throw rapidly enough to be used by the thrower for feedback in performance improvement. The system consists of three subsystems whose main tasks are: (A) acquisition of automatically digitized high speed (200 Hz) video x, y position data for the first 0.1-0.2 s of the javelin flight after release, (B) estimation of five javelin release conditions from the x, y position data, and (C) graphical presentation to the thrower of these release conditions and a simulation of the subsequent flight, together with optimal conditions and flight for the same release velocity. The estimation scheme relies on a simulation model and is at least an order of magnitude more accurate than previously reported measurements of javelin release conditions. The system provides, for the first time ever in any throwing event, the ability to critique nearly instantly, in a precise, quantitative manner, the crucial factors in the throw which determine the range. This should be expected to lead to much greater control and consistency of throwing variables by athletes who use the system, and could even lead to an evolution of new throwing techniques.
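A hedged sketch of step (B): recovering release conditions from early-flight x, y samples by nonlinear least squares. For brevity it fits four parameters under a drag-free ballistic model, whereas the authors estimate five conditions with a full javelin flight simulation; all numbers are synthetic.

```python
# Illustrative fit of release conditions to early-flight position data.
import numpy as np
from scipy.optimize import least_squares

g = 9.81
t = np.arange(0, 0.2, 1 / 200.0)            # 200 Hz samples over 0.2 s

def model(p, t):
    x0, y0, v, theta = p                     # release position, speed, angle
    return np.stack([x0 + v * np.cos(theta) * t,
                     y0 + v * np.sin(theta) * t - 0.5 * g * t**2])

# Hypothetical digitized positions (true release: 28 m/s at 35 degrees).
true = np.array([0.0, 1.8, 28.0, np.radians(35.0)])
data = model(true, t) + 0.01 * np.random.default_rng(1).standard_normal((2, t.size))

res = least_squares(lambda p: (model(p, t) - data).ravel(),
                    x0=[0.0, 2.0, 25.0, 0.5])
print("Estimated release speed %.1f m/s, angle %.1f deg"
      % (res.x[2], np.degrees(res.x[3])))
```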
Copula-based prediction of economic movements
NASA Astrophysics Data System (ADS)
García, J. E.; González-López, V. A.; Hirsh, I. D.
2016-06-01
In this paper we model the discretized returns of two paired time series, the BM&FBOVESPA Dividend Index and the BM&FBOVESPA Public Utilities Index, using multivariate Markov models. The discretization corresponds to three categories: high losses, high profits, and the complementary periods of the series. In technical terms, the maximal memory that can be considered for a Markov model can be derived from the size of the alphabet and dataset. The number of parameters needed to specify a discrete multivariate Markov chain grows exponentially with the order and dimension of the chain. In this case the size of the database is not large enough for a consistent estimation of the model. We apply a strategy to estimate a multivariate process with an order greater than the order achievable using standard procedures. The new strategy consists of obtaining a partition of the state space, constructed from a combination of the partitions corresponding to the two marginal processes and the partition corresponding to the multivariate Markov chain. In order to estimate the transition probabilities, all the partitions are linked using a copula. In our application this strategy provides a significant improvement in the movement predictions.
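To make the setup concrete, the sketch below discretizes two synthetic return series into the three categories and tallies first-order joint transition probabilities over the 9-state alphabet. The copula-linked partition strategy that is the paper's actual contribution is not reproduced; the thresholds and data are invented.

```python
# Illustrative discretization and joint Markov transition estimation.
import numpy as np

rng = np.random.default_rng(2)
ret_a, ret_b = rng.standard_normal(500), rng.standard_normal(500)

def discretize(r, lo=-1.0, hi=1.0):
    # 0: high losses, 1: complementary periods, 2: high profits
    return np.digitize(r, [lo, hi])

a, b = discretize(ret_a), discretize(ret_b)
state = a * 3 + b                            # joint alphabet of size 9

counts = np.zeros((9, 9))
for s, s_next in zip(state[:-1], state[1:]):
    counts[s, s_next] += 1
trans = counts / counts.sum(axis=1, keepdims=True).clip(min=1)
print(trans.round(2))
```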
NASA Astrophysics Data System (ADS)
Kar, Soummya; Moura, José M. F.
2011-08-01
The paper considers gossip distributed estimation of a (static) distributed random field (a.k.a. large scale unknown parameter vector) observed by sparsely interconnected sensors, each of which only observes a small fraction of the field. We consider linear distributed estimators whose structure combines the information flow among sensors (the consensus term resulting from the local gossiping exchange among sensors when they are able to communicate) and the information gathering measured by the sensors (the sensing or innovations term). This leads to mixed time scale algorithms: one time scale associated with the consensus and the other with the innovations. The paper establishes a distributed observability condition (global observability plus mean connectedness) under which the distributed estimates are consistent and asymptotically normal. We introduce the distributed notion equivalent to the (centralized) Fisher information rate, which is a bound on the mean square error reduction rate of any distributed estimator; we show that under the appropriate modeling and structural network communication conditions (gossip protocol) the distributed gossip estimator attains this distributed Fisher information rate, asymptotically achieving the performance of the optimal centralized estimator. Finally, we study the behavior of the distributed gossip estimator when the measurements fade (noise variance grows) with time; in particular, we characterize the maximum rate at which the noise variance can grow while the distributed estimator remains consistent, showing that, as long as the centralized estimator is consistent, the distributed estimator remains consistent.
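A minimal sketch of a mixed time-scale consensus + innovations recursion of the kind analyzed above, on a ring network of five sensors, each observing one noisy linear functional of a three-dimensional field. The graph, gain schedules, and noise levels are illustrative choices, not the paper's.

```python
# Consensus + innovations distributed estimation sketch (assumptions noted above).
import numpy as np

rng = np.random.default_rng(3)
theta = np.array([1.0, -2.0, 0.5])           # unknown static field
N, T = 5, 2000

H = [rng.standard_normal((1, 3)) for _ in range(N)]  # local observation maps
x = [np.zeros(3) for _ in range(N)]                  # local estimates
neighbors = {i: [(i - 1) % N, (i + 1) % N] for i in range(N)}  # ring graph

for t in range(1, T + 1):
    beta, alpha = 1.0 / t**0.6, 1.0 / t      # consensus decays slower: mixed scales
    y = [Hi @ theta + 0.1 * rng.standard_normal(1) for Hi in H]
    x_new = []
    for i in range(N):
        consensus = sum(x[j] - x[i] for j in neighbors[i])
        innovation = H[i].T @ (y[i] - H[i] @ x[i])
        x_new.append(x[i] + beta * consensus + alpha * innovation)
    x = x_new

print("Sensor-0 estimate:", np.round(x[0], 2))  # should approach theta
```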
The Everglades Depth Estimation Network (EDEN) for Support of Ecological and Biological Assessments
Telis, Pamela A.
2006-01-01
The Everglades Depth Estimation Network (EDEN) is an integrated network of real-time water-level monitoring, ground-elevation modeling, and water-surface modeling that provides scientists and managers with current (1999-present), online water-depth information for the entire freshwater portion of the Greater Everglades. Presented on a 400-meter grid spacing, EDEN offers a consistent and documented dataset that can be used by scientists and managers to (1) guide large-scale field operations, (2) integrate hydrologic and ecological responses, and (3) support biological and ecological assessments that measure ecosystem responses to the implementation of the Comprehensive Everglades Restoration Plan.
Estimation of arterial baroreflex sensitivity in relation to carotid artery stiffness.
Lipponen, Jukka A; Tarvainen, Mika P; Laitinen, Tomi; Karjalainen, Pasi A; Vanninen, Joonas; Koponen, Timo; Lyyra-Laitinen, Tiina
2012-01-01
Arterial baroreflex has a significant role in regulating blood pressure. It is known that increased stiffness of the carotid sinus affects mechanotransduction of baroreceptors and therefore limits the baroreceptors' capability to detect changes in blood pressure. By using a high resolution ultrasound video signal and continuous measurement of electrocardiogram (ECG) and blood pressure, it is possible to define the elastic properties of the artery simultaneously with baroreflex sensitivity parameters. In this paper a dataset consisting of 38 subjects, 11 diabetics and 27 healthy controls, was analyzed. The use of diabetic and healthy test subjects gives a wide range of arteries with different elasticity properties, providing an opportunity to validate baroreflex and artery stiffness estimation methods.
NASA Astrophysics Data System (ADS)
Min, Kyoungwon; Farah, Annette E.; Lee, Seung Ryeol; Lee, Jong Ik
2017-01-01
Shock conditions of Martian meteorites provide crucial information about ejection dynamics and original features of the Martian rocks. To better constrain equilibrium shock temperatures (Tequi-shock) of Martian meteorites, we investigated (U-Th)/He systematics of moderately shocked (Zagami) and intensively shocked (ALHA77005) Martian meteorites. Multiple phosphate aggregates from Zagami and ALHA77005 yielded overall (U-Th)/He ages of 92.2 ± 4.4 Ma (2σ) and 8.4 ± 1.2 Ma, respectively. These ages correspond to fractional losses of 0.49 ± 0.03 (Zagami) and 0.97 ± 0.01 (ALHA77005), assuming that the ejection-related shock event at ∼3 Ma is solely responsible for diffusive helium loss since crystallization. For He diffusion modeling, the diffusion domain radius is estimated based on detailed examination of fracture patterns in phosphates using a scanning electron microscope. For Zagami, the diffusion domain radius is estimated to be ∼2-9 μm, which is generally consistent with calculations from isothermal heating experiments (1-4 μm). For ALHA77005, a diffusion domain radius of ∼4-20 μm is estimated. Using the newly constrained (U-Th)/He data, diffusion domain radii, and other previously estimated parameters, the conductive cooling models yield Tequi-shock estimates of 360-410 °C and 460-560 °C for Zagami and ALHA77005, respectively. According to the sensitivity test, the estimated Tequi-shock values are relatively robust to input parameters. The Tequi-shock estimates for Zagami are more robust than those for ALHA77005, primarily because Zagami yielded an intermediate fHe value (0.49) compared to ALHA77005 (0.97). For the less intensively shocked Zagami, the He diffusion-based Tequi-shock estimates (this study) are significantly higher than expected from previously reported Tpost-shock values. For the intensively shocked ALHA77005, the two independent approaches yielded generally consistent results. Using two other examples of previously studied Martian meteorites (ALHA84001 and Los Angeles), we compared Tequi-shock and Tpost-shock estimates. For intensively shocked meteorites (ALHA77005, Los Angeles), the He diffusion-based approach yields Tequi-shock values slightly higher than, or consistent with, estimates from Tpost-shock, and the discrepancy between the two methods decreases as the intensity of shock increases. The reason for the discrepancy between the two methods, particularly for less intensively shocked meteorites (Zagami, ALHA84001), remains to be resolved, but we prefer the He diffusion-based approach because its Tequi-shock estimates are relatively robust to input parameters.
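For readers unfamiliar with the fractional-loss step, the sketch below inverts the standard spherical-geometry approximation (after Fechtig and Kalbitzer) relating fractional He loss f to the dimensionless diffusion parameter Dt/a². This is a generic illustration of the relation, not the paper's conductive-cooling model.

```python
# Invert fractional He loss for Dt/a^2 under a spherical-diffusion approximation.
import numpy as np
from scipy.optimize import brentq

def frac_loss(Dt_a2):
    """Fractional He loss for a sphere; approximation valid to roughly f < 0.85."""
    return 6.0 * np.sqrt(Dt_a2 / np.pi) - 3.0 * Dt_a2

# Invert the measured fractional loss (e.g., f = 0.49 for Zagami) for Dt/a^2.
f_meas = 0.49
Dt_a2 = brentq(lambda z: frac_loss(z) - f_meas, 1e-8, 0.3)
print(f"Dt/a^2 ~ {Dt_a2:.3e} for f = {f_meas}")
```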
Global observations of tropospheric BrO columns using GOME-2 satellite data
NASA Astrophysics Data System (ADS)
Theys, N.; van Roozendael, M.; Hendrick, F.; Yang, X.; de Smedt, I.; Richter, A.; Begoin, M.; Errera, Q.; Johnston, P. V.; Kreher, K.; de Mazière, M.
2010-11-01
Measurements from the GOME-2 satellite instrument have been analyzed for tropospheric BrO using a residual technique that combines measured BrO columns and estimates of the stratospheric BrO content from a climatological approach driven by O3 and NO2 observations. Comparisons between the GOME-2 results and BrO vertical columns derived from correlative ground-based and SCIAMACHY nadir observations present a good level of consistency. We show that the adopted technique enables separation of the stratospheric and tropospheric fractions of the measured total BrO columns and allows quantitative study of the BrO plumes in polar regions. While some satellite observed plumes of enhanced BrO can be explained by stratospheric descending air, we show that most BrO hotspots are of tropospheric origin, although they are often associated with regions of low tropopause heights as well. Elaborating on simulations using the p-TOMCAT tropospheric chemical transport model, this result is found to be consistent with the mechanism of bromine release through sea salt aerosol production during blowing snow events. Outside polar regions, evidence is provided for a global tropospheric BrO background with columns of 1-3×10^13 molec/cm2, consistent with previous estimates.
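The residual separation has the generic form below, written in common DOAS notation (the symbols are illustrative, not necessarily the authors'): the tropospheric vertical column follows from the total slant column after removing the stratospheric contribution and dividing by a tropospheric air mass factor.

```latex
% Generic residual-technique separation (notation illustrative):
V_{\mathrm{BrO}}^{\mathrm{trop}} =
  \frac{S_{\mathrm{BrO}}^{\mathrm{tot}}
        - A^{\mathrm{strat}}\, V_{\mathrm{BrO}}^{\mathrm{strat}}}
       {A^{\mathrm{trop}}}
```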
Analytic Intermodel Consistent Modeling of Volumetric Human Lung Dynamics.
Ilegbusi, Olusegun; Seyfi, Behnaz; Neylon, John; Santhanam, Anand P
2015-10-01
The human lung undergoes breathing-induced deformation in the form of inhalation and exhalation. Modeling the dynamics is numerically complicated by the lack of information on lung elastic behavior and on fluid-structure interactions between air and the tissue. A mathematical method is developed to integrate deformation results from deformable image registration (DIR) and physics-based modeling approaches in order to represent consistent volumetric lung dynamics. The computational fluid dynamics (CFD) simulation assumes the lung is a poro-elastic medium with spatially distributed elastic properties. Simulation is performed on a 3D lung geometry reconstructed from a four-dimensional computed tomography (4DCT) dataset of a human subject. The heterogeneous Young's modulus (YM) is estimated from a linear elastic deformation model with the same lung geometry and 4D lung DIR. The deformation obtained from the CFD is then coupled with the displacement obtained from the 4D lung DIR by means of the Tikhonov regularization (TR) algorithm. The numerical results include 4DCT registration, CFD, and optimal displacement data which collectively provide a consistent estimate of the volumetric lung dynamics. The fusion method is validated by comparing the optimal displacement with the results obtained from the 4DCT registration.
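A toy version of the fusion step: for a pointwise quadratic Tikhonov balance between the DIR-derived and CFD-derived displacements, the minimizer has a closed form. The paper's TR operator on full 3D fields is more elaborate; the values and the weight lam here are invented.

```python
# Pointwise Tikhonov-style fusion of two displacement estimates (a sketch).
import numpy as np

def tikhonov_fuse(u_dir, u_cfd, lam):
    """Minimize ||u - u_dir||^2 + lam * ||u - u_cfd||^2; closed-form minimizer."""
    return (u_dir + lam * u_cfd) / (1.0 + lam)

u_dir = np.array([1.0, 2.0, 2.5])   # registration-derived displacement (mm)
u_cfd = np.array([1.2, 1.8, 2.9])   # physics-based displacement (mm)
print(tikhonov_fuse(u_dir, u_cfd, lam=0.5))
```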
A Nonparametric Approach to Estimate Classification Accuracy and Consistency
ERIC Educational Resources Information Center
Lathrop, Quinn N.; Cheng, Ying
2014-01-01
When cut scores for classifications occur on the total score scale, popular methods for estimating classification accuracy (CA) and classification consistency (CC) require assumptions about a parametric form of the test scores or about a parametric response model, such as item response theory (IRT). This article develops an approach to estimate CA…
An evaluation of methods for estimating decadal stream loads
NASA Astrophysics Data System (ADS)
Lee, Casey J.; Hirsch, Robert M.; Schwarz, Gregory E.; Holtschlag, David J.; Preston, Stephen D.; Crawford, Charles G.; Vecchia, Aldo V.
2016-11-01
Effective management of water resources requires accurate information on the mass, or load, of water-quality constituents transported from upstream watersheds to downstream receiving waters. Despite this need, no single method has been shown to consistently provide accurate load estimates among different water-quality constituents, sampling sites, and sampling regimes. We evaluate the accuracy of several load estimation methods across a broad range of sampling and environmental conditions. This analysis uses random sub-samples drawn from temporally-dense data sets of total nitrogen, total phosphorus, nitrate, and suspended-sediment concentration, and includes measurements of specific conductance, which was used as a surrogate for dissolved solids concentration. Methods considered include linear interpolation and ratio estimators, regression-based methods historically employed by the U.S. Geological Survey, and newer flexible techniques including Weighted Regressions on Time, Season, and Discharge (WRTDS) and a generalized non-linear additive model. No single method is identified to have the greatest accuracy across all constituents, sites, and sampling scenarios. Most methods provide accurate estimates of specific conductance (used as a surrogate for total dissolved solids or specific major ions) and total nitrogen - lower accuracy is observed for the estimation of nitrate, total phosphorus and suspended sediment loads. Methods that allow for flexibility in the relation between concentration and flow conditions, specifically Beale's ratio estimator and WRTDS, exhibit greater estimation accuracy and lower bias. Evaluation of methods across simulated sampling scenarios indicates that (1) high-flow sampling is necessary to produce accurate load estimates, (2) extrapolation of sample data through time or across more extreme flow conditions reduces load estimate accuracy, and (3) WRTDS and methods that use a Kalman filter or smoothing to correct for departures between individual modeled and observed values benefit most from more frequent water-quality sampling.
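As a concrete reference point for one of the methods named above, the sketch below implements the textbook form of Beale's bias-corrected ratio estimator of mean daily load from paired load/flow samples. The data, the rating exponent, and the period-mean flow are all synthetic.

```python
# Beale's bias-corrected ratio estimator of mean daily load (illustrative data).
import numpy as np

def beale_load(l, q, q_mean_all):
    n = len(l)
    l_bar, q_bar = l.mean(), q.mean()
    s_lq = np.sum((l - l_bar) * (q - q_bar)) / (n - 1)   # sample covariance
    s_qq = np.sum((q - q_bar) ** 2) / (n - 1)            # sample flow variance
    correction = (1 + s_lq / (n * l_bar * q_bar)) / (1 + s_qq / (n * q_bar**2))
    return q_mean_all * (l_bar / q_bar) * correction

rng = np.random.default_rng(4)
q = rng.lognormal(2.0, 0.5, 30)                 # sampled daily flows
l = 0.1 * q**1.3 * rng.lognormal(0, 0.2, 30)    # sampled daily loads
print(f"Beale estimate of mean daily load: {beale_load(l, q, 9.0):.2f}")
```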
Zimmerman, Guthrie; Sauer, John; Fleming, Kathy; Link, William; Garrettson, Pamela R.
2015-01-01
We combined data from the Atlantic Flyway Breeding Waterfowl Survey (AFBWS) and the North American Breeding Bird Survey (BBS) to estimate the number of wood ducks (Aix sponsa) in the United States portion of the Atlantic Flyway from 1993 to 2013. The AFBWS is a plot-based survey that covers most of the northern and central portions of the Flyway; when analyzed with adjustments for survey time of day effects, these data can be used to estimate population size. The BBS provides an index of wood duck abundance along roadside routes. Although factors influencing change in BBS counts over time can be controlled in BBS analysis, BBS indices alone cannot be used to derive population size estimates. We used AFBWS data to scale BBS indices for Bird Conservation Regions (BCR), basing the scaling factors on the ratio of estimated AFBWS population sizes to regional BBS indices for portions of BCRs that were common to both surveys. We summed scaled BBS results for portions of the Flyway not covered by the AFBWS with AFBWS population estimates to estimate a mean yearly total of 1,295,875 (mean 95% CI: 1,013,940–1,727,922) wood ducks. Scaling factors varied among BCRs from 16.7 to 148.0; the mean scaling factor was 68.9 (mean 95% CI: 53.5–90.9). Flyway-wide, population estimates from the combined analysis were consistent with alternative estimates derived from harvest data, and also provide population estimates within states and BCRs. We recommend their use in harvest and habitat management within the Atlantic Flyway.
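The scaling step reduces to simple arithmetic, sketched below with invented numbers: the ratio of the AFBWS population estimate to the BBS index over their shared area converts BBS indices in uncovered areas into population estimates.

```python
# Illustrative scaling arithmetic (all numbers hypothetical, not the paper's).
afbws_pop_shared = 120_000      # AFBWS population estimate, shared portion
bbs_index_shared = 1_740        # BBS index over the same shared portion
scale = afbws_pop_shared / bbs_index_shared

bbs_index_uncovered = 950       # BBS index where AFBWS has no coverage
pop_uncovered = scale * bbs_index_uncovered
print(f"Scaling factor {scale:.1f}; scaled population {pop_uncovered:,.0f}")
```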
An evaluation of methods for estimating decadal stream loads
Lee, Casey; Hirsch, Robert M.; Schwarz, Gregory E.; Holtschlag, David J.; Preston, Stephen D.; Crawford, Charles G.; Vecchia, Aldo V.
2016-01-01
Effective management of water resources requires accurate information on the mass, or load, of water-quality constituents transported from upstream watersheds to downstream receiving waters. Despite this need, no single method has been shown to consistently provide accurate load estimates among different water-quality constituents, sampling sites, and sampling regimes. We evaluate the accuracy of several load estimation methods across a broad range of sampling and environmental conditions. This analysis uses random sub-samples drawn from temporally-dense data sets of total nitrogen, total phosphorus, nitrate, and suspended-sediment concentration, and includes measurements of specific conductance, which was used as a surrogate for dissolved solids concentration. Methods considered include linear interpolation and ratio estimators, regression-based methods historically employed by the U.S. Geological Survey, and newer flexible techniques including Weighted Regressions on Time, Season, and Discharge (WRTDS) and a generalized non-linear additive model. No single method is identified to have the greatest accuracy across all constituents, sites, and sampling scenarios. Most methods provide accurate estimates of specific conductance (used as a surrogate for total dissolved solids or specific major ions) and total nitrogen – lower accuracy is observed for the estimation of nitrate, total phosphorus and suspended sediment loads. Methods that allow for flexibility in the relation between concentration and flow conditions, specifically Beale’s ratio estimator and WRTDS, exhibit greater estimation accuracy and lower bias. Evaluation of methods across simulated sampling scenarios indicates that (1) high-flow sampling is necessary to produce accurate load estimates, (2) extrapolation of sample data through time or across more extreme flow conditions reduces load estimate accuracy, and (3) WRTDS and methods that use a Kalman filter or smoothing to correct for departures between individual modeled and observed values benefit most from more frequent water-quality sampling.
Carbon Fluxes at the AmazonFACE Research Site
NASA Astrophysics Data System (ADS)
Norby, R.; De Araujo, A. C.; Cordeiro, A. L.; Fleischer, K.; Fuchslueger, L.; Garcia, S.; Hofhansl, F.; Garcia, M. N.; Grandis, A.; Oblitas, E.; Pereira, I.; Pieres, N. M.; Schaap, K.; Valverde-Barrantes, O.
2017-12-01
The free-air CO2 enrichment (FACE) experiment to be implemented in the Amazon rain forest requires strong pretreatment characterization so that eventual responses to elevated CO2 can be detected against a background of substantial species diversity and spatial heterogeneity. Two 30-m diameter plots have been laid out for initial characterization in a 30-m tall, old-growth, terra firme forest. Intensive measurements have been made of aboveground tree growth, leaf area, litter production, and fine-root production; these data sets together support initial estimates of plot-scale net primary productivity (NPP). Leaf-level measurements of photosynthesis throughout the canopy and over a daily time course in both the wet and dry season, coupled with meteorological monitoring, support an initial estimate of gross primary productivity (GPP) and carbon-use efficiency (CUE = NPP/GPP). Monthly monitoring of CO2 efflux from the soil, partitioned into autotrophic and heterotrophic components, supports an estimate of net ecosystem production (NEP). Our estimate of NPP in the two plots (1.2 and 1.4 kg C m-2 yr-1) is 16-38% greater than previously reported for the site, primarily due to our more complete documentation of fine-root production, including root production deeper than 30 cm. The estimate of CUE of the ecosystem (0.52) is greater than most others in Amazonia; this discrepancy reflects either large uncertainty in GPP, which was derived from just two days of measurement, or underestimates of the fine-root component of NPP in previous studies. Estimates of NEP (0 and 0.14 kg C m-2 yr-1) are generally consistent with a landscape-level estimate from flux tower data. Our C flux estimates, albeit very preliminary, provide initial benchmarks for a 12-model a priori evaluation of this forest. The model means of GPP, NPP, and NEP are mostly consistent with our field measurements. Predictions of C flux responses to elevated CO2 from the models become hypotheses to be tested in the FACE experiment. Although carbon fluxes on small plots cannot be expected to represent the fluxes across the wider and more diverse region, our integrated measurements, coupled with a model framework, provide a strong foundation for understanding the mechanistic basis of responses and for extending results of experimental CO2 fertilization to the wider region.
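The flux bookkeeping above reduces to two identities, CUE = NPP/GPP and NEP = NPP - Rh; the snippet below just makes the arithmetic explicit with values loosely matching those reported (the Rh figure is a hypothetical back-calculation, not a measurement).

```python
# Simple carbon-flux bookkeeping (values hypothetical; units kg C m-2 yr-1).
npp = 1.3                 # net primary productivity (plot mean)
gpp = npp / 0.52          # inferred from CUE = NPP / GPP = 0.52
rh = npp - 0.07           # heterotrophic respiration implied by a mean NEP of 0.07
nep = npp - rh            # net ecosystem production
print(f"GPP ~ {gpp:.2f}, NEP ~ {nep:.2f}")
```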
The crustal thickness of West Antarctica
NASA Astrophysics Data System (ADS)
Chaput, J.; Aster, R. C.; Huerta, A.; Sun, X.; Lloyd, A.; Wiens, D.; Nyblade, A.; Anandakrishnan, S.; Winberry, J. P.; Wilson, T.
2014-01-01
P-to-S receiver functions (PRFs) from the GPS and seismic leg of the Polar Earth Observing Network (POLENET) deployment of seismographic stations spanning West Antarctica and the Transantarctic Mountains provide new estimates of crustal thickness across West Antarctica, including the West Antarctic Rift System (WARS), Marie Byrd Land (MBL) dome, and the Transantarctic Mountains (TAM) margin. We show that complications arising from ice sheet multiples can be effectively managed, and that further information concerning low-velocity subglacial sediment thickness may be determined, via top-down utilization of synthetic receiver function models. We combine shallow structure constraints with the response of deeper layers using a regularized Markov chain Monte Carlo methodology to constrain bulk crustal properties. Crustal thickness estimates range from 17.0±4 km at Fishtail Point in the western WARS to 45±5 km at Lonewolf Nunataks in the TAM. Symmetric regions of crustal thinning observed in a transect deployment across the West Antarctic Ice Sheet correlate with deep subice basins, consistent with pure shear crustal necking under past localized extension. Subglacial sediment deposit thicknesses generally correlate with trough/dome expectations, with the thickest inferred subice low-velocity sediment estimated as ~0.4 km within the Bentley Subglacial Trench. Inverted PRFs from this study and other published crustal estimates are combined with ambient noise surface wave constraints to generate a crustal thickness map for West Antarctica south of 75°S. Observations are consistent with isostatic crustal compensation across the central WARS but indicate significant mantle compensation across the TAM, Ellsworth Block, MBL dome, and eastern and western sectors of thinnest WARS crust, consistent with low density and likely dynamic, low-viscosity high-temperature mantle.
2017-01-01
The U.S. Energy Information Administration's Short-Term Energy Outlook (STEO) produces monthly projections of energy supply, demand, trade, and prices over a 13-24 month period. Every January, the forecast horizon is extended through December of the following year. The STEO model is an integrated system of econometric regression equations and identities that link data on the various components of the U.S. energy industry together in order to develop consistent forecasts. The regression equations are estimated and the STEO model is solved using the EViews 9.5 econometric software package from IHS Global Inc. The model consists of various modules specific to each energy resource. All modules provide projections for the United States, and some modules provide more detailed forecasts for different regions of the country.
An Extensive Unified Thermo-Electric Module Characterization Method
Attivissimo, Filippo; Guarnieri Calò Carducci, Carlo; Lanzolla, Anna Maria Lucia; Spadavecchia, Maurizio
2016-01-01
Thermo-Electric Modules (TEMs) are being increasingly used in power generation as a valid alternative to batteries, providing autonomy to sensor nodes or entire Wireless Sensor Networks, especially for energy harvesting applications. Manufacturers often provide some essential parameters under specific conditions, such as the maximum temperature difference between the surfaces of the TEM or the maximum heat absorption, but in many cases a TEM-based system is operated under the best conditions only for a fraction of the time; thus, when dynamic working conditions occur, the performance estimation of TEMs is crucial to determine their actual efficiency. The focus of this work is on using a novel procedure to estimate the parameters of both the electrical and thermal equivalent model and investigate their relationship with the operating temperature and the temperature gradient. The novelty of the method consists in the use of a simple test configuration to stimulate the modules and simultaneously acquire electrical and thermal data, obtaining all parameters in a single test. Two different current profiles are proposed as possible stimuli, whose use depends on the available test instrumentation, and their relative performance is compared both quantitatively and qualitatively, in terms of standard deviation and estimation uncertainty. The obtained results, besides agreeing with both the technical literature and a further estimation method based on module specifications, also provide the designer with a detailed description of the module behavior, useful to simulate its performance in different scenarios. PMID:27983575
An Extensive Unified Thermo-Electric Module Characterization Method.
Attivissimo, Filippo; Guarnieri Calò Carducci, Carlo; Lanzolla, Anna Maria Lucia; Spadavecchia, Maurizio
2016-12-13
Thermo-Electric Modules (TEMs) are being increasingly used in power generation as a valid alternative to batteries, providing autonomy to sensor nodes or entire Wireless Sensor Networks, especially for energy harvesting applications. Manufacturers often provide some essential parameters under specific conditions, such as the maximum temperature difference between the surfaces of the TEM or the maximum heat absorption, but in many cases a TEM-based system is operated under the best conditions only for a fraction of the time; thus, when dynamic working conditions occur, the performance estimation of TEMs is crucial to determine their actual efficiency. The focus of this work is on using a novel procedure to estimate the parameters of both the electrical and thermal equivalent model and investigate their relationship with the operating temperature and the temperature gradient. The novelty of the method consists in the use of a simple test configuration to stimulate the modules and simultaneously acquire electrical and thermal data, obtaining all parameters in a single test. Two different current profiles are proposed as possible stimuli, whose use depends on the available test instrumentation, and their relative performance is compared both quantitatively and qualitatively, in terms of standard deviation and estimation uncertainty. The obtained results, besides agreeing with both the technical literature and a further estimation method based on module specifications, also provide the designer with a detailed description of the module behavior, useful to simulate its performance in different scenarios.
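To illustrate what a single-test parameter extraction can look like, the sketch below fits the usual lumped generator model V = S·ΔT − R·I to simultaneous voltage/current/gradient samples by linear least squares. The model form, test profile, and values are assumptions for illustration, not the authors' procedure.

```python
# Illustrative extraction of Seebeck coefficient S and internal resistance R.
import numpy as np

rng = np.random.default_rng(5)
dT = np.linspace(5, 40, 50)                     # temperature gradient (K)
I = 0.2 + 0.01 * np.arange(50)                  # stimulus current profile (A)
V = 0.05 * dT - 1.5 * I + 0.002 * rng.standard_normal(50)  # measured voltage

# Linear model V = S*dT - R*I, solved in one shot by least squares.
A = np.column_stack([dT, -I])
(S, R), *_ = np.linalg.lstsq(A, V, rcond=None)
print(f"S ~ {S * 1e3:.1f} mV/K, R ~ {R:.2f} ohm")
```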
Estimation of AUC or Partial AUC under Test-Result-Dependent Sampling.
Wang, Xiaofei; Ma, Junling; George, Stephen; Zhou, Haibo
2012-01-01
The area under the ROC curve (AUC) and partial area under the ROC curve (pAUC) are summary measures used to assess the accuracy of a biomarker in discriminating true disease status. The standard sampling approach used in biomarker validation studies is often inefficient and costly, especially when ascertaining the true disease status is costly and invasive. To improve efficiency and reduce the cost of biomarker validation studies, we consider a test-result-dependent sampling (TDS) scheme, in which subject selection for determining the disease state is dependent on the result of a biomarker assay. We first estimate the test-result distribution using data arising from the TDS design. With the estimated empirical test-result distribution, we propose consistent nonparametric estimators for AUC and pAUC and establish the asymptotic properties of the proposed estimators. Simulation studies show that the proposed estimators have good finite sample properties and that the TDS design yields more efficient AUC and pAUC estimates than a simple random sampling (SRS) design. A data example based on an ongoing cancer clinical trial is provided to illustrate the TDS design and the proposed estimators. This work can find broad applications in design and analysis of biomarker validation studies.
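For context, the sketch below computes the plain nonparametric (Mann-Whitney) AUC that the proposed estimators generalize; the TDS-specific reweighting by the estimated test-result distribution is not shown, and all scores are simulated.

```python
# Empirical (Mann-Whitney) AUC from simulated biomarker scores.
import numpy as np

def empirical_auc(scores_pos, scores_neg):
    s_pos = scores_pos[:, None]
    s_neg = scores_neg[None, :]
    # Fraction of (diseased, non-diseased) pairs correctly ordered; ties count 0.5.
    return np.mean((s_pos > s_neg) + 0.5 * (s_pos == s_neg))

rng = np.random.default_rng(6)
pos = rng.normal(1.0, 1.0, 200)   # biomarker values, diseased
neg = rng.normal(0.0, 1.0, 300)   # biomarker values, non-diseased
print(f"AUC ~ {empirical_auc(pos, neg):.3f}")
```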
Estimates of the Internal Consistency of a Factorially Complex Composite.
ERIC Educational Resources Information Center
Benito, Juana Gomez
1989-01-01
This study of 852 subjects aged 4 to 9 years in Barcelona (Spain) estimated the degree of consistency among elements of the Borelli-Oleron Performance Scale by taking into account item clusters and subtest clusters. The internal consistency of the subtests rose when all ages were analyzed jointly. (SLD)
THE SPLASH SURVEY: SPECTROSCOPY OF 15 M31 DWARF SPHEROIDAL SATELLITE GALAXIES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tollerud, Erik J.; Bullock, James S.; Yniguez, Basilio
2012-06-10
We present a resolved star spectroscopic survey of 15 dwarf spheroidal (dSph) satellites of the Andromeda galaxy (M31). We filter foreground contamination from Milky Way (MW) stars, noting that MW substructure is evident in this contaminant sample. We also filter M31 halo field giant stars and identify the remainder as probable dSph members. We then use these members to determine the kinematical properties of the dSphs. For the first time, we confirm that And XVIII, XXI, and XXII show kinematics consistent with bound, dark-matter-dominated galaxies. From the velocity dispersions for the full sample of dSphs we determine masses, which we combine with the size and luminosity of the galaxies to produce mass-size-luminosity scaling relations. With these scalings we determine that the M31 dSphs are fully consistent with the MW dSphs, suggesting that the well-studied MW satellite population provides a fair sample for broader conclusions. We also estimate dark matter halo masses of the satellites and find that there is no sign that the luminosity of these galaxies depends on their dark halo mass, a result consistent with what is seen for MW dwarfs. Two of the M31 dSphs (And XV, XVI) have estimated maximum circular velocities smaller than 12 km s^-1 (to 1σ), which likely places them within the lowest-mass dark matter halos known to host stars (along with Boötes I of the MW). Finally, we use the systemic velocities of the M31 satellites to estimate the mass of the M31 halo, obtaining a virial mass consistent with previous results.
Shared Dosimetry Error in Epidemiological Dose-Response Analyses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stram, Daniel O.; Preston, Dale L.; Sokolnikov, Mikhail
2015-03-23
Radiation dose reconstruction systems for large-scale epidemiological studies are sophisticated both in providing estimates of dose and in representing dosimetry uncertainty. For example, a computer program was used by the Hanford Thyroid Disease Study to provide 100 realizations of possible dose to study participants. The variation in realizations reflected the range of possible dose for each cohort member consistent with the data on dose determinants in the cohort. Another example is the Mayak Worker Dosimetry System 2013, which estimates both external and internal exposures and provides multiple realizations of "possible" dose history to workers given dose determinants. This paper takes up the problem of dealing with complex dosimetry systems that provide multiple realizations of dose in an epidemiologic analysis. In this paper we derive expected scores and the information matrix for a model used widely in radiation epidemiology, namely the linear excess relative risk (ERR) model that allows for a linear dose response (risk in relation to radiation) and distinguishes between modifiers of background rates and of the excess risk due to exposure. We show that treating the mean dose for each individual (calculated by averaging over the realizations) as if it was true dose (ignoring both shared and unshared dosimetry errors) gives asymptotically unbiased estimates (i.e. the score has expectation zero) and valid tests of the null hypothesis that the ERR slope β is zero. Although the score is unbiased, the information matrix (and hence the standard errors of the estimate of β) is biased for β≠0 when ignoring errors in dose estimates, and we show how to adjust the information matrix to remove this bias, using the multiple realizations of dose. Use of these methods for several studies, including the Mayak Worker Cohort and the U.S. Atomic Veterans Study, is discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stewart, Robert N.; Urban, Marie L.; Duchscherer, Samantha E.
Understanding building occupancy is critical to a wide array of applications including natural hazards loss analysis, green building technologies, and population distribution modeling. Due to the expense of directly monitoring buildings, scientists rely in addition on a wide and disparate array of ancillary and open source information including subject matter expertise, survey data, and remote sensing information. These data are fused using data harmonization methods, which refer to a loose collection of formal and informal techniques for fusing data together to create viable content for building occupancy estimation. In this paper, we add to the current state of the art by introducing the Population Data Tables (PDT), a Bayesian model and informatics system for systematically arranging data and harmonization techniques into a consistent, transparent, knowledge learning framework that retains in the final estimation uncertainty emerging from data, expert judgment, and model parameterization. PDT probabilistically estimates ambient occupancy in units of people/1000 ft2 for over 50 building types at the national and sub-national level with the goal of providing global coverage. The challenge of global coverage led to the development of an interdisciplinary geospatial informatics system tool that provides the framework for capturing, storing, and managing open source data, handling subject matter expertise, carrying out Bayesian analytics as well as visualizing and exporting occupancy estimation results. We present the PDT project, situate the work within the larger community, and report on the progress of this multi-year project.
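As a toy of the Bayesian harmonization idea, the snippet below performs a conjugate normal update combining an expert-informed prior on ambient occupancy with a handful of survey observations. PDT's actual models, data structures, and uncertainty handling are far richer; every number here is invented.

```python
# Conjugate normal update for ambient occupancy (people/1000 ft^2), a sketch.
import numpy as np

prior_mean, prior_var = 2.0, 1.0          # expert-informed prior
obs = np.array([2.6, 3.1, 2.2])           # hypothetical survey observations
obs_var = 0.5                             # assumed observation variance

post_var = 1.0 / (1.0 / prior_var + len(obs) / obs_var)
post_mean = post_var * (prior_mean / prior_var + obs.sum() / obs_var)
print(f"Posterior occupancy: {post_mean:.2f} +/- {np.sqrt(post_var):.2f}")
```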
NASA Astrophysics Data System (ADS)
Bothe, Oliver; Wagner, Sebastian; Zorita, Eduardo
2015-04-01
How did regional precipitation change in past centuries? We have potentially three sources of information to answer this question: There are, especially for Europe, a number of long records of local station precipitation; documentary records and natural archives of past environmental variability serve as proxy records for empirical reconstructions; in addition, simulations with coupled climate models or Earth System Models provide estimates of the spatial structure of precipitation variability. However, instrumental records rarely extend back to the 18th century, reconstructions include large uncertainties, and simulation skill is often still unsatisfactory for precipitation. Thus, we can only seek to answer to what extent the three sources provide a consistent picture of past regional precipitation changes. This presentation describes the (lack of) consistency among the different data sources in describing changes in the distributional properties of seasonal precipitation. We concentrate on England and Wales since there are two recent reconstructions and a long observation-based record available for this domain. The season of interest is an extended spring (March, April, May, June, July, MAMJJ) over the past 350 years. The main simulated data stem from a regional simulation for the European domain with CCLM, driven at its lateral boundaries with conditions provided by a MPI-ESM COSMOS simulation for the last millennium using a high-amplitude solar forcing. A number of simulations for the past 1000 years from the Paleoclimate Modelling Intercomparison Project Phase III provide additional information. We fit a Weibull distribution to the available data sets following the approach for calculating standardized precipitation indices. We do so over 51-year moving windows to assess the consistency of changes in the distributional properties. Changes in the percentiles for severe (and extreme) dry or wet conditions and in the Weibull standard deviations of precipitation estimates are generally not consistent among the different data sets. Only a few common signals are evident. Even the relatively strong exogenous forcing history of the late 18th and early 19th century appears to have only small effects on the precipitation distributions. The reconstructions differ systematically from the long instrumental data in displaying much stronger variability compared to the observations over their common period. Distributional properties for both data sets show, to some extent, opposite evolutions. The reconstructions do not reliably represent the distributions in specific periods but rather reflect low-frequency changes in the mean plus a certain amount of noise. Moreover, the multi-model simulations also do not agree on the changes over this period. The lack of consistent simulated relations under purely naturally forced and internal variability on multi-decadal time-scales therefore questions our ability to draw dynamical inferences about regional climate variability from the PMIP3 ensemble and, in turn, from climate simulations in general. The potentially opposite evolution of reconstructions and instrumental data for the chosen domain further hampers reconciling the available information about past regional precipitation variability in England and Wales. However, we find some possibly surprising but encouraging agreement between the observed data and the regional simulation.
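The moving-window distribution fit described above can be sketched as follows: fit a Weibull to each 51-year window and track, say, the 10th percentile as a severe-dry threshold. The data here are synthetic and the window stride is arbitrary; the study applies this to MAMJJ precipitation series.

```python
# Moving-window Weibull fit with a severe-dry percentile (synthetic data).
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(7)
precip = weibull_min.rvs(2.0, scale=300.0, size=350, random_state=rng)

window = 51
for start in range(0, len(precip) - window + 1, 100):
    chunk = precip[start:start + window]
    c, loc, scale = weibull_min.fit(chunk, floc=0.0)      # fix location at zero
    p10 = weibull_min.ppf(0.10, c, loc=loc, scale=scale)  # severe-dry threshold
    print(f"years {start}-{start + window - 1}: 10th percentile ~ {p10:.0f}")
```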
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saito, N.; Suzuki, I. H.; Onuki, H.
1989-07-01
Optical characteristics of a new beamline consisting of a premirror, a Grasshopper monochromator, and a refocusing mirror have been investigated. The intensity of the monochromatic soft x-ray was estimated to be about 10^8 photons/(s 100 mA) at 500 eV with a stored electron energy of 600 MeV and the minimum slit width. This slit width provides a resolution of about 500. Angular distributions of fragment ions from an inner-shell excited nitrogen molecule have been measured with a rotatable time-of-flight mass spectrometer by using this beamline.
Brůžek, Jaroslav; Santos, Frédéric; Dutailly, Bruno; Murail, Pascal; Cunha, Eugenia
2017-10-01
A new tool for skeletal sex estimation based on measurements of the human os coxae is presented, using skeletons from a metapopulation of identified adult individuals from twelve independent population samples. For reliable sex estimation, a posterior probability greater than 0.95 was considered to be the classification threshold: below this value, estimates are considered indeterminate. By providing free software, we aim to disseminate the method even more widely. Ten metric variables collected from 2,040 ossa coxae of adult subjects of known sex were recorded between 1986 and 2002 (reference sample). To test both validity and reliability, a target sample consisting of two series of adult ossa coxae of known sex (n = 623) was used. The DSP2 software (Diagnose Sexuelle Probabiliste v2) is based on Linear Discriminant Analysis, and the posterior probabilities are calculated using an R script. For the reference sample, any combination of four dimensions provides a correct sex estimate in at least 99% of cases. The percentage of individuals for whom sex can be estimated depends on the number of dimensions; for all ten variables it is higher than 90%. These results are confirmed in the target sample. Our posterior probability threshold of 0.95 for sex estimation corresponds to the traditional sectioning point used in osteological studies. DSP2 replaces the former version, which should no longer be used. DSP2 is a robust, reliable, and user-friendly technique for sexing adult ossa coxae. © 2017 Wiley Periodicals, Inc.
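DSP2 itself is an R-based tool; purely to illustrate the decision rule described (LDA posteriors with a 0.95 acceptance threshold and an indeterminate category), here is a Python sketch on synthetic data. The four synthetic measurements stand in for the ten coxal variables.

```python
# LDA posterior-threshold decision rule (synthetic stand-in data).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(8)
X_f = rng.normal(0.0, 1.0, (100, 4))      # four measurements, females
X_m = rng.normal(1.2, 1.0, (100, 4))      # four measurements, males
X = np.vstack([X_f, X_m])
y = np.array(["F"] * 100 + ["M"] * 100)

lda = LinearDiscriminantAnalysis().fit(X, y)
proba = lda.predict_proba(X[:5])
# Accept the sex estimate only when the posterior reaches 0.95.
labels = np.where(proba.max(axis=1) >= 0.95,
                  lda.classes_[proba.argmax(axis=1)], "indeterminate")
print(labels)
```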
Loganathan, Tharani; Ng, Chiu-Wan; Lee, Way-Seah; Jit, Mark
2016-06-01
Rotavirus gastroenteritis (RVGE) results in substantial mortality and morbidity worldwide. However, an accurate estimation of the health and economic burden of RVGE in Malaysia covering public, private and home treatment is lacking. Data from multiple sources were used to estimate diarrheal mortality and morbidity according to health service utilization. The proportion of this burden attributable to rotavirus was estimated from a community-based study and from a meta-analysis we conducted of primary hospital-based studies. Rotavirus incidence was determined by multiplying acute gastroenteritis incidence with estimates of the proportion of gastroenteritis attributable to rotavirus. The economic burden of rotavirus disease was estimated from the health system and societal perspectives. Annually, rotavirus results in 27 deaths, 31,000 hospitalizations, 41,000 outpatient visits and 145,000 episodes of home-treated gastroenteritis in Malaysia. We estimate an annual rotavirus incidence of 1 death per 100,000 children and 12 hospitalizations, 16 outpatient clinic visits and 57 home-treated episodes per 1,000 children under 5 years of age. Annually, RVGE is estimated to cost US$ 34 million to the healthcare provider and US$ 50 million to society. Productivity loss contributes almost a third of the costs to society. Publicly treated, privately treated, and home-treated episodes account for 52%, 27% and 21% of the total societal costs, respectively. RVGE represents a considerable health and economic burden in Malaysia. Much of the burden lies in privately or home-treated episodes and is poorly captured in previous studies. This study provides vital information for future evaluations of cost-effectiveness, which are necessary for policy-making regarding universal vaccination.
Wiggins, Lisa; Christensen, Deborah L.; Maenner, Matthew J; Daniels, Julie; Warren, Zachary; Kurzius-Spencer, Margaret; Zahorodny, Walter; Robinson Rosenberg, Cordelia; White, Tiffany; Durkin, Maureen S.; Imm, Pamela; Nikolaou, Loizos; Yeargin-Allsopp, Marshalyn; Lee, Li-Ching; Harrington, Rebecca; Lopez, Maya; Fitzgerald, Robert T.; Hewitt, Amy; Pettygrove, Sydney; Constantino, John N.; Vehorn, Alison; Shenouda, Josephine; Hall-Lande, Jennifer; Van Naarden Braun, Kim; Dowling, Nicole F.
2018-01-01
Problem/Condition Autism spectrum disorder (ASD). Period Covered 2014. Description of System The Autism and Developmental Disabilities Monitoring (ADDM) Network is an active surveillance system that provides estimates of the prevalence of autism spectrum disorder (ASD) among children aged 8 years whose parents or guardians reside within 11 ADDM sites in the United States (Arizona, Arkansas, Colorado, Georgia, Maryland, Minnesota, Missouri, New Jersey, North Carolina, Tennessee, and Wisconsin). ADDM surveillance is conducted in two phases. The first phase involves review and abstraction of comprehensive evaluations that were completed by professional service providers in the community. Staff completing record review and abstraction receive extensive training and supervision and are evaluated according to strict reliability standards to certify effective initial training, identify ongoing training needs, and ensure adherence to the prescribed methodology. Record review and abstraction occurs in a variety of data sources ranging from general pediatric health clinics to specialized programs serving children with developmental disabilities. In addition, most of the ADDM sites also review records for children who have received special education services in public schools. In the second phase of the study, all abstracted information is reviewed systematically by experienced clinicians to determine ASD case status. A child is considered to meet the surveillance case definition for ASD if he or she displays behaviors, as described on one or more comprehensive evaluations completed by community-based professional providers, consistent with the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, Text Revision (DSM-IV-TR) diagnostic criteria for autistic disorder; pervasive developmental disorder–not otherwise specified (PDD-NOS, including atypical autism); or Asperger disorder. This report provides updated ASD prevalence estimates for children aged 8 years during the 2014 surveillance year, on the basis of DSM-IV-TR criteria, and describes characteristics of the population of children with ASD. In 2013, the American Psychiatric Association published the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5), which made considerable changes to ASD diagnostic criteria. The change in ASD diagnostic criteria might influence ADDM ASD prevalence estimates; therefore, most (85%) of the records used to determine prevalence estimates based on DSM-IV-TR criteria underwent additional review under a newly operationalized surveillance case definition for ASD consistent with the DSM-5 diagnostic criteria. Children meeting this new surveillance case definition could qualify on the basis of one or both of the following criteria, as documented in abstracted comprehensive evaluations: 1) behaviors consistent with the DSM-5 diagnostic features; and/or 2) an ASD diagnosis, whether based on DSM-IV-TR or DSM-5 diagnostic criteria. Stratified comparisons of the number of children meeting either of these two case definitions also are reported. Results For 2014, the overall prevalence of ASD among the 11 ADDM sites was 16.8 per 1,000 (one in 59) children aged 8 years. Overall ASD prevalence estimates varied among sites, from 13.1–29.3 per 1,000 children aged 8 years. ASD prevalence estimates also varied by sex and race/ethnicity. Males were four times more likely than females to be identified with ASD. 
Prevalence estimates were higher for non-Hispanic white (henceforth, white) children compared with non-Hispanic black (henceforth, black) children, and both groups were more likely to be identified with ASD compared with Hispanic children. Among the nine sites with sufficient data on intellectual ability, 31% of children with ASD were classified in the range of intellectual disability (intelligence quotient [IQ] <70), 25% were in the borderline range (IQ 71–85), and 44% had IQ scores in the average to above average range (i.e., IQ >85). The distribution of intellectual ability varied by sex and race/ethnicity. Although mention of developmental concerns by age 36 months was documented for 85% of children with ASD, only 42% had a comprehensive evaluation on record by age 36 months. The median age of earliest known ASD diagnosis was 52 months and did not differ significantly by sex or race/ethnicity. For the targeted comparison of DSM-IV-TR and DSM-5 results, the number and characteristics of children meeting the newly operationalized DSM-5 case definition for ASD were similar to those meeting the DSM-IV-TR case definition, with DSM-IV-TR case counts exceeding DSM-5 counts by less than 5% and approximately 86% overlap between the two case definitions (kappa = 0.85). Interpretation Findings from the ADDM Network, on the basis of 2014 data reported from 11 sites, provide updated population-based estimates of the prevalence of ASD among children aged 8 years in multiple communities in the United States. The overall ASD prevalence estimate of 16.8 per 1,000 children aged 8 years in 2014 is higher than previously reported estimates from the ADDM Network. Because the ADDM sites do not provide a representative sample of the entire United States, the combined prevalence estimates presented in this report cannot be generalized to all children aged 8 years in the United States. Consistent with reports from previous ADDM surveillance years, findings from 2014 were marked by variation in ASD prevalence when stratified by geographic area, sex, and level of intellectual ability. Differences in prevalence estimates between black and white children have diminished in most sites, but remained notable for Hispanic children. For 2014, results from application of the DSM-IV-TR and DSM-5 case definitions were similar, overall and when stratified by sex, race/ethnicity, DSM-IV-TR diagnostic subtype, or level of intellectual ability. Public Health Action Beginning with surveillance year 2016, the DSM-5 case definition will serve as the basis for ADDM estimates of ASD prevalence in future surveillance reports. Although the DSM-IV-TR case definition will eventually be phased out, it will be applied in a limited geographic area to offer additional data for comparison. Future analyses will examine trends in the continued use of DSM-IV-TR diagnoses, such as autistic disorder, PDD-NOS, and Asperger disorder in health and education records, documentation of symptoms consistent with DSM-5 terminology, and how these trends might influence estimates of ASD prevalence over time. The latest findings from the ADDM Network provide evidence that the prevalence of ASD is higher than previously reported estimates and continues to vary among certain racial/ethnic groups and communities. 
With prevalence of ASD ranging from 13.1 to 29.3 per 1,000 children aged 8 years in different communities throughout the United States, the need for behavioral, educational, residential, and occupational services remains high, as does the need for increased research on both genetic and nongenetic risk factors for ASD. PMID:29701730
Baio, Jon; Wiggins, Lisa; Christensen, Deborah L; Maenner, Matthew J; Daniels, Julie; Warren, Zachary; Kurzius-Spencer, Margaret; Zahorodny, Walter; Robinson Rosenberg, Cordelia; White, Tiffany; Durkin, Maureen S; Imm, Pamela; Nikolaou, Loizos; Yeargin-Allsopp, Marshalyn; Lee, Li-Ching; Harrington, Rebecca; Lopez, Maya; Fitzgerald, Robert T; Hewitt, Amy; Pettygrove, Sydney; Constantino, John N; Vehorn, Alison; Shenouda, Josephine; Hall-Lande, Jennifer; Van Naarden Braun, Kim; Dowling, Nicole F
2018-04-27
Autism spectrum disorder (ASD). 2014. The Autism and Developmental Disabilities Monitoring (ADDM) Network is an active surveillance system that provides estimates of the prevalence of autism spectrum disorder (ASD) among children aged 8 years whose parents or guardians reside within 11 ADDM sites in the United States (Arizona, Arkansas, Colorado, Georgia, Maryland, Minnesota, Missouri, New Jersey, North Carolina, Tennessee, and Wisconsin). ADDM surveillance is conducted in two phases. The first phase involves review and abstraction of comprehensive evaluations that were completed by professional service providers in the community. Staff completing record review and abstraction receive extensive training and supervision and are evaluated according to strict reliability standards to certify effective initial training, identify ongoing training needs, and ensure adherence to the prescribed methodology. Record review and abstraction occurs in a variety of data sources ranging from general pediatric health clinics to specialized programs serving children with developmental disabilities. In addition, most of the ADDM sites also review records for children who have received special education services in public schools. In the second phase of the study, all abstracted information is reviewed systematically by experienced clinicians to determine ASD case status. A child is considered to meet the surveillance case definition for ASD if he or she displays behaviors, as described on one or more comprehensive evaluations completed by community-based professional providers, consistent with the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, Text Revision (DSM-IV-TR) diagnostic criteria for autistic disorder; pervasive developmental disorder-not otherwise specified (PDD-NOS, including atypical autism); or Asperger disorder. This report provides updated ASD prevalence estimates for children aged 8 years during the 2014 surveillance year, on the basis of DSM-IV-TR criteria, and describes characteristics of the population of children with ASD. In 2013, the American Psychiatric Association published the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5), which made considerable changes to ASD diagnostic criteria. The change in ASD diagnostic criteria might influence ADDM ASD prevalence estimates; therefore, most (85%) of the records used to determine prevalence estimates based on DSM-IV-TR criteria underwent additional review under a newly operationalized surveillance case definition for ASD consistent with the DSM-5 diagnostic criteria. Children meeting this new surveillance case definition could qualify on the basis of one or both of the following criteria, as documented in abstracted comprehensive evaluations: 1) behaviors consistent with the DSM-5 diagnostic features; and/or 2) an ASD diagnosis, whether based on DSM-IV-TR or DSM-5 diagnostic criteria. Stratified comparisons of the number of children meeting either of these two case definitions also are reported. For 2014, the overall prevalence of ASD among the 11 ADDM sites was 16.8 per 1,000 (one in 59) children aged 8 years. Overall ASD prevalence estimates varied among sites, from 13.1-29.3 per 1,000 children aged 8 years. ASD prevalence estimates also varied by sex and race/ethnicity. Males were four times more likely than females to be identified with ASD. 
Prevalence estimates were higher for non-Hispanic white (henceforth, white) children compared with non-Hispanic black (henceforth, black) children, and both groups were more likely to be identified with ASD compared with Hispanic children. Among the nine sites with sufficient data on intellectual ability, 31% of children with ASD were classified in the range of intellectual disability (intelligence quotient [IQ] <70), 25% were in the borderline range (IQ 71-85), and 44% had IQ scores in the average to above average range (i.e., IQ >85). The distribution of intellectual ability varied by sex and race/ethnicity. Although mention of developmental concerns by age 36 months was documented for 85% of children with ASD, only 42% had a comprehensive evaluation on record by age 36 months. The median age of earliest known ASD diagnosis was 52 months and did not differ significantly by sex or race/ethnicity. For the targeted comparison of DSM-IV-TR and DSM-5 results, the number and characteristics of children meeting the newly operationalized DSM-5 case definition for ASD were similar to those meeting the DSM-IV-TR case definition, with DSM-IV-TR case counts exceeding DSM-5 counts by less than 5% and approximately 86% overlap between the two case definitions (kappa = 0.85). Findings from the ADDM Network, on the basis of 2014 data reported from 11 sites, provide updated population-based estimates of the prevalence of ASD among children aged 8 years in multiple communities in the United States. The overall ASD prevalence estimate of 16.8 per 1,000 children aged 8 years in 2014 is higher than previously reported estimates from the ADDM Network. Because the ADDM sites do not provide a representative sample of the entire United States, the combined prevalence estimates presented in this report cannot be generalized to all children aged 8 years in the United States. Consistent with reports from previous ADDM surveillance years, findings from 2014 were marked by variation in ASD prevalence when stratified by geographic area, sex, and level of intellectual ability. Differences in prevalence estimates between black and white children have diminished in most sites, but remained notable for Hispanic children. For 2014, results from application of the DSM-IV-TR and DSM-5 case definitions were similar, overall and when stratified by sex, race/ethnicity, DSM-IV-TR diagnostic subtype, or level of intellectual ability. Beginning with surveillance year 2016, the DSM-5 case definition will serve as the basis for ADDM estimates of ASD prevalence in future surveillance reports. Although the DSM-IV-TR case definition will eventually be phased out, it will be applied in a limited geographic area to offer additional data for comparison. Future analyses will examine trends in the continued use of DSM-IV-TR diagnoses, such as autistic disorder, PDD-NOS, and Asperger disorder in health and education records, documentation of symptoms consistent with DSM-5 terminology, and how these trends might influence estimates of ASD prevalence over time. The latest findings from the ADDM Network provide evidence that the prevalence of ASD is higher than previously reported estimates and continues to vary among certain racial/ethnic groups and communities. 
With prevalence of ASD ranging from 13.1 to 29.3 per 1,000 children aged 8 years in different communities throughout the United States, the need for behavioral, educational, residential, and occupational services remains high, as does the need for increased research on both genetic and nongenetic risk factors for ASD.
A contact-force regulated photoplethysmography (PPG) platform
NASA Astrophysics Data System (ADS)
Sim, Jai Kyoung; Ahn, Bongyoung; Doh, Il
2018-04-01
A photoplethysmography (PPG) platform integrated with a miniaturized force regulator is proposed in this study. Because the thermo-pneumatic regulator maintains a consistent contact force between the PPG probe and the measurement site, a consistent and stable PPG signal can be obtained. We designed and fabricated a watch-type PPG platform with an overall size of 35 mm × 19 mm. In PPG measurements over the radial artery of the wrist, with the wrist posture changed to extension, neutral, or flexion, regulation of the contact force provided consistent measurements, with a variation in PPG amplitude (PPGA) of 7.2%. The proposed PPG platform can be applied to biosignal measurements in various fields, such as PPG-based autonomic nervous system (ANS) monitoring to estimate nociception, sleep apnea syndrome, and psychological stress.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Waldhoff, Stephanie T.; Martinich, Jeremy; Sarofim, Marcus
2015-07-01
The Climate Change Impacts and Risk Analysis (CIRA) modeling exercise is a unique contribution to the scientific literature on climate change impacts, economic damages, and risk analysis that brings together multiple, national-scale models of impacts and damages in an integrated and consistent fashion to estimate climate change impacts, damages, and the benefits of greenhouse gas (GHG) mitigation actions in the United States. The CIRA project uses three consistent socioeconomic, emissions, and climate scenarios across all models to estimate the benefits of GHG mitigation policies: a Business As Usual (BAU) scenario and two policy scenarios with radiative forcing (RF) stabilization targets of 4.5 W/m2 and 3.7 W/m2 in 2100. CIRA was also designed to specifically examine the sensitivity of results to uncertainties around climate sensitivity and differences in model structure. The goals of the CIRA project are to 1) build a multi-model framework to produce estimates of multiple risks and impacts in the U.S., 2) determine to what degree risks and damages across sectors may be lowered from the BAU to the policy scenarios, 3) evaluate key sources of uncertainty along the causal chain, and 4) provide information for multiple audiences and clearly communicate the risks and damages of climate change and the potential benefits of mitigation. This paper describes the motivations, goals, and design of the CIRA modeling exercise and introduces the subsequent papers in this special issue.
Jha, Abhinav K; Song, Na; Caffo, Brian; Frey, Eric C
2015-04-13
Quantitative single-photon emission computed tomography (SPECT) imaging is emerging as an important tool in clinical studies and biomedical research. There is thus a need for optimization and evaluation of systems and algorithms that are being developed for quantitative SPECT imaging. An appropriate objective method to evaluate these systems is by comparing their performance in the end task that is required in quantitative SPECT imaging, such as estimating the mean activity concentration in a volume of interest (VOI) in a patient image. This objective evaluation can be performed if the true value of the estimated parameter is known, i.e., we have a gold standard. However, very rarely is this gold standard known in human studies. Thus, no-gold-standard techniques to optimize and evaluate systems and algorithms in the absence of a gold standard are required. In this work, we developed a no-gold-standard technique to objectively evaluate reconstruction methods used in quantitative SPECT when the parameter to be estimated is the mean activity concentration in a VOI. We studied the performance of the technique with realistic simulated image data generated from an object database consisting of five phantom anatomies with all possible combinations of five sets of organ uptakes, where each anatomy consisted of eight different organ VOIs. Results indicate that the method provided accurate ranking of the reconstruction methods. We also demonstrated the application of consistency checks to test the no-gold-standard output.
NASA Technical Reports Server (NTRS)
Todling, Ricardo
2015-01-01
Recently, this author studied an approach to the estimation of system error based on combining observation residuals derived from a sequential filter and a fixed lag-1 smoother. While extending the methodology to a variational formulation, experimenting with simple models, and verifying consistency between the sequential and variational formulations, the limitations of the residual-based approach came clearly to the surface. This note uses the sequential assimilation application to simple nonlinear dynamics to highlight the issue. Only when some of the underlying error statistics are assumed known is it possible to estimate the unknown component. In general, when considerable uncertainties exist in the underlying statistics as a whole, attempts to obtain separate estimates of the various error covariances are bound to lead to misrepresentation of errors. The conclusions are particularly relevant to present-day attempts to estimate observation-error correlations from observation residual statistics. A brief illustration of the issue is also provided by comparing estimates of error correlations derived from a quasi-operational assimilation system and a corresponding Observing System Simulation Experiments framework.
Planetary Probe Entry Atmosphere Estimation Using Synthetic Air Data System
NASA Technical Reports Server (NTRS)
Karlgaard, Chris; Schoenenberger, Mark
2017-01-01
This paper develops an atmospheric state estimator based on inertial acceleration and angular rate measurements combined with an assumed vehicle aerodynamic model. The approach utilizes the full navigation state of the vehicle (position, velocity, and attitude) to recast the vehicle aerodynamic model to be a function solely of the atmospheric state (density, pressure, and winds). Force and moment measurements are based on vehicle sensed accelerations and angular rates. These measurements are combined with an aerodynamic model and a Kalman-Schmidt filter to estimate the atmospheric conditions. The new method is applied to data from the Mars Science Laboratory mission, which landed the Curiosity rover on the surface of Mars in August 2012. The results of the new estimation algorithm are compared with results from a Flush Air Data Sensing algorithm based on onboard pressure measurements on the vehicle forebody. The comparison indicates that the new proposed estimation method provides estimates consistent with the air data measurements, without the use of pressure measurements. Implications for future missions such as the Mars 2020 entry capsule are described.
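The core inversion can be illustrated directly: with the navigation state known, the sensed axial (drag) acceleration and an assumed aerodynamic model give density. A minimal Python sketch, with all vehicle numbers hypothetical rather than taken from MSL:

```python
import numpy as np

def density_from_drag(a_drag, v_rel, mass, c_a, area):
    """Invert the drag relation a = 0.5 * rho * v^2 * C_A * A / m
    for atmospheric density rho."""
    return 2.0 * mass * a_drag / (c_a * area * v_rel ** 2)

# Hypothetical entry-capsule values (assumed for illustration only)
rho = density_from_drag(a_drag=12.0,    # sensed deceleration, m/s^2
                        v_rel=5500.0,   # planet-relative speed, m/s
                        mass=2400.0,    # entry mass, kg
                        c_a=1.6,        # axial-force coefficient
                        area=15.9)      # aerodynamic reference area, m^2
print(f"estimated density: {rho:.3e} kg/m^3")
```

In the paper's filter, this relation enters as part of a measurement model inside a Kalman-Schmidt update rather than a direct inversion, with pressure and winds estimated jointly.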
On land-use modeling: A treatise of satellite imagery data and misclassification error
NASA Astrophysics Data System (ADS)
Sandler, Austin M.
Recent availability of satellite-based land-use data sets, including data sets with contiguous spatial coverage over large areas, relatively long temporal coverage, and fine-scale land cover classifications, is providing new opportunities for land-use research. However, care must be taken when working with these datasets due to misclassification error, which causes inconsistent parameter estimates in the discrete choice models typically used to model land use. I therefore adapt the empirical correction methods developed for other contexts (e.g., epidemiology) so that they can be applied to land-use modeling. I then use a Monte Carlo simulation, and an empirical application using actual satellite imagery data from the Northern Great Plains, to compare the results of a traditional model ignoring misclassification to those from models accounting for misclassification. Results from both the simulation and application indicate that ignoring misclassification will lead to biased results. Even seemingly insignificant levels of misclassification error (e.g., 1%) result in biased parameter estimates, which alter marginal effects enough to affect policy inference. At the levels of misclassification typical in current satellite imagery datasets (e.g., as high as 35%), ignoring misclassification can lead to systematically erroneous land-use probabilities and substantially biased marginal effects. The correction methods I propose, however, generate consistent parameter estimates and therefore consistent estimates of marginal effects and predicted land-use probabilities.
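To make the mechanism concrete, here is a sketch (not the author's exact estimator) of a likelihood-based correction for a misclassified binary land-use outcome, assuming the false-positive and false-negative rates are known, e.g. from a published confusion matrix; all data are simulated:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(0)
n = 20000
x = rng.normal(size=n)
beta_true = np.array([-1.0, 0.8])             # intercept, slope
y_true = rng.binomial(1, expit(beta_true[0] + beta_true[1] * x))

# Misclassification rates assumed known from a confusion matrix
fp, fn = 0.05, 0.10                           # P(obs=1|true=0), P(obs=0|true=1)
flip = np.where(y_true == 1, rng.random(n) < fn, rng.random(n) < fp)
y_obs = np.where(flip, 1 - y_true, y_true)

def negloglik(beta, correct):
    q = expit(beta[0] + beta[1] * x)          # model for the true class
    if correct:                                # probability of the OBSERVED class
        q = (1 - fn) * q + fp * (1 - q)
    q = np.clip(q, 1e-12, 1 - 1e-12)
    return -np.sum(y_obs * np.log(q) + (1 - y_obs) * np.log(1 - q))

naive = minimize(negloglik, x0=[0.0, 0.0], args=(False,)).x
fixed = minimize(negloglik, x0=[0.0, 0.0], args=(True,)).x
print("true:", beta_true, "naive:", naive.round(3), "corrected:", fixed.round(3))
```

The naive fit is attenuated toward zero, while the corrected likelihood, written in terms of the observed class, recovers the true parameters.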
Mental Disorder Symptoms among Public Safety Personnel in Canada.
Carleton, R Nicholas; Afifi, Tracie O; Turner, Sarah; Taillieu, Tamara; Duranceau, Sophie; LeBouthillier, Daniel M; Sareen, Jitender; Ricciardelli, Rose; MacPhee, Renee S; Groll, Dianne; Hozempa, Kadie; Brunet, Alain; Weekes, John R; Griffiths, Curt T; Abrams, Kelly J; Jones, Nicholas A; Beshai, Shadi; Cramm, Heidi A; Dobson, Keith S; Hatcher, Simon; Keane, Terence M; Stewart, Sherry H; Asmundson, Gordon J G
2018-01-01
Canadian public safety personnel (PSP; e.g., correctional workers, dispatchers, firefighters, paramedics, police officers) are exposed to potentially traumatic events as a function of their work. Such exposures contribute to the risk of developing clinically significant symptoms related to mental disorders. The current study was designed to provide estimates of mental disorder symptom frequencies and severities for Canadian PSP. An online survey was made available in English or French from September 2016 to January 2017. The survey assessed current symptoms, and participation was solicited from national PSP agencies and advocacy groups. Estimates were derived using well-validated screening measures. There were 5813 participants (32.5% women) who were grouped into 6 categories (i.e., call center operators/dispatchers, correctional workers, firefighters, municipal/provincial police, paramedics, Royal Canadian Mounted Police). Substantial proportions of participants reported current symptoms consistent with 1 (i.e., 15.1%) or more (i.e., 26.7%) mental disorders based on the screening measures. There were significant differences across PSP categories with respect to proportions screening positive based on each measure. The estimated proportion of PSP reporting current symptom clusters consistent with 1 or more mental disorders appears higher than previously published estimates for the general population; however, direct comparisons are impossible because of methodological differences. The available data suggest that Canadian PSP experience substantial and heterogeneous difficulties with mental health and underscore the need for a rigorous epidemiologic study and category-specific solutions.
NASA Astrophysics Data System (ADS)
Martinez, Guillermo F.; Gupta, Hoshin V.
2011-12-01
Methods to select parsimonious and hydrologically consistent model structures are useful for evaluating dominance of hydrologic processes and representativeness of data. While information criteria (appropriately constrained to obey underlying statistical assumptions) can provide a basis for evaluating appropriate model complexity, it is not sufficient to rely upon the principle of maximum likelihood (ML) alone. We suggest that one must also call upon a "principle of hydrologic consistency," meaning that selected ML structures and parameter estimates must be constrained (as well as possible) to reproduce desired hydrological characteristics of the processes under investigation. This argument is demonstrated in the context of evaluating the suitability of candidate model structures for lumped water balance modeling across the continental United States, using data from 307 snow-free catchments. The models are constrained to satisfy several tests of hydrologic consistency, a flow space transformation is used to ensure better consistency with underlying statistical assumptions, and information criteria are used to evaluate model complexity relative to the data. The results clearly demonstrate that the principle of consistency provides a sensible basis for guiding selection of model structures and indicate strong spatial persistence of certain model structures across the continental United States. Further work to untangle reasons for model structure predominance can help to relate conceptual model structures to physical characteristics of the catchments, facilitating the task of prediction in ungaged basins.
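As a minimal illustration of the information-criterion step (leaving aside the hydrologic-consistency constraints and the flow-space transformation), AIC under i.i.d. Gaussian errors can be computed from each candidate structure's residuals; the flows and parameter counts below are hypothetical:

```python
import numpy as np

def aic_gaussian(residuals, k):
    """AIC for a model with k parameters under i.i.d. Gaussian errors:
    AIC = n * log(RSS / n) + 2k (additive constants dropped)."""
    r = np.asarray(residuals, dtype=float)
    n = r.size
    return n * np.log(np.sum(r ** 2) / n) + 2 * k

# Hypothetical observed flows and two candidate model simulations
obs = np.array([1.2, 0.9, 1.5, 2.1, 1.8, 0.7])
sim_simple = np.array([1.0, 1.1, 1.4, 1.9, 1.6, 0.9])          # 3 parameters
sim_complex = np.array([1.15, 0.95, 1.48, 2.05, 1.78, 0.74])   # 8 parameters

for name, sim, k in [("simple", sim_simple, 3), ("complex", sim_complex, 8)]:
    print(name, round(aic_gaussian(obs - sim, k), 2))
```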
Current profile redistribution driven by neutral beam injection in a reversed-field pinch
NASA Astrophysics Data System (ADS)
Parke, E.; Anderson, J. K.; Brower, D. L.; Den Hartog, D. J.; Ding, W. X.; Johnson, C. A.; Lin, L.
2016-05-01
Neutral beam injection in reversed-field pinch (RFP) plasmas on the Madison Symmetric Torus [Dexter et al., Fusion Sci. Technol. 19, 131 (1991)] drives current redistribution with increased on-axis current density but negligible net current drive. Internal fluctuations correlated with tearing modes are observed on multiple diagnostics; the behavior of tearing mode correlated structures is consistent with flattening of the safety factor profile. The first application of a parametrized model for island flattening to temperature fluctuations in an RFP allows inference of rational surface locations for multiple tearing modes. The m = 1, n = 6 mode is observed to shift inward by 1.1 ± 0.6 cm with neutral beam injection. Tearing mode rational surface measurements provide a strong constraint for equilibrium reconstruction, with an estimated reduction of q0 by 5% and an increase in on-axis current density of 8% ± 5%. The inferred on-axis current drive is consistent with estimates of fast ion density using TRANSP [Goldston et al., J. Comput. Phys. 43, 61 (1981)].
King, Timothy L.; Johnson, Robin L.
2011-01-01
We document the isolation and characterization of 19 tetra-nucleotide microsatellite DNA markers in northern snakehead (Channa argus) fish that recently colonized Meadow Lake, New York City, New York. These markers displayed moderate levels of allelic diversity (averaging 6.8 alleles/locus) and heterozygosity (averaging 74.2%). Demographic analyses suggested that the Meadow Lake collection has not achieved mutation-drift equilibrium. These results were consistent with instances of deviations from Hardy–Weinberg equilibrium and the presence of some linkage disequilibrium. A comparison of individual pair-wise distances suggested the presence of multiple differentiated groups of related individuals. Results of all analyses are consistent with a pattern of multiple, recent introductions. The microsatellite markers developed for C. argus yielded sufficient genetic diversity to potentially: (1) delineate kinship; (2) elucidate fine-scale population structure; (3) define management (eradication) units; (4) estimate dispersal rates; (5) estimate population sizes; and (6) provide unique demographic perspectives of control or eradication effectiveness.
The upper mantle beneath the Cascade Range: A comparison with the Gulf of California
NASA Technical Reports Server (NTRS)
Walck, M. C.
1984-01-01
Seismograms from 22 earthquakes along the northeast Pacific rim recorded in southern California form the data set for investigation of the upper mantle beneath the Cascade Range-Juan de Fuca region, a transitional area encompassing both very young ocean floor and a continental margin. These data consist of 853 seismograms (6° ≤ Δ ≤ 42°) which produce 1068 travel times and 40 ray parameter estimates. These data are compared directly to another large suite of records representative of structure beneath the Gulf of California, an active spreading center. The spreading center model, GCA, was used as a starting point in WKBJ synthetic seismogram modeling, and GCA was perturbed until the northeast Pacific data were matched. Application of wave field continuation to these two groups of data provides checks on each model's consistency with the data as well as an estimate of the resolvability of differences between the two areas. Differences between the models derived from these two data sets are interpretable in terms of lateral structural variation beneath the two regimes.
The use of resighting data to estimate the rate of population growth of the snail kite in Florida
Dreitz, V.J.; Nichols, J.D.; Hines, J.E.; Bennetts, R.E.; Kitchens, W.M.; DeAngelis, D.L.
2002-01-01
The rate of population growth (lambda) is an important demographic parameter used to assess the viability of a population and to develop management and conservation agendas. We examined the use of resighting data to estimate lambda for the snail kite population in Florida from 1997 to 2000. The analyses consisted of (1) a robust design approach that derives an estimate of lambda from estimates of population size and (2) the Pradel (1996) temporal symmetry (TSM) approach that directly estimates lambda using an open-population capture-recapture model. Besides resighting data, both approaches required information on the number of unmarked individuals that were sighted during the sampling periods. The point estimates of lambda differed between the robust design and TSM approaches, but the 95% confidence intervals overlapped substantially. We believe the differences may be the result of sparse data and do not indicate the inappropriateness of either modelling technique. We focused on the results of the robust design because this approach provided estimates for all study years. Variation among these estimates was smaller than levels of variation among ad hoc estimates based on previously reported index statistics. We recommend that lambda of snail kites be estimated using capture-resighting methods rather than ad hoc counts.
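The robust-design arithmetic is straightforward once yearly abundance estimates are available; a sketch with hypothetical abundances (not the actual snail kite estimates):

```python
import numpy as np

# Hypothetical yearly abundance estimates from a robust design
N = np.array([850.0, 910.0, 980.0, 1020.0])
years = np.array([1997, 1998, 1999, 2000])

lam = N[1:] / N[:-1]                                 # interval-specific lambda
lam_overall = (N[-1] / N[0]) ** (1.0 / (N.size - 1))  # geometric mean
for y0, y1, l in zip(years[:-1], years[1:], lam):
    print(f"lambda {y0}-{y1}: {l:.3f}")
print(f"geometric-mean lambda: {lam_overall:.3f}")
```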
Tsanas, Athanasios; Zañartu, Matías; Little, Max A.; Fox, Cynthia; Ramig, Lorraine O.; Clifford, Gari D.
2014-01-01
There has been consistent interest among speech signal processing researchers in the accurate estimation of the fundamental frequency (F0) of speech signals. This study examines ten F0 estimation algorithms (some well-established and some proposed more recently) to determine which of these algorithms is, on average, better able to estimate F0 in the sustained vowel /a/. Moreover, a robust method for adaptively weighting the estimates of individual F0 estimation algorithms based on quality and performance measures is proposed, using an adaptive Kalman filter (KF) framework. The accuracy of the algorithms is validated using (a) a database of 117 synthetic realistic phonations obtained using a sophisticated physiological model of speech production and (b) a database of 65 recordings of human phonations where the glottal cycles are calculated from electroglottograph signals. On average, the sawtooth waveform inspired pitch estimator and the nearly defect-free algorithms provided the best individual F0 estimates, and the proposed KF approach resulted in a ∼16% improvement in accuracy over the best single F0 estimation algorithm. These findings may be useful in speech signal processing applications where sustained vowels are used to assess vocal quality, when very accurate F0 estimation is required. PMID:24815269
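The spirit of the fusion step can be sketched as inverse-variance weighting, which is what a Kalman measurement update reduces to for a static state with independent observations; in the paper the weights adapt over time via quality measures, and all numbers here are illustrative:

```python
import numpy as np

def fuse_f0(estimates, variances):
    """Combine per-frame F0 estimates from several algorithms by
    inverse-variance weighting (the static-state limit of a Kalman
    measurement update with independent observations)."""
    e = np.asarray(estimates, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)
    return np.sum(w * e) / np.sum(w)

# Hypothetical F0 readings (Hz) for one frame from three algorithms,
# with variances derived from per-algorithm quality measures
f0 = fuse_f0([119.5, 121.2, 118.9], [4.0, 1.0, 9.0])
print(f"fused F0: {f0:.2f} Hz")
```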
Functional Linear Model with Zero-value Coefficient Function at Sub-regions.
Zhou, Jianhui; Wang, Nae-Yuh; Wang, Naisyin
2013-01-01
We propose a shrinkage method to estimate the coefficient function in a functional linear regression model when the value of the coefficient function is zero within certain sub-regions. Besides identifying the null region in which the coefficient function is zero, we also aim to perform estimation and inferences for the nonparametrically estimated coefficient function without over-shrinking the values. Our proposal consists of two stages. In stage one, the Dantzig selector is employed to provide an initial location of the null region. In stage two, we propose a group SCAD approach to refine the estimated location of the null region and to provide the estimation and inference procedures for the coefficient function. Our considerations have certain advantages in this functional setup. One goal is to reduce the number of parameters employed in the model. With a one-stage procedure, a large number of knots would be needed to precisely identify the zero-coefficient region; however, the variation and estimation difficulties increase with the number of parameters. Owing to the additional refinement stage, we avoid this necessity and our estimator achieves superior numerical performance in practice. We show that our estimator enjoys the Oracle property; it identifies the null region with probability tending to 1, and it achieves the same asymptotic normality for the estimated coefficient function on the non-null region as the functional linear model estimator when the non-null region is known. Numerically, our refined estimator overcomes the shortcomings of the initial Dantzig estimator, which tends to under-estimate the absolute scale of non-zero coefficients. The performance of the proposed method is illustrated in simulation studies. We apply the method in an analysis of data collected by the Johns Hopkins Precursors Study, where the primary interests are in estimating the strength of association between body mass index in midlife and the quality of life in physical functioning at old age, and in identifying the effective age ranges where such associations exist.
Assessing fossil fuel CO2 emissions in California using atmospheric observations and models
NASA Astrophysics Data System (ADS)
Graven, H.; Fischer, M. L.; Lueker, T.; Jeong, S.; Guilderson, T. P.; Keeling, R. F.; Bambha, R.; Brophy, K.; Callahan, W.; Cui, X.; Frankenberg, C.; Gurney, K. R.; LaFranchi, B. W.; Lehman, S. J.; Michelsen, H.; Miller, J. B.; Newman, S.; Paplawsky, W.; Parazoo, N. C.; Sloop, C.; Walker, S. J.
2018-06-01
Analysis systems incorporating atmospheric observations could provide a powerful tool for validating fossil fuel CO2 (ffCO2) emissions reported for individual regions, provided that fossil fuel sources can be separated from other CO2 sources or sinks and atmospheric transport can be accurately accounted for. We quantified ffCO2 by measuring radiocarbon (14C) in CO2, an accurate fossil-carbon tracer, at nine observation sites in California for three months in 2014–15. There is strong agreement between the measurements and ffCO2 simulated using a high-resolution atmospheric model and a spatiotemporally-resolved fossil fuel flux estimate. Inverse estimates of total in-state ffCO2 emissions are consistent with the California Air Resources Board’s reported ffCO2 emissions, providing tentative validation of California’s reported ffCO2 emissions in 2014–15. Continuing this prototype analysis system could provide critical independent evaluation of reported ffCO2 emissions and emissions reductions in California, and the system could be expanded to other, more data-poor regions.
Estimating HIV Prevalence in Zimbabwe Using Population-Based Survey Data
Chinomona, Amos; Mwambi, Henry Godwell
2015-01-01
Estimates of HIV prevalence computed using data obtained by sampling a subgroup of the national population may lack representativeness of all the relevant domains of the population. These estimates are often computed on the assumption that HIV prevalence is uniform across all domains of the population. Use of appropriate statistical methods together with population-based survey data can improve estimation of national and subgroup-level HIV prevalence and can provide better explanations of the variation in HIV prevalence across different domains of the population. In this study we computed design-consistent estimates of HIV prevalence, and their respective 95% confidence intervals, at both the national and subgroup levels. In addition, we provided a multivariable survey logistic regression model, from a generalized linear modelling perspective, for explaining the variation in HIV prevalence using demographic, socio-economic, socio-cultural and behavioural factors. Essentially, this study borrows from the proximate determinants conceptual framework, which provides guiding principles on how socio-economic and socio-cultural variables affect HIV prevalence through biological and behavioural factors. We utilize the 2010–11 Zimbabwe Demographic and Health Survey (2010–11 ZDHS) data (which are population based) to estimate HIV prevalence in different categories of the population and to construct the logistic regression model. It was established that HIV prevalence varies greatly with age, gender, marital status, place of residence, literacy level, belief about whether condom use can reduce the risk of contracting HIV, and level of recent sexual activity, whereas there was no marked variation in HIV prevalence with social status (measured using a wealth index), method of contraception, or an individual's level of education. PMID:26624280
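A minimal sketch of a design-consistent (weighted) prevalence estimate, using an assumed design effect in place of full linearization-based variance estimation; the data are simulated, not ZDHS values:

```python
import numpy as np

def weighted_prevalence(y, w, deff=1.0):
    """Weighted prevalence with a normal-approximation 95% CI;
    'deff' is an assumed design effect inflating the variance."""
    y, w = np.asarray(y, float), np.asarray(w, float)
    p = np.sum(w * y) / np.sum(w)
    n_eff = np.sum(w) ** 2 / np.sum(w ** 2) / deff   # effective sample size
    se = np.sqrt(p * (1 - p) / n_eff)
    return p, (p - 1.96 * se, p + 1.96 * se)

# Simulated HIV test results (1 = positive) and survey weights
rng = np.random.default_rng(1)
y = rng.binomial(1, 0.15, size=1000)
w = rng.uniform(0.5, 2.0, size=1000)
p, ci = weighted_prevalence(y, w, deff=1.5)
print(f"prevalence: {p:.3f}, 95% CI: ({ci[0]:.3f}, {ci[1]:.3f})")
```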
Annual sediment flux estimates in a tidal strait using surrogate measurements
Ganju, N.K.; Schoellhamer, D.H.
2006-01-01
Annual suspended-sediment flux estimates through Carquinez Strait (the seaward boundary of Suisun Bay, California) are provided based on surrogate measurements for advective, dispersive, and Stokes drift flux. The surrogates are landward watershed discharge, suspended-sediment concentration at one location in the Strait, and the longitudinal salinity gradient. The first two surrogates substitute for tidally averaged discharge and velocity-weighted suspended-sediment concentration in the Strait, thereby providing advective flux estimates, while Stokes drift is estimated with suspended-sediment concentration alone. Dispersive flux is estimated using the product of longitudinal salinity gradient and the root-mean-square value of velocity-weighted suspended-sediment concentration as an added surrogate variable. Cross-sectional measurements validated the use of surrogates during the monitoring period. During high freshwater flow advective and dispersive flux were in the seaward direction, while landward dispersive flux dominated and advective flux approached zero during low freshwater flow. Stokes drift flux was consistently in the landward direction. Wetter than average years led to net export from Suisun Bay, while dry years led to net sediment import. Relatively low watershed sediment fluxes to Suisun Bay contribute to net export during the wet season, while gravitational circulation in Carquinez Strait and higher suspended-sediment concentrations in San Pablo Bay (seaward end of Carquinez Strait) are responsible for the net import of sediment during the dry season. Annual predictions of suspended-sediment fluxes, using these methods, will allow for a sediment budget for Suisun Bay, which has implications for marsh restoration and nutrient/contaminant transport. These methods also provide a general framework for estimating sediment fluxes in estuarine environments, where temporal and spatial variability of transport are large. © 2006 Elsevier Ltd. All rights reserved.
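A rough sketch of how the surrogate series combine into daily flux estimates; the unit conversions are standard, but the calibration coefficient and every series value are assumed for illustration:

```python
import numpy as np

# Hypothetical tidally averaged daily surrogates (all values assumed)
Q = np.array([800.0, 650.0, 400.0, 150.0])   # landward discharge, m^3/s
C = np.array([45.0, 40.0, 35.0, 30.0])       # SSC in the strait, g/m^3
dSdx = np.array([-0.4, -0.5, -0.8, -1.2])    # longitudinal salinity gradient

# Advective flux: discharge times concentration (g/s -> tonnes/day)
advective_t_per_day = Q * C * 86400 * 1e-6

# Dispersive flux surrogate: salinity gradient times rms SSC, scaled by an
# assumed calibration coefficient K (negative values here denote landward)
K = 2.0e3
dispersive_t_per_day = K * dSdx * np.sqrt(np.mean(C ** 2)) * 86400 * 1e-6

print("advective (t/day): ", advective_t_per_day.round(1))
print("dispersive (t/day):", dispersive_t_per_day.round(1))
```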
An estimating equation approach to dimension reduction for longitudinal data
Xu, Kelin; Guo, Wensheng; Xiong, Momiao; Zhu, Liping; Jin, Li
2016-01-01
Sufficient dimension reduction has been extensively explored in the context of independent and identically distributed data. In this article we generalize sufficient dimension reduction to longitudinal data and propose an estimating equation approach to estimating the central mean subspace. The proposed method accounts for the covariance structure within each subject and improves estimation efficiency when the covariance structure is correctly specified. Even if the covariance structure is misspecified, our estimator remains consistent. In addition, our method relaxes distributional assumptions on the covariates and is doubly robust. To determine the structural dimension of the central mean subspace, we propose a Bayesian-type information criterion. We show that the estimated structural dimension is consistent and that the estimated basis directions are root-$n$ consistent, asymptotically normal and locally efficient. Simulations and an analysis of the Framingham Heart Study data confirm the effectiveness of our approach. PMID:27017956
Eckermann, Simon; Coory, Michael; Willan, Andrew R
2011-02-01
Economic analysis and assessment of net clinical benefit often requires estimation of absolute risk difference (ARD) for binary outcomes (e.g. survival, response, disease progression) given baseline epidemiological risk in a jurisdiction of interest and trial evidence of treatment effects. Typically, the assumption is made that relative treatment effects are constant across baseline risk, in which case relative risk (RR) or odds ratios (OR) could be applied to estimate ARD. The objective of this article is to establish whether such use of RR or OR allows consistent estimates of ARD. ARD is calculated from alternative framing of effects (e.g. mortality vs survival) applying standard methods for translating evidence with RR and OR. For RR, the RR is applied to baseline risk in the jurisdiction to estimate treatment risk; for OR, the baseline risk is converted to odds, the OR applied and the resulting treatment odds converted back to risk. ARD is shown to be consistently estimated with OR but changes with framing of effects using RR wherever there is a treatment effect and epidemiological risk differs from trial risk. Additionally, in indirect comparisons, ARD is shown to be consistently estimated with OR, while calculation with RR allows inconsistency, with alternative framing of effects in the direction, let alone the extent, of ARD. OR ensures consistent calculation of ARD in translating evidence from trial settings and across trials in direct and indirect comparisons, avoiding inconsistencies from RR with alternative outcome framing and associated biases. These findings are critical for consistently translating evidence to inform economic analysis and assessment of net clinical benefit, as translation of evidence is proposed precisely where the advantages of OR over RR arise.
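The framing argument is easy to verify numerically: compute ARD at a jurisdiction baseline risk from the same hypothetical trial under mortality and survival framings. The OR route gives the same answer both ways; the RR route does not:

```python
def ard_from_or(p0, or_):
    """ARD via odds ratio: convert baseline risk to odds, apply OR, convert back."""
    odds1 = p0 / (1 - p0) * or_
    return odds1 / (1 + odds1) - p0

def ard_from_rr(p0, rr):
    """ARD via relative risk applied directly to baseline risk."""
    return p0 * rr - p0

# Hypothetical trial: mortality 0.4 (control) vs 0.3 (treatment)
rr_death, rr_surv = 0.3 / 0.4, 0.7 / 0.6          # the two framings
or_death = (0.3 / 0.7) / (0.4 / 0.6)              # OR is framing-symmetric

p0 = 0.2                                           # jurisdiction baseline risk
print(f"RR, death framing:    {ard_from_rr(p0, rr_death):+.3f}")
print(f"RR, survival framing: {-((1 - p0) * rr_surv - (1 - p0)):+.3f}")
print(f"OR, death framing:    {ard_from_or(p0, or_death):+.3f}")
print(f"OR, survival framing: {-ard_from_or(1 - p0, 1 / or_death):+.3f}")
```

With these numbers, the RR route yields -0.050 or -0.133 depending on framing, while the OR route yields -0.062 either way.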
Rachel Riemann; Ty Wilson; Andrew Lister
2012-01-01
We recently developed an assessment protocol that provides information on the magnitude, location, frequency and type of error in geospatial datasets of continuous variables (Riemann et al. 2010). The protocol consists of a suite of assessment metrics which include an examination of data distributions and areas estimates, at several scales, examining each in the form...
NASA Technical Reports Server (NTRS)
Buntine, Wray
1994-01-01
IND computer program introduces Bayesian and Markov/maximum-likelihood (MML) methods and more-sophisticated methods of searching in growing trees. Produces more-accurate class-probability estimates important in applications like diagnosis. Provides range of features and styles with convenience for casual user, fine-tuning for advanced user or for those interested in research. Consists of four basic kinds of routines: data-manipulation, tree-generation, tree-testing, and tree-display. Written in C language.
Model for macroevolutionary dynamics.
Maruvka, Yosef E; Shnerb, Nadav M; Kessler, David A; Ricklefs, Robert E
2013-07-02
The highly skewed distribution of species among genera, although challenging to macroevolutionists, provides an opportunity to understand the dynamics of diversification, including species formation, extinction, and morphological evolution. Early models were based on either the work by Yule [Yule GU (1925) Philos Trans R Soc Lond B Biol Sci 213:21-87], which neglects extinction, or a simple birth-death (speciation-extinction) process. Here, we extend the more recent development of a generic, neutral speciation-extinction (of species)-origination (of genera; SEO) model for macroevolutionary dynamics of taxon diversification. Simulations show that deviations from the homogeneity assumptions in the model can be detected in species-per-genus distributions. The SEO model fits observed species-per-genus distributions well for class-to-kingdom-sized taxonomic groups. The model's predictions for the appearance times (the time of the first existing species) of the taxonomic groups also approximately match estimates based on molecular inference and fossil records. Unlike estimates based on analyses of phylogenetic reconstruction, fitted extinction rates for large clades are close to speciation rates, consistent with high rates of species turnover and the relatively slow change in diversity observed in the fossil record. Finally, the SEO model generally supports the consistency of generic boundaries based on morphological differences between species and provides a comparator for rates of lineage splitting and morphological evolution.
Mining Rare Events Data for Assessing Customer Attrition Risk
NASA Astrophysics Data System (ADS)
Au, Tom; Chin, Meei-Ling Ivy; Ma, Guangqin
Customer attrition refers to the phenomenon whereby a customer leaves a service provider. As competition intensifies, preventing customers from leaving is a major challenge to many businesses such as telecom service providers. Research has shown that retaining existing customers is more profitable than acquiring new customers due primarily to savings on acquisition costs, the higher volume of service consumption, and customer referrals. For a large enterprise, the customer base can consist of tens of millions of service subscribers, and events such as switching to competitors or canceling services are often large in absolute number but rare in percentage terms, far less than 5%. Based on a simple random sample, popular statistical procedures, such as logistic regression, tree-based methods and neural networks, can sharply underestimate the probability of rare events, and often result in a null model (no significant predictors). To improve efficiency and accuracy of event probability estimation, a case-based data collection technique is then considered. A case-based sample is formed by taking all available events and a small, but representative, fraction of nonevents from a dataset of interest. In this article we show a consistent prior correction method for event probability estimation and demonstrate the performance of the above data collection technique in predicting customer attrition with actual telecommunications data.
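A sketch of the prior-correction idea in the spirit of King and Zeng (2001), using simulated data: fit a logit on the case-based sample, then shift the intercept back to the population event rate; the slope is unaffected by this sampling scheme.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n, b0, b1 = 500_000, -5.0, 1.0
x = rng.normal(size=n)
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(b0 + b1 * x))))
tau = y.mean()                        # population event rate (rare)

# Case-based sample: all events plus an equal number of nonevents
ev = np.flatnonzero(y == 1)
non = rng.choice(np.flatnonzero(y == 0), size=ev.size, replace=False)
idx = np.concatenate([ev, non])
res = sm.Logit(y[idx], sm.add_constant(x[idx])).fit(disp=0)

# Prior correction: shift the intercept from the sample event fraction
# back to the population rate tau
ybar = y[idx].mean()
b0_corrected = res.params[0] - np.log((1 - tau) / tau * ybar / (1 - ybar))
print(f"raw intercept: {res.params[0]:.2f}, corrected: {b0_corrected:.2f}, "
      f"true: {b0:.2f}, slope: {res.params[1]:.2f}")
```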
Magnetospheric Multiscale Mission (MMS) Phase 2B Navigation Performance
NASA Technical Reports Server (NTRS)
Scaperoth, Paige Thomas; Long, Anne; Carpenter, Russell
2009-01-01
The Magnetospheric Multiscale (MMS) formation flying mission, which consists of four spacecraft flying in a tetrahedral formation, has challenging navigation requirements associated with determining and maintaining the relative separations required to meet the science requirements. The baseline navigation concept for MMS is for each spacecraft to independently estimate its position, velocity and clock states using GPS pseudorange data provided by the Goddard Space Flight Center-developed Navigator receiver and maneuver acceleration measurements provided by the spacecraft's attitude control subsystem. State estimation is performed onboard in real-time using the Goddard Enhanced Onboard Navigation System flight software, which is embedded in the Navigator receiver. The current concept of operations for formation maintenance consists of a sequence of two maintenance maneuvers that is performed every 2 weeks. Phase 2b of the MMS mission, in which the spacecraft are in 1.2 x 25 Earth radii orbits with nominal separations at apogee ranging from 30 km to 400 km, has the most challenging navigation requirements because, during this phase, GPS signal acquisition is restricted to less than one day of the 2.8-day orbit. This paper summarizes the results from high-fidelity simulations to determine if the MMS navigation requirements can be met between and immediately following the maintenance maneuver sequence in Phase 2b.
Amano, Nobuko; Nakamura, Tomiyo
2018-02-01
The visual estimation method is commonly used in hospitals and other care facilities to evaluate food intake through estimation of plate waste. In Japan, no previous studies have investigated the validity and reliability of this method under the routine conditions of a hospital setting. The present study aimed to evaluate the validity and reliability of the visual estimation method in long-term inpatients with different levels of eating disability caused by Alzheimer's disease. The patients were provided different therapeutic diets presented in various food types. This study was performed between February and April 2013, and 82 patients with Alzheimer's disease were included. Plate waste was evaluated for the 3 main daily meals for a total of 21 days, 7 consecutive days during each of the 3 months, yielding a total of 4851 meals, of which 3984 were included. Plate waste was measured by the nurses through the visual estimation method, and by the hospital's registered dietitians through the actual measurement method. The actual measurement method was first validated to serve as a reference, and the level of agreement between both methods was then determined. The month, time of day, type of food provided, and patients' physical characteristics were considered for analysis. For the 3984 meals included in the analysis, the level of agreement between the measurement methods was 78.4%. Disagreement consisted of underestimation in 3.8% of meals and overestimation in 17.8%. Cronbach's α (0.60, P < 0.001) indicated that the reliability of the visual estimation method was within the acceptable range. The visual estimation method was found to be a valid and reliable method for estimating food intake in patients with different levels of eating impairment. The successful implementation and use of the method depends upon adequate training and motivation of the nurses and care staff involved. Copyright © 2017 European Society for Clinical Nutrition and Metabolism. Published by Elsevier Ltd. All rights reserved.
Mathias, Susan D; Gao, Sue K; Rutstein, Mark; Snyder, Claire F; Wu, Albert W; Cella, David
2009-02-01
Interpretation of data from health-related quality of life (HRQoL) questionnaires can be enhanced with the availability of minimally important difference (MID) estimates. This information will aid clinicians in interpreting HRQoL differences within patients over time and between treatment groups. The Immune Thrombocytopenic Purpura (ITP)-Patient Assessment Questionnaire (PAQ) is the only comprehensive HRQoL questionnaire available for adults with ITP. Forty centers from within the US and Europe enrolled ITP patients into one of two multicenter, randomized, placebo-controlled, double-blind, 6-month, phase III clinical trials of romiplostim. Patients enrolled in these studies self-administered the ITP-PAQ and two items assessing global change (anchors) at baseline and weeks 4, 12, and 24. Using data from the ITP-PAQ and these two anchors, an anchor-based estimate was computed and combined with the standard error of measurement and standard deviation to compute a distribution-based estimate in order to provide an MID range for each of the 11 scales of the ITP-PAQ. A total of 125 patients participated in these clinical trials and provided data for use in these analyses. Combining results from anchor- and distribution-based approaches, MID values were computed for 9 of the 11 scales. MIDs ranged from 8 to 12 points for Symptoms, Bother, Psychological, Overall QOL, Social Activity, Menstrual Symptoms, and Fertility, while the range was 10 to 15 points for the Fatigue and Activity scales of the ITP-PAQ. These estimates, while slightly higher than other published MID estimates, were consistent with moderate effect sizes. These MID estimates will serve as a useful tool to researchers and clinicians using the ITP-PAQ, providing guidance for interpretation of baseline scores as well as changes in ITP-PAQ scores over time. Additional work should be done to finalize these initial estimates using more appropriate anchors that correlate more highly with the ITP-PAQ scales.
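A minimal sketch of the two families of estimates being triangulated; the formulas are standard, but the anchor coding and all data below are assumed:

```python
import numpy as np

def distribution_mid(scores, reliability=0.8):
    """Distribution-based anchors: the standard error of measurement,
    SEM = SD * sqrt(1 - reliability), and the 0.5*SD benchmark."""
    sd = np.std(scores, ddof=1)
    return sd * np.sqrt(1.0 - reliability), 0.5 * sd

def anchor_mid(change, anchor, small_change_code=1):
    """Anchor-based MID: mean change among patients whose global-change
    response indicates a small but noticeable improvement."""
    change, anchor = np.asarray(change), np.asarray(anchor)
    return change[anchor == small_change_code].mean()

rng = np.random.default_rng(7)
baseline = rng.normal(60.0, 20.0, 125)    # hypothetical 0-100 scale scores
change = rng.normal(8.0, 12.0, 125)       # week 24 minus baseline
anchor = rng.integers(0, 3, 125)          # 1 = "a little better" (assumed)

sem, half_sd = distribution_mid(baseline, reliability=0.8)
print(f"SEM: {sem:.1f}  0.5*SD: {half_sd:.1f}  "
      f"anchor MID: {anchor_mid(change, anchor):.1f}")
```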
Multi-scale occupancy estimation and modelling using multiple detection methods
Nichols, James D.; Bailey, Larissa L.; O'Connell, Allan F.; Talancy, Neil W.; Grant, Evan H. Campbell; Gilbert, Andrew T.; Annand, Elizabeth M.; Husband, Thomas P.; Hines, James E.
2008-01-01
Occupancy estimation and modelling based on detection–nondetection data provide an effective way of exploring change in a species' distribution across time and space in cases where the species is not always detected with certainty. Today, many monitoring programmes target multiple species, or life stages within a species, requiring the use of multiple detection methods. When multiple methods or devices are used at the same sample sites, animals can be detected by more than one method. We develop occupancy models for multiple detection methods that permit simultaneous use of data from all methods for inference about method-specific detection probabilities. Moreover, the approach permits estimation of occupancy at two spatial scales: the larger scale corresponds to species' use of a sample unit, whereas the smaller scale corresponds to presence of the species at the local sample station or site. We apply the models to data collected on two different vertebrate species: striped skunks Mephitis mephitis and red salamanders Pseudotriton ruber. For striped skunks, large-scale occupancy estimates were consistent between two sampling seasons. Small-scale occupancy probabilities were slightly lower in the late winter/spring when skunks tend to conserve energy, and movements are limited to males in search of females for breeding. There was strong evidence of method-specific detection probabilities for skunks. As anticipated, large- and small-scale occupancy areas completely overlapped for red salamanders. The analyses provided weak evidence of method-specific detection probabilities for this species. Synthesis and applications. Increasingly, many studies are utilizing multiple detection methods at sampling locations. The modelling approach presented here makes efficient use of detections from multiple methods to estimate occupancy probabilities at two spatial scales and to compare detection probabilities associated with different detection methods. The models can be viewed as another variation of Pollock's robust design and may be applicable to a wide variety of scenarios where species occur in an area but are not always near the sampled locations. The estimation approach is likely to be especially useful in multispecies conservation programmes by providing efficient estimates using multiple detection devices and by providing device-specific detection probability estimates for use in survey design.
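A small sketch of the multi-method detection algebra: assuming independent methods, the chance that a locally present species is detected by at least one method is one minus the product of the per-method miss probabilities, and the unconditional probability stacks the two occupancy scales on top. All values are illustrative:

```python
import numpy as np

def p_detect_any(p_methods):
    """Probability that a locally present species is detected by at
    least one of several independent detection methods."""
    p = np.asarray(p_methods, dtype=float)
    return 1.0 - np.prod(1.0 - p)

# Hypothetical per-survey detection probabilities for three devices
print(f"{p_detect_any([0.3, 0.5, 0.2]):.3f}")            # 0.720

# Unconditional detection probability at a station:
# psi (use of the sample unit) * theta (local presence) * P(any detection)
psi, theta = 0.8, 0.6
print(f"{psi * theta * p_detect_any([0.3, 0.5, 0.2]):.3f}")
```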
New agreement measures based on survival processes
Guo, Ying; Li, Ruosha; Peng, Limin; Manatunga, Amita K.
2013-01-01
The need to assess agreement arises in many scenarios in biomedical sciences when measurements were taken by different methods on the same subjects. When the endpoints are survival outcomes, the study of agreement becomes more challenging given the special characteristics of time-to-event data. In this paper, we propose a new framework for assessing agreement based on survival processes that can be viewed as a natural representation of time-to-event outcomes. Our new agreement measure is formulated as the chance-corrected concordance between survival processes. It provides a new perspective for studying the relationship between correlated survival outcomes and offers an appealing interpretation as the agreement between survival times on the absolute distance scale. We provide a multivariate extension of the proposed agreement measure for multiple methods. Furthermore, the new framework enables a natural extension to evaluate time-dependent agreement structure. We develop nonparametric estimation of the proposed new agreement measures. Our estimators are shown to be strongly consistent and asymptotically normal. We evaluate the performance of the proposed estimators through simulation studies and then illustrate the methods using a prostate cancer data example. PMID:23844617
Viscous self interacting dark matter and cosmic acceleration
NASA Astrophysics Data System (ADS)
Atreya, Abhishek; Bhatt, Jitesh R.; Mishra, Arvind
2018-02-01
Self interacting dark matter (SIDM) provides a consistent solution to certain astrophysical observations in conflict with the collisionless cold DM paradigm. In this work we estimate the shear viscosity (η) and bulk viscosity (ζ) of SIDM, within the kinetic theory formalism, for galactic and cluster-size SIDM halos. To that end we make use of the recent constraints on the SIDM cross-section for dwarf galaxies, LSB galaxies and clusters. We also estimate the change in the solution of Einstein's equations due to these viscous effects and find that σ/m constraints on SIDM from astrophysical data provide sufficient viscosity to account for the observed cosmic acceleration at the present epoch, without the need for any additional dark energy component. Using estimates of the dark matter density for galactic and cluster-size halos, we find that the mean free path of dark matter is ~ a few Mpc. Thus the smallest scale at which the viscous effects start to play a role is the cluster scale. Astrophysical data for dwarf galaxies, LSB galaxies and clusters also seem to suggest the same. The entire analysis is independent of any specific particle-physics-motivated model for SIDM.
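The mean-free-path claim can be checked at order of magnitude from λ = 1/((σ/m)ρ); the cross section and density below are round numbers assumed for illustration, not the paper's fitted values:

```python
# Order-of-magnitude check of the SIDM mean free path, lambda = 1/((sigma/m)*rho),
# using illustrative round numbers (assumed, not taken from the paper)
M_PER_MPC = 3.086e22

sigma_over_m = 1.0e-4 / 1.0e-3   # 1 cm^2/g expressed in m^2/kg = 0.1
rho = 1.0e-25 * 1.0e3            # 1e-25 g/cm^3 expressed in kg/m^3 = 1e-22

mfp_m = 1.0 / (sigma_over_m * rho)            # mean free path in meters
print(f"mean free path ~ {mfp_m / M_PER_MPC:.1f} Mpc")   # ~ 3 Mpc
```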
Vehicle tracking using fuzzy-based vehicle detection window with adaptive parameters
NASA Astrophysics Data System (ADS)
Chitsobhuk, Orachat; Kasemsiri, Watjanapong; Glomglome, Sorayut; Lapamonpinyo, Pipatphon
2018-04-01
In this paper, a fuzzy-based vehicle tracking system is proposed. The proposed system consists of two main processes: vehicle detection and vehicle tracking. In the first process, the Gradient-based Adaptive Threshold Estimation (GATE) algorithm is adopted to provide a suitable threshold value for Sobel edge detection. The estimated threshold adapts to the changes of diverse illumination conditions throughout the day. This leads to greater vehicle detection performance compared to a fixed user-defined threshold. In the second process, this paper proposes a novel vehicle tracking algorithm, namely Fuzzy-based Vehicle Analysis (FBA), in order to reduce false estimates in vehicle tracking caused by the uneven edges of large vehicles and by vehicles changing lanes. The proposed FBA algorithm employs the average edge density and the Horizontal Moving Edge Detection (HMED) algorithm to alleviate those problems, adopting fuzzy rule-based algorithms to rectify the vehicle tracking. The experimental results demonstrate that the proposed system provides high vehicle detection accuracy of about 98.22%. In addition, it also offers a low false detection rate of about 3.92%.
NASA Technical Reports Server (NTRS)
Meneghini, Robert; Kim, Hyokyung; Liao, Liang; Jones, Jeffrey A.; Kwiatkowski, John M.
2015-01-01
It has long been recognized that path-integrated attenuation (PIA) can be used to improve precipitation estimates from high-frequency weather radar data. One approach that provides an estimate of this quantity from airborne or spaceborne radar data is the surface reference technique (SRT), which uses measurements of the surface cross section in the presence and absence of precipitation. Measurements from the dual-frequency precipitation radar (DPR) on the Global Precipitation Measurement (GPM) satellite afford the first opportunity to test the method for spaceborne radar data at Ka band as well as for the Ku-band/Ka-band combination. The study begins by reviewing the basis of the single- and dual-frequency SRT. As the performance of the method is closely tied to the behavior of the normalized radar cross section (NRCS or σ0) of the surface, the statistics of σ0 derived from DPR measurements are given as a function of incidence angle and frequency for ocean and land backgrounds over a 1-month period. Several independent estimates of the PIA, formed by means of different surface reference datasets, can be used to test the consistency of the method since, in the absence of error, the estimates should be identical. Along with theoretical considerations, the comparisons provide an initial assessment of the performance of the single- and dual-frequency SRT for the DPR. The study finds that the dual-frequency SRT can provide improvement in the accuracy of path attenuation estimates relative to the single-frequency method, particularly at Ku band.
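A schematic of the two reference constructions, with hypothetical cross sections and the convention that the dual-frequency SRT returns the differential (Ka minus Ku) PIA:

```python
def pia_single(sigma0_ref, sigma0_rain):
    """Single-frequency SRT: the apparent PIA (dB) is the drop in surface
    cross section relative to its rain-free reference."""
    return sigma0_ref - sigma0_rain

def pia_dual(ku_ref, ka_ref, ku_rain, ka_rain):
    """Dual-frequency SRT: the differential (Ka minus Ku) PIA, which cancels
    surface variability that is correlated across the two frequencies."""
    return (ka_ref - ka_rain) - (ku_ref - ku_rain)

# Hypothetical surface cross sections (dB) over ocean
print(pia_single(sigma0_ref=10.0, sigma0_rain=7.5))                  # 2.5 dB at Ku
print(pia_dual(ku_ref=10.0, ka_ref=8.0, ku_rain=7.5, ka_rain=1.0))   # 4.5 dB
```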
Development of Risk Uncertainty Factors from Historical NASA Projects
NASA Technical Reports Server (NTRS)
Amer, Tahani R.
2011-01-01
NASA is a good investment of federal funds and strives to provide the best value to the nation. NASA has consistently budgeted to unrealistic cost estimates, which is evident in the cost growth of many of its programs. In this investigation, NASA has been using available uncertainty factors from the Aerospace Corporation, the Air Force, and Booz Allen Hamilton to develop project risk postures. NASA has no insight into the development of these factors and, as demonstrated here, this can lead to unrealistic risks in many NASA programs and projects (P/p). The primary contribution of this project is the development of NASA mission uncertainty factors, from actual historical NASA projects, to aid cost estimating as well as independent reviews, which provide NASA senior management with information and analysis for making appropriate decisions regarding P/p. In general terms, this research project advances programmatic analysis for NASA projects.
Ruist, Joakim
2016-11-01
This study investigates the effects of the macroeconomic context on attitudes to immigration. Earlier studies in some cases do not provide significant empirical support for the existence of important such effects. In this article it is argued that this lack of consistent evidence is mainly due to the cross-national setup of these studies being vulnerable to estimation bias caused by country-specific factors. The present study instead analyzes attitude variation within countries over time. The results provide firm empirical support for macroeconomic variation importantly affecting attitudes to immigration. As an illustration, the estimates indicate that the number of individuals in the average European country in 2012 who were against all immigration from poorer countries outside Europe was 40% higher than it would have been if macroeconomic conditions in that year had been as good as they were in 2006. Copyright © 2016 Elsevier Inc. All rights reserved.
Rapid estimation of high-parameter auditory-filter shapes
Shen, Yi; Sivakumar, Rajeswari; Richards, Virginia M.
2014-01-01
A Bayesian adaptive procedure, the quick-auditory-filter (qAF) procedure, was used to estimate auditory-filter shapes that were asymmetric about their peaks. In three experiments, listeners who were naive to psychoacoustic experiments detected a fixed-level, pure-tone target presented with a spectrally notched noise masker. The qAF procedure adaptively manipulated the masker spectrum level and the position of the masker notch, which was optimized for the efficient estimation of the five parameters of an auditory-filter model. Experiment I demonstrated that the qAF procedure provided a convergent estimate of the auditory-filter shape at 2 kHz within 150 to 200 trials (approximately 15 min to complete) and, for a majority of listeners, excellent test-retest reliability. In experiment II, asymmetric auditory filters were estimated for target frequencies of 1 and 4 kHz and target levels of 30 and 50 dB sound pressure level. The estimated filter shapes were generally consistent with published norms, especially at the low target level. It is known that the auditory-filter estimates are narrower for forward masking than simultaneous masking due to peripheral suppression, a result replicated in experiment III using fewer than 200 qAF trials. PMID:25324086
Regularized estimation of Euler pole parameters
NASA Astrophysics Data System (ADS)
Aktuğ, Bahadir; Yildirim, Ömer
2013-07-01
Euler vectors provide a unified framework to quantify the relative or absolute motions of tectonic plates through various geodetic and geophysical observations. With the advent of space geodesy, Euler parameters of several relatively small plates have been determined through the velocities derived from the space geodesy observations. However, the available data are usually insufficient in number and quality to estimate both the Euler vector components and the Euler pole parameters reliably. Since Euler vectors are defined globally in an Earth-centered Cartesian frame, estimation with the limited geographic coverage of the local/regional geodetic networks usually results in highly correlated vector components. In the case of estimating the Euler pole parameters directly, the situation is even worse, and the position of the Euler pole is nearly collinear with the magnitude of the rotation rate. In this study, a new method, which consists of an analytical derivation of the covariance matrix of the Euler vector in an ideal network configuration, is introduced and a regularized estimation method specifically tailored for estimating the Euler vector is presented. The results show that the proposed method outperforms the least squares estimation in terms of the mean squared error.
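For context, the forward model being inverted is the rigid-rotation relation v = ω × r; a minimal sketch with an assumed pole and site (values are illustrative only):

```python
import numpy as np

R_EARTH = 6371e3  # mean Earth radius, m

def unit_vec(lat_deg, lon_deg):
    """Earth-centered Cartesian unit vector for a geographic position."""
    lat, lon = np.radians([lat_deg, lon_deg])
    return np.array([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)])

def plate_velocity(pole_lat, pole_lon, rate_deg_per_myr, site_lat, site_lon):
    """Surface velocity v = omega x r implied by an Euler vector, in mm/yr."""
    omega = np.radians(rate_deg_per_myr) / 1e6 * unit_vec(pole_lat, pole_lon)
    r = R_EARTH * unit_vec(site_lat, site_lon)
    return np.cross(omega, r) * 1e3   # m/yr -> mm/yr

# Hypothetical pole and site (assumed for illustration only)
v = plate_velocity(50.0, -75.0, 0.25, site_lat=39.9, site_lon=32.8)
print(f"speed: {np.linalg.norm(v):.1f} mm/yr")
```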
NASA Astrophysics Data System (ADS)
Miller, S. M.; Andrews, A. E.; Benmergui, J. S.; Commane, R.; Dlugokencky, E. J.; Janssens-Maenhout, G.; Melton, J. R.; Michalak, A. M.; Sweeney, C.; Worthy, D. E. J.
2015-12-01
Existing estimates of methane fluxes from wetlands differ in both magnitude and distribution across North America. We discuss seven different bottom-up methane estimates in the context of atmospheric methane data collected across the US and Canada. In the first component of this study, we explore whether the observation network can even detect a methane pattern from wetlands. We find that the observation network can identify a methane pattern from Canadian wetlands but not reliably from US wetlands. Over Canada, the network can even identify spatial patterns at multi-province scales. Over the US, by contrast, anthropogenic emissions and modeling errors obscure atmospheric patterns from wetland fluxes. In the second component of the study, we then use these observations to reconcile disagreements in the magnitude, seasonal cycle, and spatial distribution of existing estimates. Most existing estimates predict fluxes that are too large with a seasonal cycle that is too narrow. A model known as LPJ-Bern has the spatial distribution most consistent with atmospheric observations. By contrast, a spatially constant model outperforms the distribution of most existing flux estimates across Canada. The results presented here provide several pathways to reduce disagreements among existing wetland flux estimates across North America.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Faigler, S.; Mazeh, T.
We analyzed the Kepler light curves of four transiting hot Jupiter systems (KOI-13, HAT-P-7, TrES-2, and Kepler-76) which show BEaming, Ellipsoidal, and Reflection (BEER) phase modulations. The mass of the four planets can be estimated from either the beaming or the ellipsoidal amplitude, given the mass and radius of their parent stars. For KOI-13, HAT-P-7, and Kepler-76 we find that the beaming-based planetary mass estimate is larger than the mass estimated from the ellipsoidal amplitude, consistent with previous studies. This apparent discrepancy may be explained by equatorial superrotation of the planet atmosphere, which induces an angle shift of the planet reflection/emission phase modulation, as was suggested for Kepler-76 in the first paper of this series. We propose a modified BEER model that supports superrotation, assuming either a Lambertian or geometric reflection/emission phase function, and provides a photometry-consistent estimate of the planetary mass. Our analysis shows that for Kepler-76 and HAT-P-7, the Lambertian superrotation BEER model is highly preferable over an unshifted null model, while for KOI-13 it is preferable only at a 1.4σ level. For TrES-2 we do not find such preference. For all four systems the Lambertian superrotation model mass estimates are in excellent agreement with the planetary masses derived from, or constrained by, radial velocity measurements. This makes the Lambertian superrotation BEER model a viable tool for estimating the masses of hot Jupiters from photometry alone. We conclude that hot Jupiter superrotation may be a common phenomenon that can be detected in the visual light curves of Kepler.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Xingchang; Wang, Chuankuan; Bond-Lamberty, Benjamin
Carbon dioxide (CO2) fluxes between terrestrial ecosystems and the atmosphere are primarily measured with eddy covariance (EC), biometric, and chamber methods. However, it is unclear why the estimates of CO2 fluxes, when measured using these different methods, converge at some sites but diverge at others. We synthesized a novel global dataset of forest CO2 fluxes to evaluate the consistency between EC and biometric or chamber methods for quantifying the CO2 budget in forests. Compared with the other two methods, the EC approach tended to produce a 25% higher estimate of net ecosystem production (NEP, 0.52 Mg C ha-1 yr-1), a 10% lower estimate of ecosystem respiration (Re, 1.39 Mg C ha-1 yr-1), and a 3% lower estimate of gross primary production (0.48 Mg C ha-1 yr-1), with the NEP discrepancy mainly resulting from the lower EC-estimated Re. The discrepancies between EC and the other methods were higher at sites with complex topography and dense canopies than at those with flat topography and open canopies. Forest age also influenced the discrepancy through changes in leaf area index. The open-path EC system induced >50% of the discrepancy in NEP, presumably due to its surface heating effect. These results provide strong evidence that EC produces biased estimates of NEP and Re in forest ecosystems. A global extrapolation suggested that the discrepancies in CO2 fluxes between methods were consistent with a global underestimation of Re, and overestimation of NEP, by the EC method. Accounting for these discrepancies would substantially improve our estimates of the terrestrial carbon budget.
Han, Paul K J; Klein, William M P; Lehman, Tom; Killam, Bill; Massett, Holly; Freedman, Andrew N
2011-01-01
To examine the effects of communicating uncertainty regarding individualized colorectal cancer risk estimates and to identify factors that influence these effects. Two Web-based experiments were conducted, in which adults aged 40 years and older were provided with hypothetical individualized colorectal cancer risk estimates differing in the extent and representation of expressed uncertainty. The uncertainty consisted of imprecision (otherwise known as "ambiguity") of the risk estimates and was communicated using different representations of confidence intervals. Experiment 1 (n = 240) tested the effects of ambiguity (confidence interval v. point estimate) and representational format (textual v. visual) on cancer risk perceptions and worry. Potential effect modifiers, including personality type (optimism), numeracy, and the information's perceived credibility, were examined, along with the influence of communicating uncertainty on responses to comparative risk information. Experiment 2 (n = 135) tested enhanced representations of ambiguity that incorporated supplemental textual and visual depictions. Communicating uncertainty led to heightened cancer-related worry in participants, exemplifying the phenomenon of "ambiguity aversion." This effect was moderated by representational format and dispositional optimism; textual (v. visual) format and low (v. high) optimism were associated with greater ambiguity aversion. However, when enhanced representations were used to communicate uncertainty, textual and visual formats showed similar effects. Both the communication of uncertainty and use of the visual format diminished the influence of comparative risk information on risk perceptions. The communication of uncertainty regarding cancer risk estimates has complex effects, which include heightening cancer-related worry (consistent with ambiguity aversion) and diminishing the influence of comparative risk information on risk perceptions. These responses are influenced by representational format and personality type, and the influence of format appears to be modifiable and content dependent.
Estimating the value of non-use benefits from small changes in the provision of ecosystem services.
Dutton, Adam; Edwards-Jones, Gareth; Macdonald, David W
2010-12-01
The unit of trade in ecosystem services is usually the use of a proportion of the parcels of land associated with a given service. Valuing small changes in the provision of an ecosystem service presents obstacles, particularly when the service provides non-use benefits, as is the case with conservation of most plants and animals. Quantifying non-use values requires stated-preference valuations. Stated-preference valuations can provide estimates of the public's willingness to pay for a broad conservation goal. Nevertheless, stated-preference valuations can be expensive and do not produce consistent measures for varying levels of provision of a service. Additionally, the unit of trade, land use, is not always linearly related to the level of ecosystem services the land might provide. To overcome these obstacles, we developed a method to estimate the value of a marginal change in the provision of a non-use ecosystem service, in this case conservation of plants or animals associated with a given land-cover type. Our method serves as a tool for calculating transferable valuations of small changes in the provision of ecosystem services relative to the existing provision. Valuation is achieved through stated-preference investigations, calculation of a unit value for a parcel of land, and the weighting of this parcel by its ability to provide the desired ecosystem service and its effect on the ability of the surrounding land parcels to provide the desired service. We used the water vole (Arvicola terrestris) as a case study to illustrate the method. The average present value of a meter of water vole habitat was estimated at UK £12, but the marginal value of a meter (based on our methods) could range between £0 and £40 or more. © 2010 Society for Conservation Biology.
Transportation Sector Model of the National Energy Modeling System. Volume 2 -- Appendices: Part 2
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
The attachments contained within this appendix provide additional details about the model development and estimation process which do not easily lend themselves to incorporation in the main body of the model documentation report. The information provided in these attachments is not integral to the understanding of the model's operation, but provides the reader with the opportunity to gain a deeper understanding of some of the model's underlying assumptions. There will be a slight degree of replication of materials found elsewhere in the documentation, made unavoidable by the dictates of internal consistency. Each attachment is associated with a specific component of the transportation model; the presentation follows the same sequence of modules employed in Volume 1. The following attachments are contained in Appendix F: Fuel Economy Model (FEM)--provides a discussion of the FEM vehicle demand and performance by size class models; Alternative Fuel Vehicle (AFV) Model--describes data input sources and extrapolation methodologies; Light-Duty Vehicle (LDV) Stock Model--discusses the fuel economy gap estimation methodology; Light Duty Vehicle Fleet Model--presents the data development for business, utility, and government fleet vehicles; Light Commercial Truck Model--describes the stratification methodology and data sources employed in estimating the stock and performance of LCTs; Air Travel Demand Model--presents the derivation of the demographic index, used to modify estimates of personal travel demand; and Airborne Emissions Model--describes the derivation of emissions factors used to associate transportation measures to levels of airborne emissions of several pollutants.
Gitonga, Caroline W.; Gillig, Jonathan; Owaga, Chrispin; Marube, Elizabeth; Odongo, Wycliffe; Okoth, Albert; China, Pauline; Oriango, Robin; Brooker, Simon J.; Bousema, Teun; Drakeley, Chris; Cox, Jonathan
2013-01-01
Background: School surveys provide an operational approach to assess malaria transmission through parasite prevalence. There is limited evidence on the comparability of prevalence estimates obtained from school and community surveys carried out at the same locality. Methods: Concurrent school and community cross-sectional surveys were conducted in 46 school/community clusters in the western Kenyan highlands, and households of school children were geolocated. Malaria was assessed by rapid diagnostic test (RDT) and combined seroprevalence of antibodies to blood-stage Plasmodium falciparum antigens. Results: RDT prevalence in school and community populations was 25.7% (95% CI: 24.4-26.8) and 15.5% (95% CI: 14.4-16.7), respectively. Seroprevalence in the school and community populations was 51.9% (95% CI: 50.5-53.3) and 51.5% (95% CI: 49.5-52.9), respectively. RDT prevalence in schools could differentiate between low (<7%, 95% CI: 0-19%) and high (>39%, 95% CI: 25-49%) transmission areas in the community and, after a simple adjustment, was concordant with the community estimates. Conclusions: Estimates of malaria prevalence from school surveys were consistently higher than those from community surveys and were strongly correlated. School-based estimates can be used as a reliable indicator of malaria transmission intensity in the wider community and may provide a basis for identifying priority areas for malaria control. PMID:24143250
Dimensionless, Scale Invariant, Edge Weight Metric for the Study of Complex Structural Networks
Colon-Perez, Luis M.; Spindler, Caitlin; Goicochea, Shelby; Triplett, William; Parekh, Mansi; Montie, Eric; Carney, Paul R.; Price, Catherine; Mareci, Thomas H.
2015-01-01
High spatial and angular resolution diffusion weighted imaging (DWI) with network analysis provides a unique framework for the study of brain structure in vivo. DWI-derived brain connectivity patterns are best characterized with graph theory using an edge weight to quantify the strength of white matter connections between gray matter nodes. Here a dimensionless, scale-invariant edge weight is introduced to measure node connectivity. This edge weight metric provides reasonable and consistent values over any size scale (e.g. rodents to humans) used to quantify the strength of connection. Firstly, simulations were used to assess the effects of tractography seed point density and random errors in the estimated fiber orientations; with sufficient signal-to-noise ratio (SNR), edge weight estimates improve as the seed density increases. Secondly to evaluate the application of the edge weight in the human brain, ten repeated measures of DWI in the same healthy human subject were analyzed. Mean edge weight values within the cingulum and corpus callosum were consistent and showed low variability. Thirdly, using excised rat brains to study the effects of spatial resolution, the weight of edges connecting major structures in the temporal lobe were used to characterize connectivity in this local network. The results indicate that with adequate resolution and SNR, connections between network nodes are characterized well by this edge weight metric. Therefore this new dimensionless, scale-invariant edge weight metric provides a robust measure of network connectivity that can be applied in any size regime. PMID:26173147
Soller, Jeffrey A; Eftim, Sorina E; Nappier, Sharon P
2018-01-01
Understanding pathogen risks is a critically important consideration in the design of water treatment, particularly for potable reuse projects. As an extension to our published microbial risk assessment methodology to estimate infection risks associated with Direct Potable Reuse (DPR) treatment train unit process combinations, herein, we (1) provide an updated compilation of pathogen density data in raw wastewater and dose-response models; (2) conduct a series of sensitivity analyses to consider potential risk implications using updated data; (3) evaluate the risks associated with log credit allocations in the United States; and (4) identify reference pathogen reductions needed to consistently meet currently applied benchmark risk levels. Sensitivity analyses illustrated changes in cumulative annual risks estimates, the significance of which depends on the pathogen group driving the risk for a given treatment train. For example, updates to norovirus (NoV) raw wastewater values and use of a NoV dose-response approach, capturing the full range of uncertainty, increased risks associated with one of the treatment trains evaluated, but not the other. Additionally, compared to traditional log-credit allocation approaches, our results indicate that the risk methodology provides more nuanced information about how consistently public health benchmarks are achieved. Our results indicate that viruses need to be reduced by 14 logs or more to consistently achieve currently applied benchmark levels of protection associated with DPR. The refined methodology, updated model inputs, and log credit allocation comparisons will be useful to regulators considering DPR projects and design engineers as they consider which unit treatment processes should be employed for particular projects. Published by Elsevier Ltd.
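A rough Monte Carlo sketch of the kind of annual-risk calculation described above, assuming an exponential dose-response model and a lognormal raw-wastewater density; all numeric inputs below are illustrative stand-ins, not the study's fitted values.

```python
import numpy as np

rng = np.random.default_rng(1)

def annual_infection_risk(raw_density, log_reduction, dose_response_k,
                          volume_l=2.0, n_days=365, n_sim=10_000):
    """Monte Carlo annual probability of infection for one reference pathogen.

    raw_density: callable drawing pathogen counts per litre in raw wastewater;
    an exponential dose-response model P = 1 - exp(-k * dose) is assumed.
    """
    daily = np.empty((n_sim, n_days))
    for day in range(n_days):
        conc = raw_density(n_sim) * 10.0 ** (-log_reduction)  # post-treatment
        dose = conc * volume_l                                # ingested per day
        daily[:, day] = 1.0 - np.exp(-dose_response_k * dose)
    annual = 1.0 - np.prod(1.0 - daily, axis=1)
    return np.median(annual), np.percentile(annual, 95)

# Illustrative inputs only: lognormal raw density, 14-log virus reduction.
virus = lambda n: rng.lognormal(mean=np.log(1e4), sigma=1.5, size=n)
print(annual_infection_risk(virus, log_reduction=14, dose_response_k=0.1))
```

Comparing an upper percentile of the simulated annual risk against a benchmark (e.g., 1 in 10,000 infections per year) is what makes this approach more nuanced than a pass/fail log-credit tally.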
NASA Astrophysics Data System (ADS)
Cucchi, K.; Kawa, N.; Hesse, F.; Rubin, Y.
2017-12-01
In order to reduce uncertainty in the prediction of subsurface flow and transport processes, practitioners should use all data available. However, classic inverse modeling frameworks typically make use only of the information contained in in-situ field measurements to provide estimates of hydrogeological parameters. Such hydrogeological information about an aquifer is difficult and costly to acquire. In this data-scarce context, the transfer of ex-situ information from previously investigated sites can be critical for improving predictions by better constraining the estimation procedure. Bayesian inverse modeling provides a coherent framework to represent such ex-situ information through the prior distribution and to combine it with in-situ information from the target site. In this study, we present an innovative data-driven approach for defining such informative priors for hydrogeological parameters at the target site. Our approach consists of two steps, both relying on statistical and machine learning methods. The first step is data selection; it consists of selecting sites similar to the target site. We use clustering methods to select similar sites based on observable hydrogeological features. The second step is data assimilation; it consists of assimilating data from the selected similar sites into the informative prior. We use a Bayesian hierarchical model to account for inter-site variability and to allow for the assimilation of multiple types of site-specific data. We present the application and validation of these methods on an established database of hydrogeological parameters. Data and methods are implemented in the form of an open-source R package, thereby facilitating easy use by other practitioners.
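A compact sketch of the two-step construction under strong simplifying assumptions: KMeans on observable site features for the selection step, and a moment-based normal model in place of the full Bayesian hierarchical fit; names are illustrative and do not mirror the R package's API.

```python
import numpy as np
from sklearn.cluster import KMeans

def informative_prior(site_features, target_features, site_log_K, n_clusters=3):
    """Build a normal prior for log10 hydraulic conductivity at a new site.

    site_features: (n_sites, p) observable descriptors; site_log_K: list of
    per-site arrays of log10-K measurements from previously studied sites.
    """
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(site_features)
    label = km.predict(np.asarray(target_features).reshape(1, -1))[0]
    similar = [d for d, lab in zip(site_log_K, km.labels_) if lab == label]

    site_means = np.array([d.mean() for d in similar])
    within_var = np.mean([d.var(ddof=1) for d in similar])
    between_var = site_means.var(ddof=1) if len(site_means) > 1 else within_var
    # Prior for an unobserved site: mean of similar sites, with a variance
    # that carries both inter-site and intra-site variability.
    return site_means.mean(), np.sqrt(between_var + within_var)
```

The hierarchical component matters: collapsing all measurements into one pooled sample would understate the prior variance for a new, unobserved site.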
Porto, Paolo; Walling, Des E
2012-04-01
Soil erosion represents an important threat to the long-term sustainability of agriculture and forestry in many areas of the world, including southern Italy. Numerous models and prediction procedures have been developed to estimate rates of soil loss and soil redistribution, based on the local topography, hydrometeorology, soil type and land management. However, there remains an important need for empirical measurements to provide a basis for validating and calibrating such models and prediction procedures as well as to support specific investigations and experiments. In this context, erosion plots provide useful information on gross rates of soil loss, but are unable to document the efficiency of the onward transfer of the eroded sediment within a field and towards the stream system, and thus net rates of soil loss from larger areas. The use of environmental radionuclides, particularly caesium-137 ((137)Cs) and excess lead-210 ((210)Pb(ex)), as a means of estimating rates of soil erosion and deposition has attracted increasing attention in recent years and the approach has now been recognised as possessing several important advantages. In order to provide further confirmation of the validity of the estimates of longer-term erosion and soil redistribution rates provided by (137)Cs and (210)Pb(ex) measurements, there is a need for studies aimed explicitly at validating the results obtained. In this context, the authors directed attention to the potential offered by a set of small erosion plots located near Reggio Calabria in southern Italy, for validating estimates of soil loss provided by (137)Cs and (210)Pb(ex) measurements. A preliminary assessment suggested that, notwithstanding the limitations and constraints involved, a worthwhile investigation aimed at validating the use of (137)Cs and (210)Pb(ex) measurements to estimate rates of soil loss from cultivated land could be undertaken. The results demonstrate a close consistency between the measured rates of soil loss and the estimates provided by the (137)Cs and (210)Pb(ex) measurements and can therefore be seen as validating the use of these fallout radionuclides to document soil erosion rates in that environment. Further studies are clearly required to exploit other opportunities for validation in contrasting environments and under different land use conditions. Copyright © 2011 Elsevier Ltd. All rights reserved.
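As background, the simplest published conversion from a (137)Cs inventory deficit to an erosion rate is the proportional model sketched below; the parameter values are placeholders, and studies like this one typically also apply more refined mass-balance conversions.

```python
def proportional_model_erosion(inventory, reference_inventory,
                               bulk_density=1300.0, plough_depth=0.25,
                               years_since_1963=50.0):
    """Proportional-model soil loss from a 137Cs inventory (Bq m-2).

    Assumes the fractional 137Cs loss equals the fractional loss of the
    plough layer; bulk density in kg m-3, plough depth in m.
    Returns t ha-1 yr-1.
    """
    x = max(0.0, (reference_inventory - inventory) / reference_inventory)
    mass_depth = bulk_density * plough_depth       # kg m-2 in the plough layer
    soil_loss = x * mass_depth / years_since_1963  # kg m-2 yr-1
    return 10.0 * soil_loss                        # 1 kg m-2 = 10 t ha-1
```

Validation against plot measurements, as in this study, is what justifies trusting such conversions at unmonitored sites.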
The retention time of inorganic mercury in the brain — A systematic review of the evidence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rooney, James P.K., E-mail: jrooney@rcsi.ie
2014-02-01
Reports from human case studies indicate a half-life for inorganic mercury in the brain in the order of years—contradicting older radioisotope studies that estimated half-lives in the order of weeks to months in duration. This study systematically reviews available evidence on the retention time of inorganic mercury in humans and primates to better understand this conflicting evidence. A broad search strategy was used to capture 16,539 abstracts on the Pubmed database. Abstracts were screened to include only study types containing relevant information. 131 studies of interest were identified. Only 1 primate study made a numeric estimate for the half-life of inorganic mercury (227–540 days). Eighteen human mercury poisoning cases were followed up long term including autopsy. Brain inorganic mercury concentrations at death were consistent with a half-life of several years or longer. 5 radionuclide studies were found, one of which estimated head half-life (21 days). This estimate has sometimes been misinterpreted to be equivalent to brain half-life—which ignores several confounding factors including limited radioactive half-life and radioactive decay from surrounding tissues including circulating blood. No autopsy cohort study estimated a half-life for inorganic mercury, although some noted bioaccumulation of brain mercury with age. Modelling studies provided some extreme estimates (69 days vs 22 years). Estimates from modelling studies appear sensitive to model assumptions; however, predictions based on a long half-life (27.4 years) are consistent with autopsy findings. In summary, shorter estimates of half-life are not supported by evidence from animal studies, human case studies, or modelling studies based on appropriate assumptions. Evidence from such studies points to a half-life of inorganic mercury in human brains of several years to several decades. This finding carries important implications for pharmacokinetic modelling of mercury and potentially for the regulatory toxicology of mercury.
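For reference, the single-compartment arithmetic behind such half-life estimates is sketched below; it assumes first-order clearance, which the long-retention evidence summarized above suggests is a poor description of brain inorganic mercury.

```python
import numpy as np

def half_life_from_retention(c0, ct, elapsed_days):
    """Half-life implied by two concentrations under first-order clearance.

    C(t) = C0 * exp(-lambda * t), so t_half = ln(2) / lambda.
    """
    lam = np.log(c0 / ct) / elapsed_days
    return np.log(2.0) / lam

# e.g. only 20% clearance over 5 years implies a half-life of ~15.5 years
print(half_life_from_retention(1.0, 0.8, 5 * 365.25) / 365.25)
```

Run in reverse, the same arithmetic shows why autopsy concentrations measured years after exposure are incompatible with week-to-month half-lives.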
Time-Dependent Moment Tensors of the First Four Source Physics Experiments (SPE) Explosions
NASA Astrophysics Data System (ADS)
Yang, X.
2015-12-01
We use mainly vertical-component geophone data within 2 km of the epicenter to invert for time-dependent moment tensors of the first four SPE explosions: SPE-1, SPE-2, SPE-3 and SPE-4Prime. We employ a one-dimensional (1D) velocity model developed from P- and Rg-wave travel times for Green's function calculations. The attenuation structure of the model is developed from P- and Rg-wave amplitudes. We select data for the inversion based on the criterion that they show travel times and amplitude behavior consistent with those predicted by the 1D model. Due to the limited azimuthal coverage of the sources and the mostly vertical-component-only nature of the dataset, only the long-period, diagonal components of the moment tensors are well constrained. Nevertheless, the moment tensors, particularly their isotropic components, provide reasonable estimates of the long-period source amplitudes as well as estimates of corner frequencies, albeit with larger uncertainties. The estimated corner frequencies, however, are consistent with estimates from ratios of seismogram spectra from different explosions. These long-period source amplitudes and corner frequencies cannot be fit by classical P-wave explosion source models. The results motivate the development of new P-wave source models suitable for these chemical explosions. To that end, we fit the inverted moment-tensor spectra by modifying the classical explosion model using regressions of estimated source parameters. Although the number of data points used in the regression is small, the approach suggests a way forward for new-model development as more data are collected.
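A bare-bones sketch of the linear time-domain inversion step, assuming precomputed Green's functions; the shapes below are hypothetical and gloss over the filtering, weighting, and data selection a real SPE analysis requires.

```python
import numpy as np

def invert_moment_tensor(G, d):
    """Linear moment-tensor inversion, d = G m.

    G: (n_samples, 6) matrix mapping the six independent moment-tensor
    components to stacked waveform samples; d: (n_samples,) observed data.
    With mostly vertical, azimuthally limited data, the off-diagonal
    components are poorly resolved, which appears as large entries in cov.
    """
    m, *_ = np.linalg.lstsq(G, d, rcond=None)
    resid = d - G @ m
    sigma2 = resid @ resid / (len(d) - 6)
    cov = sigma2 * np.linalg.inv(G.T @ G)
    return m, cov
```

The isotropic part, tr(M)/3, is the best-constrained combination here, consistent with the abstract's emphasis on long-period isotropic amplitudes.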
Comparison of Sap Flow- and White's Equation-Based Estimates of Groundwater Evapotranspiration
NASA Astrophysics Data System (ADS)
Widdowson, M.; Harding, B.
2017-12-01
Estimates of evapotranspiration (ET) of groundwater are useful at sites where phytoremediation is a component of the remedial strategy and the management of contaminant plumes. Methods to quantify direct ET of groundwater rely on multiple lines of evidence but are often limited to the measurement of water table levels and analysis of diurnal trends (e.g., White's Equation and related derivative methods). In this study, sap flow was collected and combined with monitoring of groundwater levels during the entire growing season at a site located in the Atlantic Coastal Plain (Georgia, USA). Our objective was to quantify temporal variations in estimates of groundwater ET in a phytoremediation test plot consisting of approximately 370 trees at a creosote-contaminated source zone. Trees ranging from 8-cm to 9-cm in diameter were instrumented with thermal dissipation sap velocity probes connected to a recording data logger. Wells and piezometers screened across the water table located within and around the periphery of the stand of trees were instrumented with recording pressure transducers. Sap flow estimates using the Granier method varied from 1 to 3 L/d per tree in dry months to 1 to 15 L/d per tree during periods of frequent precipitation and high ET potential. Results show no clear or consistent relationship between estimates of groundwater ET derived from water table fluctuations and sap flow results during the entire period of performance. However, this approach provides an upper and lower bound of groundwater consumption and concomitant plant uptake of light-weight polycyclic aromatic hydrocarbons.
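A short sketch of the Granier (thermal dissipation) conversion from probe temperature differences to per-tree flow; the original empirical calibration coefficients are assumed to apply, which in practice warrants species-specific checks.

```python
import numpy as np

def granier_sap_flow(dT, dT_max, sapwood_area_cm2):
    """Granier sap flow in litres per day.

    K = (dT_max - dT) / dT, with dT the probe temperature difference and
    dT_max its zero-flow (nighttime) value; the classic calibration
    Fd = 118.99e-6 * K**1.231 (m3 m-2 s-1) is assumed.
    """
    K = (dT_max - dT) / dT
    flux_density = 118.99e-6 * np.maximum(K, 0.0) ** 1.231  # m3 m-2 s-1
    area_m2 = sapwood_area_cm2 * 1e-4
    return flux_density * area_m2 * 86400.0 * 1000.0        # L per day
```

Summing such per-tree values over the plot gives the stand transpiration that is compared against the water-table-fluctuation (White's Equation) estimate of groundwater ET.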
3D shape reconstruction of specular surfaces by using phase measuring deflectometry
NASA Astrophysics Data System (ADS)
Zhou, Tian; Chen, Kun; Wei, Haoyun; Li, Yan
2016-10-01
The existing estimation methods for recovering height information from surface gradients are mainly divided into Modal and Zonal techniques. Since specular surfaces used in industry often have complex shapes and large areas, consideration must be given both to improving measurement accuracy and to accelerating on-line processing speed, which is beyond the capacity of existing estimation methods. Incorporating the Modal and Zonal approaches into a unifying scheme, we introduce an improved 3D shape reconstruction method for specular surfaces based on Phase Measuring Deflectometry. The Modal estimation is first implemented to derive coarse height information for the measured surface as initial iteration values. The real shape is then recovered using a modified Zonal wave-front reconstruction algorithm. By combining the advantages of Modal and Zonal estimations, the proposed method simultaneously achieves consistently high accuracy and rapid convergence. Moreover, the iterative process, based on an advanced successive over-relaxation technique, shows consistent rejection of measurement errors, guaranteeing stability and robustness in practical applications. Both simulation and experimental measurement demonstrate the validity and efficiency of the proposed method. According to the experimental results, the computation time decreases by approximately 74.92% relative to the Zonal estimation, and the surface error is about 6.68 μm with 391×529 reconstruction points for an experimentally measured spherical mirror. In general, this method converges quickly with high accuracy, providing an efficient, stable and real-time approach for the shape reconstruction of specular surfaces in practical situations.
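An illustrative Zonal step in this Modal-plus-Zonal spirit: a plain successive over-relaxation (SOR) solve of the gradient-derived Poisson equation, seeded with a coarse Modal estimate; the paper's advanced SOR variant and careful boundary handling are simplified to replicated edges here.

```python
import numpy as np

def zonal_reconstruct(gx, gy, z0, omega=1.8, n_iter=500, h=1.0):
    """Height integration from gradient fields by SOR.

    Solves laplacian(z) = d(gx)/dx + d(gy)/dy, starting from the Modal
    estimate z0 as the initial iterate. omega is the over-relaxation
    factor (1 < omega < 2).
    """
    z = z0.astype(float).copy()
    div = np.gradient(gx, h, axis=1) + np.gradient(gy, h, axis=0)
    n, m = z.shape
    for _ in range(n_iter):
        for i in range(n):
            for j in range(m):
                up = z[i - 1, j] if i > 0 else z[i + 1, j]
                down = z[i + 1, j] if i < n - 1 else z[i - 1, j]
                left = z[i, j - 1] if j > 0 else z[i, j + 1]
                right = z[i, j + 1] if j < m - 1 else z[i, j - 1]
                gs = 0.25 * (up + down + left + right - h * h * div[i, j])
                z[i, j] = (1.0 - omega) * z[i, j] + omega * gs
    return z
```

Seeding with the Modal estimate is exactly what cuts the iteration count: SOR converges quickly when the initial iterate is already close to the solution.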
Design of Low-Cost Vehicle Roll Angle Estimator Based on Kalman Filters and an IoT Architecture.
Garcia Guzman, Javier; Prieto Gonzalez, Lisardo; Pajares Redondo, Jonatan; Sanz Sanchez, Susana; Boada, Beatriz L
2018-06-03
In recent years, there have been many advances in vehicle technologies based on the efficient use of real-time data provided by embedded sensors. Some of these technologies can help avoid a crash or reduce its severity, such as Roll Stability Control (RSC) systems for commercial vehicles. In RSC, several critical variables, such as sideslip or roll angle, can only be measured directly using expensive equipment; such devices would increase the price of commercial vehicles. Nevertheless, sideslip and roll angle can be estimated using MEMS sensors in combination with data fusion algorithms. The objectives stated for this research work consist of integrating roll angle estimators based on Linear and Unscented Kalman filters, evaluating the precision of the results obtained, and determining whether hard real-time processing constraints can be met when embedding this kind of estimator in IoT architectures based on low-cost equipment deployable in commercial vehicles. An experimental testbed composed of a van with two sets of low-cost kits was set up, the first one including a Raspberry Pi 3 Model B and the other an Intel Edison System on Chip. This experimental environment was tested under different conditions for comparison. The results obtained from the low-cost experimental kits, based on IoT architectures and including estimators based on Kalman filters, provide accurate roll angle estimation. These results also show that the processing time needed to acquire the data and execute the Kalman filter estimations fulfills hard real-time constraints.
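A minimal linear Kalman filter of the kind evaluated in the paper, fusing a roll-rate gyro with an accelerometer-derived roll measurement; the two-state layout (roll plus gyro bias) and the noise levels are illustrative assumptions, not the authors' tuned design.

```python
import numpy as np

def roll_angle_kf(gyro_rate, accel_roll, dt=0.01, q=1e-4, r=1e-2):
    """Fuse gyro roll rate (rad/s) with accelerometer roll (rad).

    State x = [roll, gyro_bias]; the gyro drives the prediction and the
    noisy accelerometer roll is the measurement.
    """
    F = np.array([[1.0, -dt], [0.0, 1.0]])  # bias subtracts from the rate
    B = np.array([dt, 0.0])
    H = np.array([[1.0, 0.0]])
    Q = q * np.eye(2)
    x, P = np.zeros(2), np.eye(2)
    out = []
    for rate, meas in zip(gyro_rate, accel_roll):
        x = F @ x + B * rate                 # predict using the gyro
        P = F @ P @ F.T + Q
        y = meas - H @ x                     # innovation
        S = H @ P @ H.T + r
        K = (P @ H.T) / S                    # Kalman gain, shape (2, 1)
        x = x + (K * y).ravel()
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return np.array(out)
```

On an embedded board, the per-sample cost of these few 2x2 operations is what makes the hard real-time budget achievable; the Unscented variant replaces this predict/update algebra with sigma-point propagation at a higher cost.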
Letcher, Benjamin H; Schueller, Paul; Bassar, Ronald D; Nislow, Keith H; Coombs, Jason A; Sakrejda, Krzysztof; Morrissey, Michael; Sigourney, Douglas B; Whiteley, Andrew R; O'Donnell, Matthew J; Dubreuil, Todd L
2015-03-01
Modelling the effects of environmental change on populations is a key challenge for ecologists, particularly as the pace of change increases. Currently, modelling efforts are limited by difficulties in establishing robust relationships between environmental drivers and population responses. We developed an integrated capture-recapture state-space model to estimate the effects of two key environmental drivers (stream flow and temperature) on demographic rates (body growth, movement and survival) using a long-term (11 years), high-resolution (individually tagged, sampled seasonally) data set of brook trout (Salvelinus fontinalis) from four sites in a stream network. Our integrated model provides an effective context within which to estimate environmental driver effects because it takes full advantage of data by estimating (latent) state values for missing observations, because it propagates uncertainty among model components and because it accounts for the major demographic rates and interactions that contribute to annual survival. We found that stream flow and temperature had strong effects on brook trout demography. Some effects, such as reduction in survival associated with low stream flow and high temperature during the summer season, were consistent across sites and age classes, suggesting that they may serve as robust indicators of vulnerability to environmental change. Other survival effects varied across ages, sites and seasons, indicating that flow and temperature may not be the primary drivers of survival in those cases. Flow and temperature also affected body growth rates; these responses were consistent across sites but differed dramatically between age classes and seasons. Finally, we found that tributary and mainstem sites responded differently to variation in flow and temperature. Annual survival (combination of survival and body growth across seasons) was insensitive to body growth and was most sensitive to flow (positive) and temperature (negative) in the summer and fall. These observations, combined with our ability to estimate the occurrence, magnitude and direction of fish movement between these habitat types, indicated that heterogeneity in response may provide a mechanism providing potential resilience to environmental change. Given that the challenges we faced in our study are likely to be common to many intensive data sets, the integrated modelling approach could be generally applicable and useful. Published 2014. This article is a U.S. Government work and is in the public domain in the USA.
Jin, Dongliang; Coasne, Benoit
2017-10-24
Different molecular simulation strategies are used to assess the stability of methane hydrate under various temperature and pressure conditions. First, using two water molecular models, free energy calculations consisting of the Einstein molecule approach in combination with semigrand Monte Carlo simulations are used to determine the pressure-temperature phase diagram of methane hydrate. With these calculations, we also estimate the chemical potentials of water and methane and methane occupancy at coexistence. Second, we also consider two other advanced molecular simulation techniques that allow probing the phase diagram of methane hydrate: the direct coexistence method in the Grand Canonical ensemble and the hyperparallel tempering Monte Carlo method. These two direct techniques are found to provide stability conditions that are consistent with the pressure-temperature phase diagram obtained using rigorous free energy calculations. The phase diagram obtained in this work, which is found to be consistent with previous simulation studies, is close to its experimental counterpart provided the TIP4P/Ice model is used to describe the water molecule.
Intercomparison Between in situ and AVHRR Polar Pathfinder-Derived Surface Albedo over Greenland
NASA Technical Reports Server (NTRS)
Stroeve, Julienne C.; Box, Jason E.; Fowler, Charles; Haran, Terence; Key, Jeffery
2001-01-01
The Advanced Very High Resolution Radiometer (AVHRR) Polar Pathfinder (APP) data set provides the first long time series of consistent, calibrated surface albedo and surface temperature data for the polar regions. Validations of these products have consisted of individual studies that analyzed algorithm performance for limited regions and/or time periods. This paper reports on comparisons made between the APP-derived surface albedo and that measured at fourteen automatic weather stations (AWS) around the Greenland ice sheet from January 1997 to August 1998. Results show that satellite-derived surface albedo values are on average 10% less than those measured by the AWS. However, the station measurements tend to be biased high by about 4%, and thus the differences in absolute albedo may be smaller (e.g., 6%). In regions of the ice sheet where the albedo variability is small, such as the dry snow facies, the APP albedo uncertainty exceeds the natural variability. Further work is needed to improve the absolute accuracy of the APP-derived surface albedo. Even so, the data provide temporally and spatially consistent estimates of the Greenland ice sheet albedo.
NASA Astrophysics Data System (ADS)
Lin, Pei-Sheng; Rosset, Denis; Zhang, Yanbao; Bancal, Jean-Daniel; Liang, Yeong-Cherng
2018-03-01
The device-independent approach to physics is one where conclusions are drawn directly from the observed correlations between measurement outcomes. In quantum information, this approach allows one to make strong statements about the properties of the underlying systems or devices solely via the observation of Bell-inequality-violating correlations. However, since one can only perform a finite number of experimental trials, statistical fluctuations necessarily accompany any estimation of these correlations. Consequently, an important gap remains between the many theoretical tools developed for the asymptotic scenario and the experimentally obtained raw data. In particular, a physical and concurrently practical way to estimate the underlying quantum distribution has so far remained elusive. Here, we show that the natural analogs of the maximum-likelihood estimation technique and the least-square-error estimation technique in the device-independent context result in point estimates of the true distribution that are physical, unique, computationally tractable, and consistent. They thus serve as sound algorithmic tools allowing one to bridge the aforementioned gap. As an application, we demonstrate how such estimates of the underlying quantum distribution can be used to provide, in certain cases, trustworthy estimates of the amount of entanglement present in the measured system. In stark contrast to existing approaches to device-independent parameter estimations, our estimation does not require the prior knowledge of any Bell inequality tailored for the specific property and the specific distribution of interest.
Arcila, Dahiana; Alexander Pyron, R; Tyler, James C; Ortí, Guillermo; Betancur-R, Ricardo
2015-01-01
Time-calibrated phylogenies based on molecular data provide a framework for comparative studies. Calibration methods to combine fossil information with molecular phylogenies are, however, under active development, often generating disagreement about the best way to incorporate paleontological data into these analyses. This study provides an empirical comparison of the most widely used approach based on node-dating priors for relaxed clocks implemented in the programs BEAST and MrBayes, with two recently proposed improvements: one using a new fossilized birth-death process model for node dating (implemented in the program DPPDiv), and the other using a total-evidence or tip-dating method (implemented in MrBayes and BEAST). These methods are applied herein to tetraodontiform fishes, a diverse group of living and extinct taxa that features one of the most extensive fossil records among teleosts. Previous estimates of time-calibrated phylogenies of tetraodontiforms using node-dating methods reported disparate estimates for their age of origin, ranging from the late Jurassic to the early Paleocene (ca. 150-59 Ma). We analyzed a comprehensive dataset with 16 loci and 210 morphological characters, including 131 taxa (95 extant and 36 fossil species) representing all families of fossil and extant tetraodontiforms, under different molecular clock calibration approaches. Results from node-dating methods produced consistently younger ages than the tip-dating approaches. The older ages inferred by tip dating imply an unlikely early-late Jurassic (ca. 185-119 Ma) origin for this order and the existence of extended ghost lineages in their fossil record. Node-based methods, by contrast, produce time estimates that are more consistent with the stratigraphic record, suggesting a late Cretaceous (ca. 86-96 Ma) origin. We show that the precision of clade age estimates using tip dating increases with the number of fossils analyzed and with the proximity of fossil taxa to the node under assessment. This study suggests that current implementations of tip dating may overestimate ages of divergence in calibrated phylogenies. It also provides a comprehensive phylogenetic framework for tetraodontiform systematics and future comparative studies. Copyright © 2014 Elsevier Inc. All rights reserved.
Thermal radiative properties: Nonmetallic solids.
NASA Technical Reports Server (NTRS)
Touloukian, Y. S.; Dewitt, D. P.
1972-01-01
The volume consists of a text on theory, estimation, and measurement, together with its bibliography, the main body of numerical data and its references, and the material index. The text material assumes a role complementary to the main body of numerical data. The physics and basic concepts of thermal radiation are discussed in detail, focusing attention on treatment of nonmetallic materials: theory, estimation, and methods of measurement. Numerical data is presented in a comprehensive manner. The scope of coverage includes the nonmetallic elements and their compounds, intermetallics, polymers, glasses, and minerals. Analyzed data graphs provide an evaluative review of the data. All data have been obtained from their original sources, and each data set is so referenced.
NASA Astrophysics Data System (ADS)
Yoshida, Tsuyoshi; Saito, Naoaki; Ohmura, Hideki
2018-03-01
Intense (5.0 × 10¹² W cm⁻²) nanosecond Fourier-synthesized laser fields consisting of fundamental, second-, third-, and fourth-harmonic light generated by an interferometer-free Fourier-synthesized laser field generator induce orientation-selective ionization based on directionally asymmetric molecular tunneling ionization (TI). The laser field generator ensures adjustment-free operation, high stability, and high reproducibility. Phase-sensitive, orientation-selective molecular TI provides a simple way to estimate the relative phase differences between the fundamental light and each harmonic by data-fitting analysis. This application of Fourier-synthesized laser fields will facilitate not only lightwave engineering but also the control of matter.
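The synthesized field itself is simple to write down; a small sketch with illustrative amplitudes and phases follows (the relative phases are exactly the quantities the ionization asymmetry constrains).

```python
import numpy as np

def fourier_synthesized_field(t, amplitudes, phases, omega):
    """Field made of the fundamental plus its 2nd-4th harmonics.

    Adjusting the relative phases reshapes the temporal asymmetry of
    the waveform, which drives directionally asymmetric tunneling
    ionization; amplitudes and phases here are free illustrative inputs.
    """
    return sum(a * np.cos((n + 1) * omega * t + p)
               for n, (a, p) in enumerate(zip(amplitudes, phases)))
```

For example, shifting the phase of the second harmonic by pi mirrors the waveform in time, reversing which molecular orientation ionizes preferentially.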
Atmospheric halocarbons - A discussion with emphasis on chloroform
NASA Technical Reports Server (NTRS)
Yung, Y. L.; Mcelroy, M. B.; Wofsy, S. C.
1975-01-01
Bleaching of paper pulp represents a major industrial use of chlorine and could provide an environmentally significant source of atmospheric halocarbons. The related global production of chloroform is estimated at 300,000 ton per year and there could be additional production associated with atmospheric decomposition of perchloroethylene. Estimates are given for the production of methyl chloride, methyl bromide and methyl iodide, 5.2 million, 77 thousand, and 740 thousand ton per year respectively. The relative yields of CH3Cl, CH3Br and CH3I are consistent with the hypothesis of a marine biological source for these compounds. Concentrations of other halocarbons observed in the atmosphere appear to indicate industrial sources.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lelic, Muhidin; Avramovic, Bozidar; Jiang, Tony
The objective of this project was to demonstrate the functionality and performance of a Direct Non-iterative State Estimator (DNSE) integrated with NYPA's Energy Management System (EMS) and with the enhanced Real Time Dynamics Monitoring System (RTDMS) synchrophasor platform from Electric Power Group (EPG). DNSE is designed to overcome a major obstacle to the operational use of synchrophasor management systems (SPMS) by providing to SPMS applications a consistent and complete synchrophasor data foundation, in the same way that a traditional EMS State Estimator (SE) provides one to EMS applications. Specifically, DNSE is designed to use synchrophasor measurements collected by a central PDC, Supervisory Control and Data Acquisition (SCADA) measurements, and the Energy Management System (EMS) network model to obtain the complete state of the utility's operating model at rates close to the synchrophasor data rates. In this way, the system is comprehensive: it covers not only the part of the network that is "visible" via synchrophasors, but also the part that is only "visible" through SCADA measurements. Visualization needs associated with the use of DNSE results are fulfilled through a suitably enhanced Real Time Dynamics Monitoring System (RTDMS), with the enhancements implemented by EPG. This project had the following goals in mind: to advance the deployment of a commercial-grade DNSE software application that relies on synchrophasor and SCADA data; to apply DNSE at other utilities, addressing a generic and fundamental need for "clean" operational data for synchrophasor applications; to provide means for "live" estimated data access by control system operators; to enhance the potential for situational awareness through full system operational model coverage; and to achieve a sub-second execution rate of the Direct Non-iterative State Estimator, eventually at a near-phasor-data-rate execution speed, i.e., < 0.1 sec. Anticipated benefits from this project are: enhanced reliability and improvements in the economic efficiency of bulk power system planning and operations; providing "clean" data to other synchrophasor applications; enhancement of situational awareness by providing the full operational model updated at near synchrophasor rate; a production-grade software tool that incorporates synchrophasor and SCADA data; and a basis for the development of next-generation monitoring and control applications, based on both SCADA and PMU data. The Quanta Technology (QT) team worked in collaboration with Electric Power Group (EPG), whose team enhanced its commercial Real Time Dynamics Monitoring System (RTDMS) to accommodate the requirements posed by the DNSE application. EPG also provided its ePDC and Model-less Data Conditioning (PDVC) software for integration with DNSE+. QT developed the system requirements for DNSE, developed the system architecture, and defined the interfaces between internal DNSE components. The core DNSE algorithm with all surrounding interfaces was named DNSE+. Since the DNSE development was done in a simulated system environment, QT used its PMU simulator, which was enhanced during this project, for development and factory acceptance testing (FAT). SCADA data at this stage was simulated by commercial PSS/e software. The outputs of DNSE are estimates of system states in C37.118-2 format, sent to RTDMS for further processing and display. As the number of these states is large, it was necessary to extend the C37.118-2 standard to accommodate large data sets.
This extension was implemented in RTDMS. The demonstration of the pre-production DNSE technology was carried out at NYPA using streaming field data from NYPA PMUs and from its RTUs through the SCADA system. NYPA provided the ICCP interface as well as the Common Information Model (CIM). The relevance of the DNSE+ application is that it provides state estimation of the power system based on a hybrid set of data, consisting of both available PMU data and SCADA measurements. As this is a direct, non-iterative method of calculating the system states, it does not suffer from the convergence issues that are a potential problem for conventional state estimators. Also, it can take any available PMU measurements, so it does not need the high percentage of PMU coverage required in the case of a Linear State Estimator. As the DNSE calculates synchrophasors of the system states (both phase and absolute value) at a sub-second rate, this application can provide a basis for the development of the next generation of applications based on both SCADA and PMU data.
DeGiorgio, Michael; Syring, John; Eckert, Andrew J; Liston, Aaron; Cronn, Richard; Neale, David B; Rosenberg, Noah A
2014-03-29
As it becomes increasingly possible to obtain DNA sequences of orthologous genes from diverse sets of taxa, species trees are frequently being inferred from multilocus data. However, the behavior of many methods for performing this inference has remained largely unexplored. Some methods have been proven to be consistent given certain evolutionary models, whereas others rely on criteria that, although appropriate for many parameter values, have peculiar zones of the parameter space in which they fail to converge on the correct estimate as data sets increase in size. Here, using North American pines, we empirically evaluate the behavior of 24 strategies for species tree inference using three alternative outgroups (72 strategies total). The data consist of 120 individuals sampled in eight ingroup species from subsection Strobus and three outgroup species from subsection Gerardianae, spanning ∼47 kilobases of sequence at 121 loci. Each "strategy" for inferring species trees consists of three features: a species tree construction method, a gene tree inference method, and a choice of outgroup. We use multivariate analysis techniques such as principal components analysis and hierarchical clustering to identify tree characteristics that are robustly observed across strategies, as well as to identify groups of strategies that produce trees with similar features. We find that strategies that construct species trees using only topological information cluster together and that strategies that use additional non-topological information (e.g., branch lengths) also cluster together. Strategies that utilize more than one individual within a species to infer gene trees tend to produce estimates of species trees that contain clades present in trees estimated by other strategies. Strategies that use the minimize-deep-coalescences criterion to construct species trees tend to produce species tree estimates that contain clades that are not present in trees estimated by the Concatenation, RTC, SMRT, STAR, and STEAC methods, and that in general are more balanced than those inferred by these other strategies. When constructing a species tree from a multilocus set of sequences, our observations provide a basis for interpreting differences in species tree estimates obtained via different approaches that have a two-stage structure in common, one step for gene tree estimation and a second step for species tree estimation. The methods explored here employ a number of distinct features of the data, and our analysis suggests that recovery of the same results from multiple methods that tend to differ in their patterns of inference can be a valuable tool for obtaining reliable estimates.
Landers, Mark N.
2013-01-01
The U.S. Geological Survey, in cooperation with the Gwinnett County Department of Water Resources, established a water-quality monitoring program during late 1996 to collect comprehensive, consistent, high-quality data for use by watershed managers. As of 2009, continuous streamflow and water-quality data as well as discrete water-quality samples were being collected for 14 watershed monitoring stations in Gwinnett County. This report provides statistical summaries of total suspended solids (TSS) concentrations for 730 stormflow and 710 base-flow water-quality samples collected between 1996 and 2009 for 14 watershed monitoring stations in Gwinnett County. Annual yields of TSS were estimated for each of the 14 watersheds using methods described in previous studies. TSS yield was estimated using linear, ordinary least-squares regression of TSS and explanatory variables of discharge, turbidity, season, date, and flow condition. The error of prediction for estimated yields ranged from 1 to 42 percent for the stations in this report; however, the actual overall uncertainty of the estimated yields cannot be less than that of the observed yields (± 15 to 20 percent). These watershed yields provide a basis for evaluation of how watershed characteristics, climate, and watershed management practices affect suspended sediment yield.
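A stripped-down version of such a load regression, assuming a log-linear OLS model with seasonal harmonics and a Duan smearing factor for back-transformation bias; the report's full set of explanatory variables (date and flow-condition terms) is omitted here.

```python
import numpy as np

def fit_tss_model(discharge, turbidity, doy, tss):
    """OLS for log(TSS) on log(discharge), log(turbidity), and season."""
    X = np.column_stack([
        np.ones_like(discharge),
        np.log(discharge),
        np.log(turbidity),
        np.sin(2 * np.pi * doy / 365.25),   # seasonal harmonic
        np.cos(2 * np.pi * doy / 365.25),
    ])
    beta, *_ = np.linalg.lstsq(X, np.log(tss), rcond=None)
    resid = np.log(tss) - X @ beta
    smear = np.mean(np.exp(resid))          # Duan smearing correction
    return beta, smear
```

Predicted concentrations are exp(X @ beta) * smear; multiplying by the continuous discharge record and integrating over the year yields the annual yield whose prediction error the report quantifies.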
Balancing Score Adjusted Targeted Minimum Loss-based Estimation
Lendle, Samuel David; Fireman, Bruce; van der Laan, Mark J.
2015-01-01
Adjusting for a balancing score is sufficient for bias reduction when estimating causal effects including the average treatment effect and effect among the treated. Estimators that adjust for the propensity score in a nonparametric way, such as matching on an estimate of the propensity score, can be consistent when the estimated propensity score is not consistent for the true propensity score but converges to some other balancing score. We call this property the balancing score property, and discuss a class of estimators that have this property. We introduce a targeted minimum loss-based estimator (TMLE) for a treatment-specific mean with the balancing score property that is additionally locally efficient and doubly robust. We investigate the new estimator’s performance relative to other estimators, including another TMLE, a propensity score matching estimator, an inverse probability of treatment weighted estimator, and a regression-based estimator in simulation studies. PMID:26561539
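A bare-bones generic TMLE for the treated mean with binary outcomes, using logistic regressions as stand-in initial estimators; this sketches only the targeting step and is not the balancing-score-adjusted estimator introduced in the paper.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import expit, logit
from sklearn.linear_model import LogisticRegression

def tmle_treated_mean(X, A, Y):
    """Targeted estimate of E[Y(1)] for binary Y and binary treatment A."""
    g = LogisticRegression(max_iter=1000).fit(X, A).predict_proba(X)[:, 1]
    Q1 = (LogisticRegression(max_iter=1000)
          .fit(X[A == 1], Y[A == 1])
          .predict_proba(X)[:, 1])               # initial E[Y | A=1, X]
    lq = logit(np.clip(Q1, 1e-6, 1 - 1e-6))
    H = A / np.clip(g, 1e-2, None)               # clever covariate
    # The fluctuation epsilon solves the efficient-score equation.
    eps = brentq(lambda e: np.sum(H * (Y - expit(lq + e))), -10, 10)
    return expit(lq + eps).mean()
```

The balancing score property discussed above concerns what happens when g converges to something other than the true propensity score; the targeting step itself is unchanged, only the interpretation of g differs.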
Moradi, Elaheh; Hallikainen, Ilona; Hänninen, Tuomo; Tohka, Jussi
2017-01-01
Rey's Auditory Verbal Learning Test (RAVLT) is a powerful neuropsychological tool for testing episodic memory, which is widely used for the cognitive assessment in dementia and pre-dementia conditions. Several studies have shown that an impairment in RAVLT scores reflect well the underlying pathology caused by Alzheimer's disease (AD), thus making RAVLT an effective early marker to detect AD in persons with memory complaints. We investigated the association between RAVLT scores (RAVLT Immediate and RAVLT Percent Forgetting) and the structural brain atrophy caused by AD. The aim was to comprehensively study to what extent the RAVLT scores are predictable based on structural magnetic resonance imaging (MRI) data using machine learning approaches as well as to find the most important brain regions for the estimation of RAVLT scores. For this, we built a predictive model to estimate RAVLT scores from gray matter density via elastic net penalized linear regression model. The proposed approach provided highly significant cross-validated correlation between the estimated and observed RAVLT Immediate (R = 0.50) and RAVLT Percent Forgetting (R = 0.43) in a dataset consisting of 806 AD, mild cognitive impairment (MCI) or healthy subjects. In addition, the selected machine learning method provided more accurate estimates of RAVLT scores than the relevance vector regression used earlier for the estimation of RAVLT based on MRI data. The top predictors were medial temporal lobe structures and amygdala for the estimation of RAVLT Immediate and angular gyrus, hippocampus and amygdala for the estimation of RAVLT Percent Forgetting. Further, the conversion of MCI subjects to AD in 3-years could be predicted based on either observed or estimated RAVLT scores with an accuracy comparable to MRI-based biomarkers.
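A compact sketch of the estimation pipeline, assuming a subjects-by-voxels gray matter matrix; the study's preprocessing and nested cross-validation are simplified to a single loop, and the variable names are illustrative.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import ElasticNetCV
from sklearn.model_selection import cross_val_predict

def estimate_cognitive_score(gm_density, scores):
    """Elastic-net regression from voxelwise gray matter to a test score."""
    model = ElasticNetCV(l1_ratio=[0.1, 0.5, 0.9], cv=5, max_iter=10_000)
    predicted = cross_val_predict(model, gm_density, scores, cv=10)
    r, _ = pearsonr(predicted, scores)
    model.fit(gm_density, scores)
    # Nonzero coefficients mark the regions driving the prediction.
    return r, np.flatnonzero(model.coef_)
```

The combined L1/L2 penalty is the point of the design choice: L1 alone tends to drop most of a group of correlated voxels, whereas the elastic net keeps spatially coherent clusters such as the medial temporal regions reported above.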
Prediction and assimilation of surf-zone processes using a Bayesian network: Part II: Inverse models
Plant, Nathaniel G.; Holland, K. Todd
2011-01-01
A Bayesian network model has been developed to simulate a relatively simple problem of wave propagation in the surf zone (detailed in Part I). Here, we demonstrate that this Bayesian model can provide both inverse modeling and data-assimilation solutions for predicting offshore wave heights and depth estimates given limited wave-height and depth information from an onshore location. The inverse method is extended to allow data assimilation using observational inputs that are not compatible with deterministic solutions of the problem. These inputs include sand bar positions (instead of bathymetry) and estimates of the intensity of wave breaking (instead of wave-height observations). Our results indicate that wave breaking information is essential to reduce prediction errors. In many practical situations, this information could be provided from a shore-based observer or from remote-sensing systems. We show that various combinations of the assimilated inputs significantly reduce the uncertainty in the estimates of water depths and wave heights in the model domain. Application of the Bayesian network model to new field data demonstrated significant predictive skill (R2 = 0.7) for the inverse estimate of a month-long time series of offshore wave heights. The Bayesian inverse results include uncertainty estimates that were shown to be most accurate when given uncertainty in the inputs (e.g., depth and tuning parameters). Furthermore, the inverse modeling was extended to directly estimate tuning parameters associated with the underlying wave-process model. The inverse estimates of the model parameters not only showed an offshore wave height dependence consistent with results of previous studies but the uncertainty estimates of the tuning parameters also explain previously reported variations in the model parameters.
The Nazca-South American convergence rate and the recurrence of the great 1960 Chilean earthquake
NASA Technical Reports Server (NTRS)
Stein, S.; Engeln, J. F.; Demets, C.; Gordon, R. G.; Woods, D.
1986-01-01
The seismic slip rate along the Chile Trench estimated from the slip in the great 1960 earthquake and the recurrence history of major earthquakes has been interpreted as consistent with the subduction rate of the Nazca plate beneath South America. The convergence rate, estimated from global relative plate motion models, depends significantly on closure of the Nazca - Antarctica - South America circuit. NUVEL-1, a new plate motion model which incorporates recently determined spreading rates on the Chile Rise, shows that the average convergence rate over the last three million years is slower than previously estimated. If this time-averaged convergence rate provides an appropriate upper bound for the seismic slip rate, either the characteristic Chilean subduction earthquake is smaller than the 1960 event, the average recurrence interval is greater than observed in the last 400 years, or both. These observations bear out the nonuniformity of plate motions on various time scales, the variability in characteristic subduction zone earthquake size, and the limitations of recurrence time estimates.
Kobau, R; Cui, W; Kadima, N; Zack, MM; Sajatovic, M; Kaiboriboon, K; Jobst, B
2015-01-01
Objective This study provides population-based estimates of psychosocial health among U.S. adults with epilepsy from the 2010 National Health Interview Survey. Methods Multinomial logistic regression was used to estimate the prevalence of the following measures of psychosocial health among adults with and without epilepsy: 1) the Kessler-6 scale of Serious Psychological Distress; 2) cognitive limitation, the extent of impairments associated with psychological problems, and work limitation; 3) social participation; and 4) the Patient-Reported Outcomes Measurement Information System Global Health scale. Results Compared with adults without epilepsy, adults with epilepsy, especially those with active epilepsy, reported significantly worse psychological health, more cognitive impairment, difficulty in participating in some social activities, and reduced health-related quality of life (HRQOL). Conclusions These disparities in psychosocial health in U.S. adults with epilepsy serve as baseline national estimates of their HRQOL, consistent with Healthy People 2020 national objectives on HRQOL. PMID:25305435
Robust w-Estimators for Cryo-EM Class Means
Huang, Chenxi; Tagare, Hemant D.
2016-01-01
A critical step in cryogenic electron microscopy (cryo-EM) image analysis is to calculate the average of all images aligned to a projection direction. This average, called the “class mean”, improves the signal-to-noise ratio in single particle reconstruction (SPR). The averaging step is often compromised because of outlier images of ice, contaminants, and particle fragments. Outlier detection and rejection in the majority of current cryo-EM methods is done using cross-correlation with a manually determined threshold. Empirical assessment shows that the performance of these methods is very sensitive to the threshold. This paper proposes an alternative: a “w-estimator” of the average image, which is robust to outliers and which does not use a threshold. Various properties of the estimator, such as consistency and influence function are investigated. An extension of the estimator to images with different contrast transfer functions (CTFs) is also provided. Experiments with simulated and real cryo-EM images show that the proposed estimator performs quite well in the presence of outliers. PMID:26841397
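The paper's w-estimator is not reproduced here, but the sketch below shows the general idea of a redescending, threshold-free robust average: weights decay smoothly to zero for images with large residual norms (a biweight-style w-function), so outliers are suppressed without any hard cutoff. The cutoff constant c is a tuning choice, not a value from the paper.

```python
import numpy as np

def robust_class_mean(images, c=2.5, n_iter=20):
    """Iteratively reweighted average of aligned images.

    Weights follow a Tukey-biweight-style function of each image's residual
    norm, with the redescending cutoff at c times the median residual norm,
    so outliers are down-weighted smoothly to zero without a hard threshold.
    """
    mu = images.mean(axis=0)
    for _ in range(n_iter):
        resid = (images - mu).reshape(len(images), -1)
        r = np.linalg.norm(resid, axis=1)
        u = r / (c * np.median(r))
        w = np.where(u < 1, (1 - u**2) ** 2, 0.0)
        mu = np.tensordot(w, images, axes=1) / w.sum()
    return mu

# Demo: 90 noisy copies of a disc plus 10 pure-noise "contaminant" images.
rng = np.random.default_rng(3)
yy, xx = np.mgrid[:32, :32]
disc = ((xx - 16) ** 2 + (yy - 16) ** 2 < 64).astype(float)
stack = np.concatenate([disc + 0.5 * rng.standard_normal((90, 32, 32)),
                        2.0 * rng.standard_normal((10, 32, 32))])
print(np.abs(robust_class_mean(stack) - disc).mean())
```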
Estimating and Identifying Unspecified Correlation Structure for Longitudinal Data
Hu, Jianhua; Wang, Peng; Qu, Annie
2014-01-01
Identifying correlation structure is important to achieving estimation efficiency in analyzing longitudinal data, and is also crucial for drawing valid statistical inference for large size clustered data. In this paper, we propose a nonparametric method to estimate the correlation structure, which is applicable for discrete longitudinal data. We utilize eigenvector-based basis matrices to approximate the inverse of the empirical correlation matrix and determine the number of basis matrices via model selection. A penalized objective function based on the difference between the empirical and model approximation of the correlation matrices is adopted to select an informative structure for the correlation matrix. The eigenvector representation of the correlation estimation is capable of reducing the risk of model misspecification, and also provides useful information on the specific within-cluster correlation pattern of the data. We show that the proposed method possesses the oracle property and selects the true correlation structure consistently. The proposed method is illustrated through simulations and two data examples on air pollution and sonar signal studies. PMID:26361433
Approximations useful for the prediction of electrostatic discharges for simple electrode geometries
NASA Technical Reports Server (NTRS)
Edmonds, L.
1986-01-01
The report provides approximations for estimating the capacitance and the ratio of electric field strength to potential for a certain class of electrode geometries. The geometry consists of an electrode near a grounded plane, with the electrode being a surface of revolution about the perpendicular to the plane. Examples in the appendix show the accuracy of the capacitance estimate and of the estimate of electric field over potential. When it is possible to estimate the potential of the electrode, knowing the ratio of electric field to potential will help to determine whether an electrostatic discharge is likely to occur. Knowing the capacitance will help to determine the strength of the discharge (the energy released by it) if it does occur. A brief discussion of discharge mechanisms is given. The medium between the electrode and the grounded plane may be a neutral gas, a vacuum, or an uncharged homogeneous isotropic dielectric.
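A small worked example of how the two reported quantities would typically be used together, assuming the standard stored-energy relation E = ½CV² and a breakdown-field criterion; the numbers are illustrative, not from the report.

```python
# Given estimates of capacitance C and the field-to-potential ratio k = E/V
# for an electrode geometry, check whether breakdown is likely and how much
# energy a discharge would release (E_stored = 0.5 * C * V**2).
def discharge_check(C_farads, V_volts, field_per_volt, breakdown_field):
    field = field_per_volt * V_volts          # electric field (V/m)
    energy = 0.5 * C_farads * V_volts**2      # stored energy (joules)
    return field >= breakdown_field, energy

# Illustrative numbers only: 10 pF electrode at 5 kV, k = 2e3 per meter,
# breakdown near 3 MV/m (order of magnitude for air at standard conditions).
likely, energy = discharge_check(10e-12, 5e3, 2e3, 3e6)
print(likely, f"{energy * 1e6:.1f} microjoules")
```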
Williams, Colin F.; Reed, Marshall J.; Mariner, Robert H.
2008-01-01
The U. S. Geological Survey (USGS) is conducting an updated assessment of geothermal resources in the United States. The primary method applied in assessments of identified geothermal systems by the USGS and other organizations is the volume method, in which the recoverable heat is estimated from the thermal energy available in a reservoir. An important focus in the assessment project is on the development of geothermal resource models consistent with the production histories and observed characteristics of exploited geothermal fields. The new assessment will incorporate some changes in the models for temperature and depth ranges for electric power production, preferred chemical geothermometers for estimates of reservoir temperatures, estimates of reservoir volumes, and geothermal energy recovery factors. Monte Carlo simulations are used to characterize uncertainties in the estimates of electric power generation. These new models for the recovery of heat from heterogeneous, fractured reservoirs provide a physically realistic basis for evaluating the production potential of natural geothermal reservoirs.
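A hedged sketch of the volume method with Monte Carlo uncertainty propagation, in the spirit of the assessment described above: thermal energy in place is ρc·V·(T − Tref), scaled by a recovery factor and a conversion efficiency over an assumed production life. All parameter ranges are invented for illustration and are not the USGS assessment's values.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100_000

volume = rng.uniform(1e9, 5e9, n)       # reservoir volume (m^3)
temp = rng.uniform(150, 250, n)         # reservoir temperature (deg C)
t_ref = 15.0                            # reference (rejection) temperature
rho_c = 2.7e6                           # volumetric heat capacity (J/m^3/K)
recovery = rng.uniform(0.05, 0.2, n)    # geothermal energy recovery factor
efficiency = 0.12                       # thermal-to-electric conversion
lifetime_s = 30 * 3.15e7                # 30-year production life (seconds)

heat_in_place = rho_c * volume * (temp - t_ref)            # joules
power_mwe = heat_in_place * recovery * efficiency / lifetime_s / 1e6

print(np.percentile(power_mwe, [5, 50, 95]))  # MWe, with uncertainty
```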
Memory color assisted illuminant estimation through pixel clustering
NASA Astrophysics Data System (ADS)
Zhang, Heng; Quan, Shuxue
2010-01-01
The under-constrained nature of illuminant estimation means that certain assumptions, such as the gray world theory, are needed to resolve the problem. Including more constraints in this process may help explore the useful information in an image and improve the accuracy of the estimated illuminant, provided that the constraints hold. Based on the observation that most personal images contain one or more of the following categories: neutral objects, human beings, sky, and plants, we propose a method for illuminant estimation through the clustering of pixels of gray and three dominant memory colors: skin tone, sky blue, and foliage green. Analysis shows that samples of the above colors cluster within small areas under different illuminants, and their characteristics can be used to effectively detect pixels falling into each of the categories. The algorithm requires knowledge of the spectral sensitivity response of the camera and a spectral database consisting of the CIE standard illuminants and reflectance or radiance databases of samples of the above colors.
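The memory-color clustering itself needs the camera's spectral sensitivities and a spectral database, so it is not reproducible here; the sketch below shows only the gray-world baseline assumption the method builds on, with a von Kries-style diagonal correction.

```python
import numpy as np

def gray_world_illuminant(image):
    """Classic gray-world estimate: scene colors average to gray, so the
    channel means' deviation from gray estimates the illuminant color."""
    means = image.reshape(-1, 3).mean(axis=0)
    return means / means.sum()  # normalized RGB illuminant estimate

def correct(image, illuminant):
    # von Kries-style diagonal correction toward a neutral illuminant.
    gains = illuminant.mean() / illuminant
    return np.clip(image * gains, 0.0, 1.0)

# Demo on a random "scene" under a warm color cast.
rng = np.random.default_rng(5)
scene = rng.random((64, 64, 3)) * np.array([1.2, 1.0, 0.7])
est = gray_world_illuminant(scene)
print(est, correct(scene, est).reshape(-1, 3).mean(axis=0))
```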
Is Jupiter's magnetosphere like a pulsar's or earth's?
NASA Technical Reports Server (NTRS)
Kennel, C. F.; Coroniti, F. V.
1974-01-01
The application of pulsar physics to determining the magnetic structure of Jupiter's outer magnetosphere is discussed. A variety of theoretical models are developed to illuminate broad areas of consistency and conflict between theory and experiment. Two possible models of Jupiter's magnetosphere, a pulsar-like radial outflow model and an earth-like convection model, are examined. A compilation of the simple order-of-magnitude estimates derivable from the various models is provided.
Adding source positions to the IVS Combination
NASA Astrophysics Data System (ADS)
Bachmann, S.; Thaller, D.
2016-12-01
Simultaneous estimation of source positions, Earth orientation parameters (EOPs) and station positions in one common adjustment is crucial for a consistent generation of celestial and terrestrial reference frame (CRF and TRF, respectively). VLBI is the only technique to guarantee this consistency. Previous publications showed that the VLBI intra-technique combination could improve the quality of the EOPs and station coordinates compared to the individual contributions. By now, the combination of EOP and station coordinates is well established within the IVS and in combination with other space geodetic techniques (e.g. inter-technique combined TRF like the ITRF). Most of the contributing IVS Analysis Centers (AC) now provide source positions as a third parameter type (besides EOP and station coordinates), which have not been used for an operational combined solution yet. A strategy for the combination of source positions has been developed and integrated into the routine IVS combination. Investigations are carried out to compare the source positions derived from different IVS ACs with the combined estimates to verify whether the source positions are improved by the combination, as it has been proven for EOP and station coordinates. Furthermore, global solutions of source positions, i.e., so-called catalogues describing a CRF, are generated consistently with the TRF similar to the IVS operational combined quarterly solution. The combined solutions of the source positions time series and the consistently generated TRF and CRF are compared internally to the individual solutions of the ACs as well as to external CRF catalogues and TRFs. Additionally, comparisons of EOPs based on different CRF solutions are presented as an outlook for consistent EOP, CRF and TRF realizations.
Equitability, mutual information, and the maximal information coefficient.
Kinney, Justin B; Atwal, Gurinder S
2014-03-04
How should one quantify the strength of association between two random variables without bias for relationships of a specific form? Despite its conceptual simplicity, this notion of statistical "equitability" has yet to receive a definitive mathematical formalization. Here we argue that equitability is properly formalized by a self-consistency condition closely related to the Data Processing Inequality. Mutual information, a fundamental quantity in information theory, is shown to satisfy this equitability criterion. These findings are at odds with the recent work of Reshef et al. [Reshef DN, et al. (2011) Science 334(6062):1518-1524], which proposed an alternative definition of equitability and introduced a new statistic, the "maximal information coefficient" (MIC), said to satisfy equitability in contradistinction to mutual information. These conclusions, however, were supported only with limited simulation evidence, not with mathematical arguments. Upon revisiting these claims, we prove that the mathematical definition of equitability proposed by Reshef et al. cannot be satisfied by any (nontrivial) dependence measure. We also identify artifacts in the reported simulation evidence. When these artifacts are removed, estimates of mutual information are found to be more equitable than estimates of MIC. Mutual information is also observed to have consistently higher statistical power than MIC. We conclude that estimating mutual information provides a natural (and often practical) way to equitably quantify statistical associations in large datasets.
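A minimal plug-in (histogram) estimator of mutual information, illustrating the kind of comparison the paper makes: dependences of different functional form but similar noise receive similar MI scores. The plug-in estimator is biased for small samples, and the paper's estimation details are not reproduced here.

```python
import numpy as np

def mutual_information(x, y, bins=20):
    """Plug-in (histogram) estimate of mutual information in nats."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px * py)[nz])))

# Linear vs. quadratic relationships with comparable noise levels: an
# approximately "equitable" measure should score them similarly.
rng = np.random.default_rng(6)
x = rng.uniform(-1, 1, 50_000)
print(mutual_information(x, x + 0.3 * rng.standard_normal(x.size)),
      mutual_information(x, x**2 + 0.3 * rng.standard_normal(x.size)))
```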
Visual field asymmetries in visual evoked responses
Hagler, Donald J.
2014-01-01
Behavioral responses to visual stimuli exhibit visual field asymmetries, but cortical folding and the close proximity of visual cortical areas make electrophysiological comparisons between different stimulus locations problematic. Retinotopy-constrained source estimation (RCSE) uses distributed dipole models simultaneously constrained by multiple stimulus locations to provide separation between individual visual areas that is not possible with conventional source estimation methods. Magnetoencephalography and RCSE were used to estimate time courses of activity in V1, V2, V3, and V3A. Responses to left and right hemifield stimuli were not significantly different. Peak latencies for peripheral stimuli were significantly shorter than those for perifoveal stimuli in V1, V2, and V3A, likely related to the greater proportion of magnocellular input to V1 in the periphery. Consistent with previous results, sensor magnitudes for lower field stimuli were about twice as large as for upper field, which is only partially explained by the proximity to sensors for lower field cortical sources in V1, V2, and V3. V3A exhibited both latency and amplitude differences for upper and lower field responses. There were no differences for V3, consistent with previous suggestions that dorsal and ventral V3 are two halves of a single visual area, rather than distinct areas V3 and VP. PMID:25527151
Population structures of Brazilian tall coconut (Cocos nucifera L.) by microsatellite markers
2010-01-01
Coconut palms of the Tall group were introduced to Brazil from the Cape Verde Islands in 1553. The present study sought to evaluate the genetic diversity among and within Brazilian Tall coconut populations. Samples were collected from 195 trees in 10 populations. Genetic diversity was assessed by investigating 13 simple sequence repeat (SSR) loci. This provided a total of 68 alleles, ranging from 2 to 13 alleles per locus, with an average of 5.23. The mean values of gene diversity (He) and observed heterozygosity (Ho) were 0.459 and 0.443, respectively. The genetic differentiation among populations was estimated at θ̂P = 0.1600, and the estimated apparent outcrossing rate was ta = 0.92. Estimates of genetic distances between the populations varied from 0.034 to 0.390. Genetic distance and the corresponding clustering analysis indicate the formation of two groups: the first consists of the Baía Formosa, Georgino Avelino, and São José do Mipibu populations, and the second of the Japoatã, Pacatuba, and Praia do Forte populations. The correlation matrix between genetic and geographic distances was positive and significant at the 1% probability level. Taken together, our results suggest a spatial structuring of the genetic variability among the populations; geographically closer populations exhibited greater similarities. PMID:21637579
NASA Astrophysics Data System (ADS)
Nord, Mark; Cafiero, Carlo; Viviani, Sara
2016-11-01
Statistical methods based on item response theory are applied to experiential food insecurity survey data from 147 countries, areas, and territories to assess data quality and develop methods to estimate national prevalence rates of moderate and severe food insecurity at equal levels of severity across countries. Data were collected from nationally representative samples of 1,000 adults in each country. A Rasch-model-based scale was estimated for each country, and data were assessed for consistency with model assumptions. A global reference scale was calculated based on item parameters from all countries. Each country's scale was adjusted to the global standard, allowing for up to 3 of the 8 scale items to be considered unique in that country if their deviance from the global standard exceeded a set tolerance. With very few exceptions, data from all countries were sufficiently consistent with model assumptions to constitute reasonably reliable measures of food insecurity and were adjustable to the global standard with fair confidence. National prevalence rates of moderate-or-severe food insecurity assessed over a 12-month recall period ranged from 3 percent to 92 percent. The correlations of national prevalence rates with national income, health, and well-being indicators provide external validation of the food security measure.
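For reference, the Rasch model underlying the scale: the probability that a respondent of latent severity θ affirms an item of severity b is logistic in θ − b. Once each country's items are calibrated to a common standard, a fixed threshold on θ defines comparable prevalence rates. The item values below are invented placeholders, not the global reference scale.

```python
import numpy as np

def rasch_prob(theta, b):
    """Rasch model: probability that a respondent with latent severity
    theta affirms an item with severity parameter b."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

# Illustrative severities for 8 items on a common (global-standard) scale.
b_global = np.array([-2.0, -1.2, -0.5, 0.0, 0.6, 1.1, 1.8, 2.5])
theta = 0.8  # one respondent's latent severity
print(rasch_prob(theta, b_global))  # affirmation probability per item
```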
NASA Astrophysics Data System (ADS)
Williams, Westin B.; Michaels, Thomas E.; Michaels, Jennifer E.
2018-04-01
Composite materials used for aerospace applications are highly susceptible to impacts, which can result in barely visible delaminations. Reliable and fast detection of such damage is needed before structural failures occur. One approach is to use ultrasonic guided waves generated from a sparse array consisting of permanently mounted or embedded transducers for performing structural health monitoring. This array can detect introduction of damage after baseline subtraction, and also provide localization and characterization information via the minimum variance imaging algorithm. Imaging performance can vary considerably depending upon where damage is located with respect to the array; however, prior work has shown that knowledge of expected scattering can improve imaging consistency for artificial damage at various locations. In this study, anisotropic material attenuation and wave speed are estimated as a function of propagation angle using wavefield data recorded along radial lines at multiple angles with respect to an omnidirectional guided wave source. Additionally, full wavefield data are recorded before and after the introduction of artificial and impact damage so that wavefield baseline subtraction may be applied. 3-D filtering techniques are then used to reduce noise and isolate scattered waves. A model for estimating scattering of a circular defect is developed and scattering estimates for both artificial and impact damage are presented and compared.
Clark, S; Rose, D J
2001-04-01
To establish reliability estimates of the 75% Limits of Stability Test (75% LOS test) when administered to community-dwelling older adults with a history of falls. Generalizability theory was used to estimate both the relative contribution of identified error sources to the total measurement error and generalizability coefficients. A random effects repeated-measures analysis of variance (ANOVA) was used to assess consistency of LOS test movement variables across both days and targets. A motor control research laboratory in a university setting. Fifty community-dwelling older adults with 2 or more falls in the previous year. Spatial and temporal measures of dynamic balance derived from the 75% LOS test included average movement velocity, maximum center of gravity (COG) excursion, end-point COG excursion, and directional control. Estimated generalizability coefficients for 2 testing days ranged from .58 to .87. Total variance in LOS test measures attributable to inconsistencies in day-to-day test performance (Day and Subject x Day facets) ranged from 2.5% to 8.4%. The ANOVA results indicated that no significant differences were observed in the LOS test variables across the 2 testing days. The 75% LOS test administered to older adult fallers on 2 consecutive days provides consistent and reliable measures of dynamic balance.
Domino, Marisa Elena; Kilany, Mona; Wells, Rebecca; Morrissey, Joseph P
2017-10-01
To examine whether medical homes have heterogeneous effects in different subpopulations, leveraging the interpretations from a variety of statistical techniques. Secondary claims data from the NC Medicaid program for 2004-2007. The sample included all adults with diagnoses of schizophrenia, bipolar disorder, or major depression who were not dually enrolled in Medicare or in a nursing facility. We modeled a number of monthly service use, adherence, and expenditure outcomes using fixed effects, generalized estimating equations with and without inverse probability of treatment weights, and instrumental variables analyses. Data were received from the Carolina Cost and Quality Initiative. The four estimation techniques consistently revealed generally positive associations between medical homes and access to primary care, access to specialty mental health care, greater medication adherence, slightly lower emergency room use, and greater expenditures. These findings were consistent across all three major severe mental illness diagnostic groups. Some heterogeneity in effects was noted, especially in preventive screening. Expanding access to primary care-based medical homes for people with severe mental illness may not save money for insurance providers, due to greater access to important outpatient services with little cost offset. Health services research examining more of the treatment heterogeneity may contribute to more realistic projections about medical home outcomes. © Health Research and Educational Trust.
Information-geometric measures as robust estimators of connection strengths and external inputs.
Tatsuno, Masami; Fellous, Jean-Marc; Amari, Shun-Ichi
2009-08-01
Information geometry has been suggested to provide a powerful tool for analyzing multineuronal spike trains. Among several advantages of this approach, a significant property is the close link between information-geometric measures and neural network architectures. Previous modeling studies established that the first- and second-order information-geometric measures corresponded to the number of external inputs and the connection strengths of the network, respectively. This relationship was, however, limited to a symmetrically connected network, and the number of neurons used in the parameter estimation of the log-linear model needed to be known. Recently, simulation studies of biophysical model neurons have suggested that information geometry can estimate the relative change of connection strengths and external inputs even with asymmetric connections. Inspired by these studies, we analytically investigated the link between the information-geometric measures and the neural network structure with asymmetrically connected networks of N neurons. We focused on the information-geometric measures of orders one and two, which can be derived from the two-neuron log-linear model, because unlike higher-order measures, they can be easily estimated experimentally. Considering the equilibrium state of a network of binary model neurons that obey stochastic dynamics, we analytically showed that the corrected first- and second-order information-geometric measures provided robust and consistent approximation of the external inputs and connection strengths, respectively. These results suggest that information-geometric measures provide useful insights into the neural network architecture and that they will contribute to the study of system-level neuroscience.
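A small sketch of the two-neuron log-linear model from which the order-one and order-two measures are derived: given the four joint firing probabilities of a binary neuron pair, the θ parameters follow in closed form. The paper's corrections for asymmetric networks are not reproduced; this is the uncorrected textbook decomposition.

```python
import numpy as np

def two_neuron_log_linear(p00, p01, p10, p11):
    """First- and second-order information-geometric measures from the
    joint firing probabilities of a binary neuron pair."""
    theta1 = np.log(p10 / p00)                  # first-order, neuron 1
    theta2 = np.log(p01 / p00)                  # first-order, neuron 2
    theta12 = np.log(p11 * p00 / (p10 * p01))   # second-order (interaction)
    return theta1, theta2, theta12

# Demo: estimate the joint probabilities from simulated binned spike trains.
rng = np.random.default_rng(7)
x1 = rng.random(100_000) < 0.2
x2 = (rng.random(100_000) < 0.1) | (x1 & (rng.random(100_000) < 0.3))
probs = (np.mean(~x1 & ~x2), np.mean(~x1 & x2),
         np.mean(x1 & ~x2), np.mean(x1 & x2))
print(two_neuron_log_linear(*probs))
```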
Development and initial validation of the internalization of Asian American stereotypes scale.
Shen, Frances C; Wang, Yu-Wei; Swanson, Jane L
2011-07-01
This research consists of four studies on the initial reliability and validity of the Internalization of Asian American Stereotypes Scale (IAASS), a self-report instrument that measures the degree Asian Americans have internalized racial stereotypes about their own group. The results from the exploratory and confirmatory factor analyses support a stable four-factor structure of the IAASS: Difficulties with English Language Communication, Pursuit of Prestigious Careers, Emotional Reservation, and Expected Academic Success. Evidence for concurrent and discriminant validity is presented. High internal-consistency and test-retest reliability estimates are reported. A discussion of how this scale can contribute to research and practice regarding internalized stereotyping among Asian Americans is provided.
Thermodynamically self-consistent theory for the Blume-Capel model.
Grollau, S; Kierlik, E; Rosinberg, M L; Tarjus, G
2001-04-01
We use a self-consistent Ornstein-Zernike approximation to study the Blume-Capel ferromagnet on three-dimensional lattices. The correlation functions and the thermodynamics are obtained from the solution of two coupled partial differential equations. The theory provides a comprehensive and accurate description of the phase diagram in all regions, including the wing boundaries in a nonzero magnetic field. In particular, the coordinates of the tricritical point are in very good agreement with the best estimates from simulation or series expansion. Numerical and analytical analyses strongly suggest that the theory predicts a universal Ising-like critical behavior along the lambda line and the wing critical lines, and a tricritical behavior governed by mean-field exponents.
DeWitt, Nancy T.; Flocks, James G.; Hansen, Mark; Kulp, Mark; Reynolds, B.J.
2007-01-01
The U.S. Geological Survey (USGS), in cooperation with the University of New Orleans (UNO) and the Louisiana Department of Natural Resources (LDNR), conducted a high-resolution, single-beam bathymetric survey along the Louisiana southern coastal zone from Belle Pass to Caminada Pass. The survey consisted of 483 line kilometers of data acquired in July and August of 2005. This report outlines the methodology and provides the data from the survey. Analysis of the data and comparison to a similar bathymetric survey completed in 1989 show significant loss of seafloor and shoreline retreat, which is consistent with previously published estimates of shoreline change in the study area.
Rainfall estimation for real time flood monitoring using geostationary meteorological satellite data
NASA Astrophysics Data System (ADS)
Veerakachen, Watcharee; Raksapatcharawong, Mongkol
2015-09-01
Rainfall estimation by geostationary meteorological satellite data provides good spatial and temporal resolution. This is advantageous for real time flood monitoring and warning systems. However, a rainfall estimation algorithm developed in one region needs to be adjusted for another climatic region. This work proposes computationally efficient rainfall estimation algorithms based on an Infrared Threshold Rainfall (ITR) method calibrated with regional ground truth. Hourly rain gauge data collected from 70 stations around the Chao-Phraya river basin were used for calibration and validation of the algorithms. The algorithm inputs were derived from FY-2E satellite observations consisting of infrared and water vapor imagery. The results were compared with the Global Satellite Mapping of Precipitation (GSMaP) near real time product (GSMaP_NRT) using the probability of detection (POD), root mean square error (RMSE) and linear correlation coefficient (CC) as performance indices. Comparison with the GSMaP_NRT product for real time monitoring purposes shows that hourly rain estimates from the proposed algorithm with the error adjustment technique (ITR_EA) offer higher POD and approximately the same RMSE and CC with less data latency.
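The three validation indices are straightforward to compute; below is a sketch with hypothetical gauge/satellite pairs, assuming a simple wet/dry threshold for the POD contingency counts.

```python
import numpy as np

def validation_scores(est, obs, rain_threshold=0.1):
    """POD, RMSE, and linear correlation for hourly rain estimates (mm)."""
    hits = np.sum((est >= rain_threshold) & (obs >= rain_threshold))
    misses = np.sum((est < rain_threshold) & (obs >= rain_threshold))
    pod = hits / (hits + misses)
    rmse = float(np.sqrt(np.mean((est - obs) ** 2)))
    cc = float(np.corrcoef(est, obs)[0, 1])
    return pod, rmse, cc

# Hypothetical gauge (obs) and satellite (est) pairs for illustration.
rng = np.random.default_rng(8)
obs = np.maximum(rng.standard_normal(1000), 0) * 2
est = np.maximum(obs + 0.5 * rng.standard_normal(1000), 0)
print(validation_scores(est, obs))
```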
Abundance models improve spatial and temporal prioritization of conservation resources.
Johnston, Alison; Fink, Daniel; Reynolds, Mark D; Hochachka, Wesley M; Sullivan, Brian L; Bruns, Nicholas E; Hallstein, Eric; Merrifield, Matt S; Matsumoto, Sandi; Kelling, Steve
2015-10-01
Conservation prioritization requires knowledge about organism distribution and density. This information is often inferred from models that estimate the probability of species occurrence rather than from models that estimate species abundance, because abundance data are harder to obtain and model. However, occurrence and abundance may not display similar patterns and therefore development of robust, scalable, abundance models is critical to ensuring that scarce conservation resources are applied where they can have the greatest benefits. Motivated by a dynamic land conservation program, we develop and assess a general method for modeling relative abundance using citizen science monitoring data. Weekly estimates of relative abundance and occurrence were compared for prioritizing times and locations of conservation actions for migratory waterbird species in California, USA. We found that abundance estimates consistently provided better rankings of observed counts than occurrence estimates. Additionally, the relationship between abundance and occurrence was nonlinear and varied by species and season. Across species, locations prioritized by occurrence models had only 10-58% overlap with locations prioritized by abundance models, highlighting that occurrence models will not typically identify the locations of highest abundance that are vital for conservation of populations.
Zhu, Lin; Gong, Huili; Chen, Yun; Li, Xiaojuan; Chang, Xiang; Cui, Yijiao
2016-03-01
Hydraulic conductivity is a major parameter affecting the output accuracy of groundwater flow and transport models. The most commonly used semi-empirical formula for estimating conductivity is the Kozeny-Carman equation. However, this method alone does not work well with heterogeneous strata. Two important parameters, grain size and porosity, often show spatial variations at different scales. This study proposes a method for estimating conductivity distributions by combining a stochastic hydrofacies model with geophysical methods. The Markov chain model with transition probability matrix was adopted to reconstruct structures of hydrofacies for deriving spatial deposit information. The geophysical and hydro-chemical data were used to estimate the porosity distribution through Archie's law. Results show that the stochastic simulated hydrofacies model reflects the sedimentary features with an average model accuracy of 78% in comparison with borehole log data in the Chaobai alluvial fan. The estimated conductivity is reasonable and of the same order of magnitude as the outcomes of the pumping tests. The conductivity distribution is consistent with the sedimentary distributions. This study provides more reliable spatial distributions of the hydraulic parameters for further numerical modeling.
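For reference, one common form of the Kozeny-Carman relation mentioned above, converting a grain size and porosity into hydraulic conductivity for water; the constant 180 and the fluid property values are the usual textbook choices, not the study's calibrated values.

```python
def kozeny_carman_conductivity(d50_m, porosity, rho=1000.0, g=9.81, mu=1.0e-3):
    """Hydraulic conductivity (m/s) from a common Kozeny-Carman form:
    intrinsic permeability k = d^2 * phi^3 / (180 * (1 - phi)^2),
    converted with K = k * rho * g / mu for water at ~20 deg C."""
    k = d50_m**2 * porosity**3 / (180.0 * (1.0 - porosity) ** 2)
    return k * rho * g / mu

# Medium sand (d50 ~ 0.4 mm) over a plausible porosity range.
for phi in (0.25, 0.30, 0.35):
    print(phi, kozeny_carman_conductivity(0.4e-3, phi))
```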
An estimate of the number of tropical tree species
Slik, J. W. Ferry; Arroyo-Rodríguez, Víctor; Aiba, Shin-Ichiro; Alvarez-Loayza, Patricia; Alves, Luciana F.; Ashton, Peter; Balvanera, Patricia; Bastian, Meredith L.; Bellingham, Peter J.; van den Berg, Eduardo; Bernacci, Luis; da Conceição Bispo, Polyanna; Blanc, Lilian; Böhning-Gaese, Katrin; Boeckx, Pascal; Bongers, Frans; Boyle, Brad; Bradford, Matt; Brearley, Francis Q.; Breuer-Ndoundou Hockemba, Mireille; Bunyavejchewin, Sarayudh; Calderado Leal Matos, Darley; Castillo-Santiago, Miguel; Catharino, Eduardo L. M.; Chai, Shauna-Lee; Chen, Yukai; Colwell, Robert K.; Chazdon, Robin L.; Clark, Connie; Clark, David B.; Clark, Deborah A.; Culmsee, Heike; Damas, Kipiro; Dattaraja, Handanakere S.; Dauby, Gilles; Davidar, Priya; DeWalt, Saara J.; Doucet, Jean-Louis; Duque, Alvaro; Durigan, Giselda; Eichhorn, Karl A. O.; Eisenlohr, Pedro V.; Eler, Eduardo; Ewango, Corneille; Farwig, Nina; Feeley, Kenneth J.; Ferreira, Leandro; Field, Richard; de Oliveira Filho, Ary T.; Fletcher, Christine; Forshed, Olle; Franco, Geraldo; Fredriksson, Gabriella; Gillespie, Thomas; Gillet, Jean-François; Amarnath, Giriraj; Griffith, Daniel M.; Grogan, James; Gunatilleke, Nimal; Harris, David; Harrison, Rhett; Hector, Andy; Homeier, Jürgen; Imai, Nobuo; Itoh, Akira; Jansen, Patrick A.; Joly, Carlos A.; de Jong, Bernardus H. J.; Kartawinata, Kuswata; Kearsley, Elizabeth; Kelly, Daniel L.; Kenfack, David; Kessler, Michael; Kitayama, Kanehiro; Kooyman, Robert; Larney, Eileen; Laumonier, Yves; Laurance, Susan; Laurance, William F.; Lawes, Michael J.; do Amaral, Ieda Leao; Letcher, Susan G.; Lindsell, Jeremy; Lu, Xinghui; Mansor, Asyraf; Marjokorpi, Antti; Martin, Emanuel H.; Meilby, Henrik; Melo, Felipe P. L.; Metcalfe, Daniel J.; Medjibe, Vincent P.; Metzger, Jean Paul; Millet, Jerome; Mohandass, D.; Montero, Juan C.; de Morisson Valeriano, Márcio; Mugerwa, Badru; Nagamasu, Hidetoshi; Nilus, Reuben; Ochoa-Gaona, Susana; Onrizal; Page, Navendu; Parolin, Pia; Parren, Marc; Parthasarathy, Narayanaswamy; Paudel, Ekananda; Permana, Andrea; Piedade, Maria T. F.; Pitman, Nigel C. A.; Poorter, Lourens; Poulsen, Axel D.; Poulsen, John; Powers, Jennifer; Prasad, Rama C.; Puyravaud, Jean-Philippe; Razafimahaimodison, Jean-Claude; Reitsma, Jan; dos Santos, João Roberto; Roberto Spironello, Wilson; Romero-Saltos, Hugo; Rovero, Francesco; Rozak, Andes Hamuraby; Ruokolainen, Kalle; Rutishauser, Ervan; Saiter, Felipe; Saner, Philippe; Santos, Braulio A.; Santos, Fernanda; Sarker, Swapan K.; Satdichanh, Manichanh; Schmitt, Christine B.; Schöngart, Jochen; Schulze, Mark; Suganuma, Marcio S.; Sheil, Douglas; da Silva Pinheiro, Eduardo; Sist, Plinio; Stevart, Tariq; Sukumar, Raman; Sun, I.-Fang; Sunderland, Terry; Suresh, H. S.; Suzuki, Eizi; Tabarelli, Marcelo; Tang, Jangwei; Targhetta, Natália; Theilade, Ida; Thomas, Duncan W.; Tchouto, Peguy; Hurtado, Johanna; Valencia, Renato; van Valkenburg, Johan L. C. H.; Van Do, Tran; Vasquez, Rodolfo; Verbeeck, Hans; Adekunle, Victor; Vieira, Simone A.; Webb, Campbell O.; Whitfeld, Timothy; Wich, Serge A.; Williams, John; Wittmann, Florian; Wöll, Hannsjoerg; Yang, Xiaobo; Adou Yao, C. Yves; Yap, Sandra L.; Yoneda, Tsuyoshi; Zahawi, Rakan A.; Zakaria, Rahmad; Zang, Runguo; de Assis, Rafael L.; Garcia Luize, Bruno; Venticinque, Eduardo M.
2015-01-01
The high species richness of tropical forests has long been recognized, yet there remains substantial uncertainty regarding the actual number of tropical tree species. Using a pantropical tree inventory database from closed canopy forests, consisting of 657,630 trees belonging to 11,371 species, we use a fitted value of Fisher’s alpha and an approximate pantropical stem total to estimate the minimum number of tropical forest tree species to fall between ∼40,000 and ∼53,000, i.e., at the high end of previous estimates. Contrary to common assumption, the Indo-Pacific region was found to be as species-rich as the Neotropics, with both regions having a minimum of ∼19,000–25,000 tree species. Continental Africa is relatively depauperate with a minimum of ∼4,500–6,000 tree species. Very few species are shared among the African, American, and the Indo-Pacific regions. We provide a methodological framework for estimating species richness in trees that may help refine species richness estimates of tree-dependent taxa. PMID:26034279
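The core arithmetic is Fisher's log-series, S = α ln(1 + N/α), evaluated at a fitted α and an approximate stem total. The inputs below are invented stand-ins, not the paper's fitted values; they only show how the richness estimate scales with α at a fixed stem total.

```python
import numpy as np

def logseries_richness(alpha, n_stems):
    """Expected species count under Fisher's log-series:
    S = alpha * ln(1 + N / alpha)."""
    return alpha * np.log1p(n_stems / alpha)

# Hypothetical alpha values and stem total, for illustration only.
for alpha in (2000, 3000):
    print(alpha, f"{logseries_richness(alpha, 2.5e11):,.0f} species")
```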
A method for estimating cost savings for population health management programs.
Murphy, Shannon M E; McGready, John; Griswold, Michael E; Sylvia, Martha L
2013-04-01
To develop a quasi-experimental method for estimating Population Health Management (PHM) program savings that mitigates common sources of confounding, supports regular updates for continued program monitoring, and estimates model precision. Administrative, program, and claims records from January 2005 through June 2009. Data are aggregated by member and month. Study participants include chronically ill adult commercial health plan members. The intervention group consists of members currently enrolled in PHM, stratified by intensity level. Comparison groups include (1) members never enrolled, and (2) PHM participants not currently enrolled. Mixed model smoothing is employed to regress monthly medical costs on time (in months), a history of PHM enrollment, and monthly program enrollment by intensity level. Comparison group trends are used to estimate expected costs for intervention members. Savings are realized when PHM participants' costs are lower than expected. This method mitigates many of the limitations faced using traditional pre-post models for estimating PHM savings in an observational setting, supports replication for ongoing monitoring, and performs basic statistical inference. This method provides payers with a confident basis for making investment decisions. © Health Research and Educational Trust.
Estimation of High-Dimensional Graphical Models Using Regularized Score Matching
Lin, Lina; Drton, Mathias; Shojaie, Ali
2017-01-01
Graphical models are widely used to model stochastic dependences among large collections of variables. We introduce a new method of estimating undirected conditional independence graphs based on the score matching loss, introduced by Hyvärinen (2005), and subsequently extended in Hyvärinen (2007). The regularized score matching method we propose applies to settings with continuous observations and allows for computationally efficient treatment of possibly non-Gaussian exponential family models. In the well-explored Gaussian setting, regularized score matching avoids issues of asymmetry that arise when applying the technique of neighborhood selection, and compared to existing methods that directly yield symmetric estimates, the score matching approach has the advantage that the considered loss is quadratic and gives piecewise linear solution paths under ℓ1 regularization. Under suitable irrepresentability conditions, we show that ℓ1-regularized score matching is consistent for graph estimation in sparse high-dimensional settings. Through numerical experiments and an application to RNAseq data, we confirm that regularized score matching achieves state-of-the-art performance in the Gaussian case and provides a valuable tool for computationally efficient estimation in non-Gaussian graphical models. PMID:28638498
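A rough sketch of the Gaussian case, under the assumption that the score matching loss for a model N(0, K⁻¹) reduces (up to constants) to the quadratic J(K) = ½ tr(KSK) − tr(K) with S the sample covariance: proximal gradient descent with soft thresholding of off-diagonal entries then yields a sparse precision estimate. This is a generic solver for illustration, not the paper's algorithm or its piecewise-linear path computation.

```python
import numpy as np

def score_matching_graph(S, lam=0.1, n_iter=500):
    """l1-regularized Gaussian score matching via proximal gradient (ISTA).

    Minimizes 0.5*tr(K S K) - tr(K) + lam*||offdiag(K)||_1 over symmetric K;
    the unpenalized minimizer is S^{-1}, the Gaussian precision matrix.
    """
    p = S.shape[0]
    K = np.eye(p)
    step = 1.0 / np.linalg.eigvalsh(S).max()  # conservative step size
    for _ in range(n_iter):
        grad = 0.5 * (S @ K + K @ S) - np.eye(p)
        K = K - step * grad
        off = K - np.diag(np.diag(K))
        off = np.sign(off) * np.maximum(np.abs(off) - step * lam, 0.0)
        K = np.diag(np.diag(K)) + off
        K = 0.5 * (K + K.T)  # keep symmetric
    return K

rng = np.random.default_rng(9)
K_true = np.eye(5)
K_true[0, 1] = K_true[1, 0] = 0.4
X = rng.multivariate_normal(np.zeros(5), np.linalg.inv(K_true), size=5000)
print(np.round(score_matching_graph(np.cov(X.T), lam=0.05), 2))
```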
NASA Astrophysics Data System (ADS)
Max-Moerbeck, W.; Richards, J. L.; Hovatta, T.; Pavlidou, V.; Pearson, T. J.; Readhead, A. C. S.
2014-11-01
We present a practical implementation of a Monte Carlo method to estimate the significance of cross-correlations in unevenly sampled time series of data, whose statistical properties are modelled with a simple power-law power spectral density. This implementation builds on published methods; we introduce a number of improvements in the normalization of the cross-correlation function estimate and a bootstrap method for estimating the significance of the cross-correlations. A closely related matter is the estimation of a model for the light curves, which is critical for the significance estimates. We present a graphical and quantitative demonstration that uses simulations to show how common it is to get high cross-correlations for unrelated light curves with steep power spectral densities. This demonstration highlights the dangers of interpreting them as signs of a physical connection. We show that by using interpolation and the Hanning sampling window function we are able to reduce the effects of red-noise leakage and to recover steep simple power-law power spectral densities. We also introduce the use of a Neyman construction for the estimation of the errors in the power-law index of the power spectral density. This method provides a consistent way to estimate the significance of cross-correlations in unevenly sampled time series of data.
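The paper's central caution is easy to demonstrate: simulate pairs of unrelated light curves with steep power-law spectra (Timmer & König-style frequency-domain synthesis) and look at the distribution of peak cross-correlations. The sketch below glosses over details such as uneven sampling, the Nyquist term, and the paper's normalization and windowing refinements.

```python
import numpy as np

def powerlaw_lightcurve(n, beta, rng):
    """Timmer & Koenig (1995)-style Gaussian noise with a power-law power
    spectral density P(f) ~ f^-beta (Nyquist-term subtleties ignored)."""
    freqs = np.fft.rfftfreq(n)[1:]
    amp = np.sqrt(0.5 * freqs ** (-beta))
    spec = amp * (rng.standard_normal(freqs.size) +
                  1j * rng.standard_normal(freqs.size))
    return np.fft.irfft(np.concatenate(([0.0], spec)), n=n)

def max_crosscorr(x, y):
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    return np.max(np.correlate(x, y, mode="full")) / len(x)

# Monte Carlo null distribution: peak cross-correlation between *unrelated*
# steep-spectrum light curves is often alarmingly high.
rng = np.random.default_rng(10)
peaks = [max_crosscorr(powerlaw_lightcurve(256, 2.5, rng),
                       powerlaw_lightcurve(256, 2.5, rng))
         for _ in range(500)]
print(f"median peak r = {np.median(peaks):.2f}, "
      f"95th percentile = {np.percentile(peaks, 95):.2f}")
```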
Impact of the time scale of model sensitivity response on coupled model parameter estimation
NASA Astrophysics Data System (ADS)
Liu, Chang; Zhang, Shaoqing; Li, Shan; Liu, Zhengyu
2017-11-01
That a model has sensitivity responses to parameter uncertainties is a key concept in implementing model parameter estimation using filtering theory and methodology. Depending on the nature of the associated physics and the characteristic variability of the fluid in a coupled system, the response time scales of a model to parameters can differ, from hourly to decadal. Unlike state estimation, where the update frequency is usually linked with observational frequency, the update frequency for parameter estimation must be associated with the time scale of the model sensitivity response to the parameter being estimated. Here, with a simple coupled model, the impact of model sensitivity response time scales on coupled model parameter estimation is studied. The model includes characteristic synoptic to decadal scales by coupling a long-term varying deep ocean with a slow-varying upper ocean forced by a chaotic atmosphere. Results show that, using the update frequency determined by the model sensitivity response time scale, both the reliability and quality of parameter estimation can be improved significantly, and thus the estimated parameters make the model more consistent with the observations. These simple model results provide a guideline for when real observations are used to optimize the parameters in a coupled general circulation model for improving climate analysis and prediction initialization.
The Role of Type and Source of Uncertainty on the Processing of Climate Models Projections
Benjamin, Daniel M.; Budescu, David V.
2018-01-01
Scientists agree that the climate is changing due to human activities, but there is less agreement about the specific consequences and their timeline. Disagreement among climate projections is attributable to the complexity of climate models that differ in their structure, parameters, initial conditions, etc. We examine how different sources of uncertainty affect people’s interpretation of, and reaction to, information about climate change by presenting participants forecasts from multiple experts. Participants viewed three types of sets of sea-level rise projections: (1) precise, but conflicting; (2) imprecise, but agreeing, and (3) hybrid that were both conflicting and imprecise. They estimated the most likely sea-level rise, provided a range of possible values and rated the sets on several features – ambiguity, credibility, completeness, etc. In Study 1, everyone saw the same hybrid set. We found that participants were sensitive to uncertainty between sources, but not to uncertainty about which model was used. The impacts of conflict and imprecision were combined for estimation tasks and compromised for feature ratings. Estimates were closer to the experts’ original projections, and sets were rated more favorably under imprecision. Estimates were least consistent with (narrower than) the experts in the hybrid condition, but participants rated the conflicting set least favorably. In Study 2, we investigated the hybrid case in more detail by creating several distinct interval sets that combine conflict and imprecision. Two factors drive perceptual differences: overlap – the structure of the forecast set (whether intersecting, nested, tangent, or disjoint) – and asymmetry – the balance of the set. Estimates were primarily driven by asymmetry, and preferences were primarily driven by overlap. Asymmetric sets were least consistent with the experts: estimated ranges were narrower, and estimates of the most likely value were shifted further below the set mean. Intersecting and nested sets were rated similarly to imprecision, and ratings of disjoint and tangent sets were rated like conflict. Our goal was to determine which underlying factors of information sets drive perceptions of uncertainty in consistent, predictable ways. The two studies lead us to conclude that perceptions of agreement require intersection and balance, and overly precise forecasts lead to greater perceptions of disagreement and a greater likelihood of the public discrediting and misinterpreting information. PMID:29636717
Bao, Yuhua; Duan, Naihua; Fox, Sarah A
2006-01-01
Research Objective To estimate the effect of provider advice in routine clinical contacts on patient smoking cessation outcome. Data Source The Sample Adult File from the 2001 National Health Interview Survey. We focus on adult patients who were either current smokers or quit during the last 12 months and had some contact with the health care providers or facilities they most often went to for acute or preventive care. Study Design We estimate a joint model of self-reported smoking cessation and ever receiving advice to quit during medical visits in the past 12 months. Because providers are more likely to advise heavier smokers and/or patients already diagnosed with smoking-related conditions, we use provider advice for diet/nutrition and for physical activity reported by the same patient as instrumental variables for smoking cessation advice to mitigate the selection bias. We conduct additional analyses to examine the robustness of our estimate against the various scenarios by which the exclusion restriction of the instrumental variables may fail. Principal Findings Provider advice doubles the chances of success in (self-reported) smoking cessation by their patients. The probability of quitting by the end of the 12-month reference period increased from 6.9 to 14.7 percent, an effect that is of both statistical (p<.001) and clinical significance. Conclusions Provider advice delivered in routine practice settings has a substantial effect on the success rate of smoking cessation among smoking patients. Providing advice consistently to all smoking patients, compared with routine care, is more effective than doubling the federal excise tax and, in the longer run, likely to outperform some of the other tobacco control policies such as banning smoking in private workplaces. PMID:17116112
Tropical forest plantation biomass estimation using RADARSAT-SAR and TM data of south china
NASA Astrophysics Data System (ADS)
Wang, Chenli; Niu, Zheng; Gu, Xiaoping; Guo, Zhixing; Cong, Pifu
2005-10-01
Forest biomass is one of the most important parameters for global carbon stock models, yet it can only be estimated with great uncertainty. Remote sensing, especially SAR data, offers the possibility of providing relatively accurate forest biomass estimates at a lower cost than field inventory in tropical forests. The goal of this research was to compare the sensitivity of forest biomass to Landsat TM and RADARSAT-SAR data and to assess the efficiency of NDVI, EVI, and other vegetation indices in estimating forest biomass, based on field survey data and GIS in south China. Based on vegetation indices and factor analysis, multiple regression models and neural networks were developed to estimate biomass for each plantation species. For each species, the better relationship between predicted biomass and that measured in the field survey was obtained with a neural network developed for that species. The relationship between predicted and measured biomass derived from vegetation indices differed between species. This study concludes that single bands and many vegetation indices are only weakly correlated with the selected forest biomass. The RADARSAT-SAR backscatter coefficient has a relatively good logarithmic correlation with forest biomass, but neither TM spectral bands nor vegetation indices alone are sufficient to establish an efficient model for biomass estimation due to the saturation of bands and vegetation indices; multiple regression models that combine spectral and environmental variables improve biomass estimation performance. Compared with TM, relatively good estimation results can be achieved with RADARSAT-SAR, but both had limitations for tropical forest biomass estimation. The estimation results obtained are not accurate enough for forest management purposes at the forest stand level. However, the approximate volume estimates derived by the method can be useful in areas where no other forest information is available. This paper therefore provides a better understanding of the relationships between remote sensing data and the forest stand parameters used in forest parameter estimation models.
Dortel, Emmanuelle; Massiot-Granier, Félix; Rivot, Etienne; Million, Julien; Hallier, Jean-Pierre; Morize, Eric; Munaron, Jean-Marie; Bousquet, Nicolas; Chassot, Emmanuel
2013-01-01
Age estimates, typically determined by counting periodic growth increments in calcified structures of vertebrates, are the basis of population dynamics models used for managing exploited or threatened species. In fisheries research, the use of otolith growth rings as an indicator of fish age has increased considerably in recent decades. However, otolith readings include various sources of uncertainty. Current ageing methods, which convert an average ring count into age, provide only point age estimates in which the range of uncertainty is fully ignored. In this study, we describe a hierarchical model for estimating individual ages from repeated otolith readings. The model was developed within a Bayesian framework to explicitly represent the sources of uncertainty associated with age estimation, to allow for individual variation, and to incorporate expert knowledge on parameters. The performance of the proposed model was examined through simulations, and it was then coupled to a two-stanza somatic growth model to evaluate the impact of the age estimation method on the age composition of commercial fisheries catches. We illustrate our approach using the sagittal otoliths of yellowfin tuna of the Indian Ocean collected through large-scale mark-recapture experiments. The simulation performance suggested that the ageing error model was able to estimate the ageing biases and provide accurate age estimates, regardless of the age of the fish. Coupled with the growth model, this approach appeared suitable for modeling the growth of Indian Ocean yellowfin and is consistent with findings of previous studies. The simulations showed that the choice of ageing method can strongly affect growth estimates, with subsequent implications for age-structured data used as inputs for population models. Finally, our modeling approach proved particularly useful for propagating uncertainty around age estimates into the growth estimation process, and it can be applied to any study relying on age estimation. PMID:23637773
NASA Astrophysics Data System (ADS)
Wood, W. W.; Wood, W. W.
2001-05-01
Evaluation of ground-water supply in arid areas requires estimation of annual recharge. Traditional physical-based hydrologic estimates of ground-water recharge carry large uncertainties when applied in arid, mountainous environments because of infrequent, intense rainfall events, destruction of water-measuring structures during those events, and the consequently short hydrologic records. To avoid these problems and reduce the uncertainty of recharge estimates, a chloride mass-balance (CMB) approach was used to provide a time-integrated estimate. Seven basins exhibiting dry stream beds (wadis) in the Asir and Hijaz Mountains, western Saudi Arabia, were selected to evaluate the method. Precipitation among the basins ranged from less than 70 mm/y to nearly 320 mm/y. Rain collected from 35 locations in these basins averaged 2.0 mg/L chloride. Ground water from 140 locations in the wadi alluvium averaged 200 mg/L chloride. This ratio of chloride concentration in precipitation to that in ground water suggests that, on average, approximately 1 percent of the rainfall is recharged, while the remainder is lost to evaporation. Ground-water recharge from precipitation in individual basins ranged from less than 1 to nearly 4 percent and was directly proportional to total precipitation. Independent calculations of recharge using Darcy's law were consistent with these findings and are within the range typically found in other arid areas of the world. Development of ground water has lowered the water level beneath the wadis and provided more storage, thus minimizing chloride loss from the basin by river discharge. Any loss of chloride from the basin results in an overestimate of the recharge flux by the chloride mass-balance approach. In well-constrained systems in arid, mountainous areas, where the mass of chloride entering and leaving the basin is known or can be reasonably estimated, the CMB approach provides a rapid, inexpensive method for estimating time-integrated ground-water recharge.
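The CMB arithmetic itself is one line; using the concentrations reported above, a sketch of the recharge estimate across the observed precipitation range (steady-state assumption, no chloride loss from the basin):

```python
# Chloride mass balance: at steady state, recharge R = P * Cl_precip / Cl_gw.
def cmb_recharge(precip_mm_per_yr, cl_precip_mg_l, cl_gw_mg_l):
    return precip_mm_per_yr * cl_precip_mg_l / cl_gw_mg_l

for p in (70, 320):  # range of basin precipitation reported above
    r = cmb_recharge(p, 2.0, 200.0)
    print(f"P = {p} mm/yr -> recharge ~ {r:.1f} mm/yr "
          f"({100 * r / p:.0f}% of rainfall)")
```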
NASA Astrophysics Data System (ADS)
Pinker, R. T.; Ma, Y.; Nussbaumer, E. A.
2012-04-01
The overall goal of the MEaSUREs activity titled "Developing Consistent Earth System Data Records for the Global Terrestrial Water Cycle" is to develop consistent, long-term Earth System Data Records (ESDRs) for the major components of the terrestrial water cycle at a climatic time scale. The shortwave (SW) and longwave (LW) radiative fluxes at the Earth's surface, which determine the exchange of energy between the land and the atmosphere, are the focus of this presentation. During the last two decades, significant progress has been made in assessing the Earth radiation balance from satellite observations. Yet satellite-based estimates differ from each other, and long-term satellite observations at global scale are not readily available. There is a need to utilize existing records of satellite observations and to improve currently available estimates. This paper reports on improvements introduced to an existing methodology for estimating shortwave (SW) radiative fluxes within the atmospheric system, on the development of a new inference scheme for deriving LW fluxes, on the implementation of the approach with ISCCP DX observations and improved atmospheric inputs for the period 1983-2007, on evaluation against ground observations, and on comparison with independent satellite methods and numerical models. The resulting ESDRs from the entire MEaSUREs project are intended to provide a consistent basis for estimating the mean state and variability of the land surface water cycle at a spatial scale relevant to major global river basins. MEaSUREs Project "Developing Consistent Earth System Data Records for the Global Terrestrial Water Cycle" team members: E. F. Wood (PI)1, T. J. Bohn2, J. L. Bytheway3, X. Feng4, H. Gao2, P. R. Houser4 (Co-I), C. D. Kummerow3 (Co-I), D. P. Lettenmaier2 (Co-I), C. Li5, Y. Ma5, R. F. MacCracken4, M. Pan1, R. T. Pinker5 (Co-I), A. K. Sahoo1, J. Sheffield1. 1. Dept. of CEE, Princeton University, Princeton, NJ, USA. 2. Dept. of CEE, University of Washington, Seattle, WA, USA. 3. Dept. of Atmospheric Science, Colorado State University, Fort Collins, CO, USA. 4. Dept. of Geography and GeoInformation Science, George Mason University, Fairfax, VA, USA. 5. Dept. of Meteorology, University of Maryland, College Park, MD, USA.
Coello Pérez, Eduardo A.; Papenbrock, Thomas F.
2015-07-27
In this paper, we present a model-independent approach to electric quadrupole transitions of deformed nuclei. Based on an effective theory for axially symmetric systems, the leading interactions with electromagnetic fields enter as minimal couplings to gauge potentials, while subleading corrections employ gauge-invariant nonminimal couplings. This approach yields transition operators that are consistent with the Hamiltonian, and the power counting of the effective theory provides us with theoretical uncertainty estimates. We successfully test the effective theory in homonuclear molecules that exhibit a large separation of scales. For ground-state band transitions of rotational nuclei, the effective theory describes data well within theoretical uncertainties at leading order. To probe the theory at subleading order, data with higher precision would be valuable. For transitional nuclei, next-to-leading-order calculations and the high-precision data are consistent within the theoretical uncertainty estimates. In addition, we study the faint interband transitions within the effective theory and focus on the E2 transitions from the 0₂⁺ band (the "β band") to the ground-state band. Here the predictions from the effective theory are consistent with data for several nuclei, thereby proposing a solution to a long-standing challenge.
Patil, Prasad; Peng, Roger D; Leek, Jeffrey T
2016-07-01
A recent study of the replicability of key psychological findings is a major contribution toward understanding the human side of the scientific process. Despite the careful and nuanced analysis reported, the simple narrative disseminated by the mass, social, and scientific media was that the original results were replicated in only 36% of the studies. In the current study, however, we showed that 77% of the replication effect sizes reported were within a 95% prediction interval calculated using the original effect size. Our analysis suggests two critical issues in understanding replication of psychological studies. First, researchers' intuitive expectations for what a replication should show do not always match statistical estimates of replication. Second, when the results of original studies are very imprecise, they create wide prediction intervals, encompassing a broad range of replication effects that are consistent with the original estimates. This may lead to effects that replicate successfully, in that replication results are consistent with statistical expectations, but do not provide much information about the size (or existence) of the true effect. In this light, the results of the Reproducibility Project: Psychology can be viewed as statistically consistent with what one might expect when performing a large-scale replication experiment. © The Author(s) 2016.
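A minimal sketch of the prediction-interval logic, assuming the standard formula in which the replication estimate varies around the original with both standard errors contributing (this mirrors the approach described, though the exact implementation details are the authors'):

```python
# 95% prediction interval for a replication effect, given an original effect.
# If both studies estimate the same true effect, the replication estimate
# should land in theta_orig +/- z * sqrt(se_orig^2 + se_rep^2) ~95% of the time.
import math

def replication_pi(theta_orig, se_orig, se_rep, z=1.96):
    half = z * math.sqrt(se_orig**2 + se_rep**2)
    return theta_orig - half, theta_orig + half

# Hypothetical study: an imprecise original (large SE) yields a wide interval
# that many replication results would be "consistent" with.
lo, hi = replication_pi(theta_orig=0.50, se_orig=0.20, se_rep=0.15)
print(f"95% PI for the replication effect: ({lo:.2f}, {hi:.2f})")
```

The width of this interval is the point of the second issue above: a consistent replication of an imprecise original says little about the true effect size.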
Primary care and behavioral health practice size: the challenge for health care reform.
Bauer, Mark S; Leader, Deane; Un, Hyong; Lai, Zongshan; Kilbourne, Amy M
2012-10-01
We investigated the size profile of US primary care and behavioral health physician practices, since size may impact the ability to institute care management processes (CMPs) that can enhance care quality. We utilized 2009 claims data from a nationwide commercial insurer to estimate practice size by linking providers by tax identification number. We determined the proportion of primary care physicians, psychiatrists, and behavioral health providers practicing in venues of >20 providers per practice (the lower bound for current CMP practice surveys). Among primary care physicians (n=350,350), only 2.1% of practices consisted of >20 providers. Among behavioral health practitioners (n=146,992) and psychiatrists (n=44,449), 1.3% and 1.0% of practices, respectively, had >20 providers. A sensitivity analysis that excluded single-physician practices confirmed the findings, with primary care and psychiatrist practices of >20 providers comprising only 19.4% and 8.8% of practices, respectively (difference: P<0.0001). In secondary analyses, bipolar disorder was used as a tracer condition to estimate practice census for a high-complexity, high-cost behavioral health condition; only 1.3-18 patients per practice had claims for this condition. The tax identification number method for estimating practice size has strengths and limitations that complement those of survey methods. The proportion of practices below the lower bound of prior CMP studies is substantial, and care models and policies will need to address the needs of such practices and their patients. Achieving a critical mass of patients for disorder-specific CMPs will require coordination across multiple small practices.
Efficiently estimating salmon escapement uncertainty using systematically sampled data
Reynolds, Joel H.; Woody, Carol Ann; Gove, Nancy E.; Fair, Lowell F.
2007-01-01
Fish escapement is generally monitored using nonreplicated systematic sampling designs (e.g., via visual counts from towers or hydroacoustic counts). These sampling designs support a variety of methods for estimating the variance of the total escapement. Unfortunately, all the methods give biased results, with the magnitude of the bias determined by the underlying process patterns. Fish escapement commonly exhibits positive autocorrelation and nonlinear patterns, such as diurnal and seasonal cycles. For these patterns, a poor choice of variance estimator can needlessly increase the uncertainty managers must contend with in sustaining fish populations. We illustrate the effect of sampling design and variance estimator choice on variance estimates of total escapement for anadromous salmonids from systematic samples of fish passage. Using simulated tower counts of sockeye salmon Oncorhynchus nerka escapement on the Kvichak River, Alaska, five variance estimators for nonreplicated systematic samples were compared to determine the least biased. Using the least biased variance estimator, four confidence interval estimators were compared for expected coverage and mean interval width. Finally, five systematic sampling designs were compared to determine the design giving the smallest average variance estimate for total annual escapement. For nonreplicated systematic samples of fish escapement, all variance estimators were positively biased. Compared to the other estimators, the least biased estimator reduced bias by 12% to 98% on average. All confidence intervals gave effectively identical results. Replicated systematic sampling designs consistently provided the smallest average estimated variance among those compared.
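The paper's five estimators are not reproduced here, but the flavor of the comparison can be sketched: for an autocorrelated, diurnally varying passage series, the naive simple-random-sampling variance formula tends to overstate the variance of a systematic sample relative to a successive-difference estimator (Wolter's v2). The data and sampling interval below are simulated placeholders.

```python
# Two variance estimators for the estimated total from a nonreplicated
# systematic sample of N population units, sampled every k-th unit.
import numpy as np

def var_total_srs(y, N):
    """Treat the systematic sample as if it were a simple random sample."""
    n = len(y)
    return N**2 * (1 - n / N) * np.var(y, ddof=1) / n

def var_total_sucdiff(y, N):
    """Successive-difference estimator; discounts smooth trends/autocorrelation."""
    n = len(y)
    sd2 = np.sum(np.diff(y) ** 2) / (2 * (n - 1))
    return N**2 * (1 - n / N) * sd2 / n

# Simulated hourly passage with a diurnal cycle, sampled every 6th hour
rng = np.random.default_rng(0)
hours = np.arange(24 * 30)
passage = 500 + 400 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 50, hours.size)
sample = passage[::6]
print(f"SRS-style variance:        {var_total_srs(sample, hours.size):.3e}")
print(f"Successive-difference var: {var_total_sucdiff(sample, hours.size):.3e}")
```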
PockDrug-Server: a new web server for predicting pocket druggability on holo and apo proteins
Hussein, Hiba Abi; Borrel, Alexandre; Geneix, Colette; Petitjean, Michel; Regad, Leslie; Camproux, Anne-Claude
2015-01-01
Predicting a protein pocket's ability to bind drug-like molecules with high affinity, i.e. druggability, is of major interest in the target identification phase of drug discovery. Pocket druggability investigations therefore represent a key step in compound clinical progression projects. Current computational druggability prediction models are tied to a single pocket estimation method, despite pocket estimation uncertainties. In this paper, we propose 'PockDrug-Server' to predict pocket druggability, effective on both (i) estimated pockets guided by ligand proximity (extracted by proximity to a ligand from a holo protein structure) and (ii) estimated pockets based solely on protein structure information (based on amino-acid atoms that form the surface of potential binding cavities). PockDrug-Server provides consistent druggability results across different pocket estimation methods. It is robust with respect to pocket boundary and estimation uncertainties, and thus effective on apo pockets, which are challenging to estimate. It clearly distinguishes druggable from less druggable pockets using different estimation methods and outperformed recent druggability models for apo pockets. It can be run on one or a set of apo/holo proteins using the different pocket estimation methods proposed by our web server, or on any pocket previously estimated by the user. PockDrug-Server is publicly available at: http://pockdrug.rpbs.univ-paris-diderot.fr. PMID:25956651
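PockDrug's actual model is not reproduced here, but the general idea of scoring a pocket from physicochemical descriptors can be sketched with a toy logistic function. Every descriptor, weight, and threshold below is invented for illustration only; the server's real descriptors and model are described in the paper.

```python
# Toy descriptor-based pocket druggability scorer (NOT the PockDrug model).
# Weights and descriptors are hypothetical placeholders.
import math

def druggability_score(hydrophobicity, volume_A3, w=(2.5, 0.004), b=-4.0):
    z = b + w[0] * hydrophobicity + w[1] * volume_A3
    return 1.0 / (1.0 + math.exp(-z))   # probability-like score in (0, 1)

# Hypothetical pockets: (mean residue hydrophobicity, volume in cubic angstroms)
for name, h, v in [("pocket_A", 0.9, 850.0), ("pocket_B", 0.2, 260.0)]:
    print(f"{name}: druggability ~ {druggability_score(h, v):.2f}")
```

The robustness claim in the abstract amounts to such scores staying stable when the same physical pocket is delineated by different estimation methods.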
Basic concepts in three-part quantitative assessments of undiscovered mineral resources
Singer, D.A.
1993-01-01
Since 1975, mineral resource assessments have been made for over 27 areas covering 5×10⁶ km² at various scales using what is now called the three-part form of quantitative assessment. In these assessments, (1) areas are delineated according to the types of deposits permitted by the geology, (2) the amount of metal and some ore characteristics are estimated using grade and tonnage models, and (3) the number of undiscovered deposits of each type is estimated. Permissive boundaries are drawn for one or more deposit types such that the probability of a deposit lying outside the boundary is negligible, that is, less than 1 in 100,000 to 1,000,000. Grade and tonnage models combined with estimates of the number of deposits are the fundamental means of translating geologists' resource assessments into a language that economists can use. Estimates of the number of deposits explicitly represent the probability (or degree of belief) that some fixed but unknown number of undiscovered deposits exist in the delineated tracts. Estimates are by deposit type and must be consistent with the grade and tonnage model. Other guidelines for these estimates include (1) frequency of deposits from well-explored areas, (2) local deposit extrapolations, (3) counting and assigning probabilities to anomalies and occurrences, (4) process constraints, (5) relative frequencies of related deposit types, and (6) area spatial limits. In most cases, estimates are made subjectively, as they are in meteorology, gambling, and geologic interpretations. In three-part assessments, the estimates are internally consistent because delineated tracts are consistent with descriptive models, grade and tonnage models are consistent with descriptive models as well as with known deposits in the area, and estimates of the number of deposits are consistent with grade and tonnage models. All available information is used in the assessment, and uncertainty is explicitly represented. © 1993 Oxford University Press.
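To make the translation to economists concrete, the sketch below shows one conventional way such inputs can be combined: Monte Carlo draws of the number of undiscovered deposits are paired with draws from lognormal tonnage and grade models to yield a distribution of undiscovered metal. All distributions and numbers are hypothetical, not from any actual assessment.

```python
# Illustrative Monte Carlo endowment calculation for a three-part assessment.
# The deposit-count pmf and the lognormal tonnage/grade parameters are invented.
import numpy as np

rng = np.random.default_rng(42)
n_deposits_pmf = {0: 0.2, 1: 0.4, 2: 0.3, 3: 0.1}   # subjective estimate
counts = rng.choice(list(n_deposits_pmf),
                    p=list(n_deposits_pmf.values()), size=20_000)

totals = np.zeros(counts.size)
for i, k in enumerate(counts):
    tons = rng.lognormal(mean=np.log(2e6), sigma=1.0, size=k)    # tonnage model
    grade = rng.lognormal(mean=np.log(0.01), sigma=0.5, size=k)  # grade model
    totals[i] = np.sum(tons * grade)                             # metal, tonnes

# Quantiles of undiscovered metal, the kind of output economists can price
print(np.percentile(totals, [10, 50, 90]))
```

Note how internal consistency is enforced by construction: every simulated deposit is drawn from the same grade and tonnage models that the count estimates were conditioned on.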
Linden, Ariel; Yarnold, Paul R
2016-12-01
Program evaluations often utilize various matching approaches to emulate the randomization process for group assignment in experimental studies. Typically, the matching strategy is implemented, and then covariate balance is assessed before estimating treatment effects. This paper introduces a novel analytic framework utilizing a machine learning algorithm called optimal discriminant analysis (ODA) for assessing covariate balance and estimating treatment effects, once the matching strategy has been implemented. This framework holds several key advantages over the conventional approach: application to any variable metric and number of groups; insensitivity to skewed data or outliers; and use of accuracy measures applicable to all prognostic analyses. Moreover, ODA accepts analytic weights, thereby extending the methodology to any study design where weights are used for covariate adjustment or more precise (differential) outcome measurement. One-to-one matching on the propensity score was used as the matching strategy. Covariate balance was assessed using standardized difference in means (conventional approach) and measures of classification accuracy (ODA). Treatment effects were estimated using ordinary least squares regression and ODA. Using empirical data, ODA produced results highly consistent with those obtained via the conventional methodology for assessing covariate balance and estimating treatment effects. When ODA is combined with matching techniques within a treatment effects framework, the results are consistent with conventional approaches. However, given that it provides additional dimensions and robustness to the analysis versus what can currently be achieved using conventional approaches, ODA offers an appealing alternative. © 2016 John Wiley & Sons, Ltd.
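The conventional balance diagnostic referenced above is easy to state in code: the standardized difference in means for each covariate across matched groups, with |d| < 0.1 a common rule of thumb. The data below are simulated placeholders, not from the study.

```python
# Standardized difference in means for covariate balance after 1:1 matching.
import numpy as np

def std_diff(x_treat, x_ctrl):
    pooled = np.sqrt((np.var(x_treat, ddof=1) + np.var(x_ctrl, ddof=1)) / 2)
    return (np.mean(x_treat) - np.mean(x_ctrl)) / pooled

rng = np.random.default_rng(7)
age_treated = rng.normal(52, 10, 200)   # hypothetical covariate, treated group
age_matched = rng.normal(51, 10, 200)   # hypothetical matched controls
d = std_diff(age_treated, age_matched)
print(f"standardized difference: {d:.3f}  (|d| < 0.1 is a common balance rule)")
```

ODA replaces this mean-based check with classification accuracy: if no classifier can separate treated from matched controls on a covariate, the covariate is balanced.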
NASA Astrophysics Data System (ADS)
Bassiouni, Maoya; Higgins, Chad W.; Still, Christopher J.; Good, Stephen P.
2018-06-01
Vegetation controls on soil moisture dynamics are challenging to measure and translate into scale- and site-specific ecohydrological parameters for simple soil water balance models. We hypothesize that empirical probability density functions (pdfs) of relative soil moisture, or soil saturation, encode sufficient information to determine these ecohydrological parameters. Further, these parameters can be estimated through inverse modeling of the analytical equation for soil saturation pdfs derived from the commonly used stochastic soil water balance framework. We developed a generalizable Bayesian inference framework to estimate ecohydrological parameters consistent with empirical soil saturation pdfs derived from observations at point, footprint, and satellite scales. We applied the inference method to four sites with different land cover and climate, assuming (i) an annual rainfall pattern and (ii) a wet-season rainfall pattern with a dry season of negligible rainfall. The Nash-Sutcliffe efficiencies of the analytical model's fit to soil observations ranged from 0.89 to 0.99. The coefficient of variation of the posterior parameter distributions ranged from less than 1% to 15%. Parameter identifiability was not significantly improved in the more complex seasonal model; however, small differences in parameter values indicate that the annual model may have absorbed dry-season dynamics. Parameter estimates were most constrained at scales and locations where soil water dynamics are more sensitive to the fitted ecohydrological parameters of interest. In these cases, model inversion converged more slowly but ultimately provided better goodness of fit and lower uncertainty. Results were robust using as few as 100 daily observations randomly sampled from the full records, demonstrating the advantage of analyzing soil saturation pdfs instead of time series to estimate ecohydrological parameters from sparse records. Our work combines modeling and empirical approaches in ecohydrology and provides a simple framework for obtaining scale- and site-specific analytical descriptions of soil moisture dynamics consistent with soil moisture observations.
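A heavily simplified sketch of the inference idea follows: here a Beta density stands in for the analytical soil-saturation pdf (the real framework uses the stochastic water-balance solution, whose parameters carry ecohydrological meaning), and a basic Metropolis sampler recovers the stand-in parameters from saturation observations.

```python
# Toy Bayesian inversion of a saturation pdf. The Beta density is a stand-in
# for the analytical stochastic water-balance pdf; all numbers are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
obs = rng.beta(2.0, 5.0, size=100)      # e.g., 100 daily saturation readings

def log_post(theta):
    a, b = theta
    if a <= 0 or b <= 0:
        return -np.inf                  # flat priors restricted to a, b > 0
    return stats.beta.logpdf(obs, a, b).sum()

theta = np.array([1.0, 1.0])
samples = []
for _ in range(5000):                   # random-walk Metropolis
    prop = theta + rng.normal(0, 0.2, size=2)
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop
    samples.append(theta.copy())

print(np.mean(samples[1000:], axis=0))  # posterior means for (a, b)
```

Because the likelihood sees only the distribution of saturation values, not their ordering, 100 randomly sampled days carry nearly the same information as the full record, which is the advantage claimed above.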
Hooper, Cornelia M.; Stevens, Tim J.; Saukkonen, Anna; ...
2017-10-12
Measuring changes in protein or organelle abundance in the cell is an essential, but challenging, aspect of cell biology. Frequently used methods for determining organelle abundance typically rely on detection of only a few marker proteins, and so are unsatisfactory. In silico estimates of protein abundances from publicly available protein spectra can provide useful standard abundance values but contain only data from tissue proteomes and are not coupled to organelle localization data. A new protein abundance score, the normalized protein abundance scale (NPAS), expands the number of scored proteins and the scoring accuracy of lower-abundance proteins in Arabidopsis. NPAS was combined with subcellular protein localization data, facilitating quantitative estimation of organelle abundance during routine experimental procedures. A suite of targeted proteomics markers for subcellular compartments was developed, enabling independent verification of the in silico estimates of relative organelle abundance. Estimation of relative organelle abundance was found to be reproducible and consistent over a range of tissues and growth conditions. The in silico abundance estimations and localization data have been combined into an online tool, multiple marker abundance profiling, available in the SUBA4 toolbox (http://suba.live).
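The in silico estimation step lends itself to a simple sketch: given per-protein abundance scores and a subcellular localization call for each protein, organelle abundance is the localization-wise aggregate of the scores. The protein IDs, scores, and localizations below are toy values, not NPAS or SUBA4 data.

```python
# Toy aggregation of per-protein abundance scores into relative organelle
# abundance. Real inputs would be NPAS scores and SUBA4 localizations.
abundance = {"P1": 120.0, "P2": 30.0, "P3": 50.0, "P4": 200.0}
location = {"P1": "mitochondrion", "P2": "plastid",
            "P3": "mitochondrion", "P4": "cytosol"}

totals: dict[str, float] = {}
for prot, score in abundance.items():
    comp = location[prot]
    totals[comp] = totals.get(comp, 0.0) + score

grand = sum(totals.values())
for comp, t in sorted(totals.items()):
    print(f"{comp}: {100 * t / grand:.1f}% of scored protein mass")
```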
A global estimate of the full oceanic 13C Suess effect since the preindustrial
NASA Astrophysics Data System (ADS)
Eide, Marie; Olsen, Are; Ninnemann, Ulysses S.; Eldevik, Tor
2017-03-01
We present the first observation-based estimate of the full global ocean 13C Suess effect since preindustrial times. It was derived by first using the method of Olsen and Ninnemann (2010) to calculate 13C Suess effect estimates on sections spanning the world ocean, which were then mapped onto a global 1° × 1° grid. We find a strong 13C Suess effect in the upper 1000 m of all basins, with the strongest decrease in the subtropical gyres of the Northern Hemisphere, where δ13C of dissolved inorganic carbon has decreased by more than 0.8‰ since the industrial revolution. At greater depths, a significant 13C Suess effect can be detected only in the northern parts of the North Atlantic Ocean. The relationship between the 13C Suess effect and the concentration of anthropogenic carbon varies strongly between water masses, reflecting the degree to which source waters equilibrate with the atmospheric 13C Suess effect before sinking. Finally, we estimate a global ocean inventory of anthropogenic CO2 of 92 ± 46 Gt C. This estimate is largely independent of previous estimates and consistent with them within the uncertainties.
Winter bird population studies and project prairie birds for surveying grassland birds
Twedt, D.J.; Hamel, P.B.; Woodrey, M.S.
2008-01-01
We compared 2 survey methods for assessing winter bird communities in temperate grasslands: Winter Bird Population Study surveys are area searches that have long been used in a variety of habitats, whereas Project Prairie Birds surveys employ active-flushing techniques on strip transects and are intended for use in grasslands. We used both methods to survey birds on 14 herbaceous reforested sites and 9 coastal pine savannas during winter and compared the resulting estimates of species richness and relative abundance. The two techniques did not yield similar estimates of avian populations. Winter Bird Population Studies consistently produced higher estimates of species richness, whereas Project Prairie Birds produced higher estimates of abundance for some species. When it is important to identify all species within the winter bird community, Winter Bird Population Studies should be the survey method of choice. If estimates of the abundance of relatively secretive grassland bird species are desired, the use of Project Prairie Birds protocols is warranted. However, we suggest that both survey techniques, as currently employed, are deficient, and we recommend that distance-based survey methods providing species-specific estimates of detection probabilities be incorporated into these surveys.