Sample records for estimation approach based

  1. Two Approaches to Estimation of Classification Accuracy Rate under Item Response Theory

    ERIC Educational Resources Information Center

    Lathrop, Quinn N.; Cheng, Ying

    2013-01-01

    Within the framework of item response theory (IRT), there are two recent lines of work on the estimation of classification accuracy (CA) rate. One approach estimates CA when decisions are made based on total sum scores, the other based on latent trait estimates. The former is referred to as the Lee approach, and the latter, the Rudner approach,…
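
    As a rough illustration of the latent-trait (Rudner-type) route mentioned above, the sketch below treats each examinee's trait estimate as normally distributed with its standard error and accumulates the probability of a consistent pass/fail decision. The cutoff and the example values are invented, not from the paper.

    ```python
    # A minimal sketch of a Rudner-style classification accuracy estimate.
    from scipy.stats import norm

    def rudner_ca(theta_hats, ses, cutoff):
        """Expected proportion of consistent pass/fail classifications."""
        total = 0.0
        for th, se in zip(theta_hats, ses):
            p_above = 1.0 - norm.cdf(cutoff, loc=th, scale=se)
            # Probability of landing on the same side of the cutoff as the estimate.
            total += p_above if th >= cutoff else 1.0 - p_above
        return total / len(theta_hats)

    print(rudner_ca([-0.5, 0.2, 1.3], [0.3, 0.25, 0.4], cutoff=0.0))
    ```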

  2. Distributed Damage Estimation for Prognostics based on Structural Model Decomposition

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew; Bregon, Anibal; Roychoudhury, Indranil

    2011-01-01

    Model-based prognostics approaches capture system knowledge in the form of physics-based models of components, and how they fail. These methods consist of a damage estimation phase, in which the health state of a component is estimated, and a prediction phase, in which the health state is projected forward in time to determine end of life. However, the damage estimation problem is often multi-dimensional and computationally intensive. We propose a model decomposition approach adapted from the diagnosis community, called possible conflicts, in order to both improve the computational efficiency of damage estimation, and formulate a damage estimation approach that is inherently distributed. Local state estimates are combined into a global state estimate from which prediction is performed. Using a centrifugal pump as a case study, we perform a number of simulation-based experiments to demonstrate the approach.

  3. Investigating the Importance of the Pocket-estimation Method in Pocket-based Approaches: An Illustration Using Pocket-ligand Classification.

    PubMed

    Caumes, Géraldine; Borrel, Alexandre; Abi Hussein, Hiba; Camproux, Anne-Claude; Regad, Leslie

    2017-09-01

Small molecules interact with their protein target on surface cavities known as binding pockets. Pocket-based approaches are very useful in all phases of drug design. Their first step is estimating the binding pocket based on protein structure. The available pocket-estimation methods produce different pockets for the same target. The aim of this work is to investigate the effects of different pocket-estimation methods on the results of pocket-based approaches. We focused on the effect of three pocket-estimation methods on a pocket-ligand (PL) classification. This pocket-based approach is useful for understanding the correspondence between the pocket and ligand spaces and for developing pharmacological profiling models. We found that pocket-estimation methods yield different binding pockets in terms of boundaries and properties, and that these differences are responsible for the variation in the PL classification results, which can affect the detected correspondence between pocket and ligand profiles. Thus, we highlighted the importance of the choice of pocket-estimation method in pocket-based approaches. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. A Comparison of Turbidity-Based and Streamflow-Based Estimates of Suspended-Sediment Concentrations in Three Chesapeake Bay Tributaries

    USGS Publications Warehouse

    Jastram, John D.; Moyer, Douglas; Hyer, Kenneth

    2009-01-01

Fluvial transport of sediment into the Chesapeake Bay estuary is a persistent water-quality issue with major implications for the overall health of the bay ecosystem. Accurately and precisely estimating the suspended-sediment concentrations (SSC) and loads that are delivered to the bay, however, remains challenging. Although manual sampling of SSC produces an accurate series of point-in-time measurements, robust extrapolation to unmeasured periods (especially high-flow periods) has proven to be difficult. Sediment concentrations typically have been estimated using regression relations between individual SSC values and associated streamflow values; however, suspended-sediment transport during storm events is extremely variable, and it is often difficult to relate a unique SSC to a given streamflow. With this limitation for estimating SSC, innovative approaches for generating detailed records of suspended-sediment transport are needed. One effective method for improved suspended-sediment determination involves the continuous monitoring of turbidity as a surrogate for SSC. Turbidity measurements are theoretically well correlated with SSC because turbidity represents a measure of water clarity that is directly influenced by suspended sediments; thus, turbidity-based estimation models typically are effective tools for generating SSC data. The U.S. Geological Survey, in cooperation with the U.S. Environmental Protection Agency Chesapeake Bay Program and Virginia Department of Environmental Quality, initiated continuous turbidity monitoring on three major tributaries of the bay - the James, Rappahannock, and North Fork Shenandoah Rivers - to evaluate the use of turbidity as a sediment surrogate in rivers that deliver sediment to the bay. Results of this surrogate approach were compared to the traditionally applied streamflow-based approach for estimating SSC. Additionally, evaluation and comparison of these two approaches were conducted for nutrient estimations. Results demonstrate that the application of turbidity-based estimation models provides an improved method for generating a continuous record of SSC, relative to the classical approach that uses streamflow as a surrogate for SSC. Turbidity-based estimates of SSC were found to be more accurate and precise than SSC estimates from streamflow-based approaches. The turbidity-based SSC estimation models explained 92 to 98 percent of the variability in SSC, while streamflow-based models explained 74 to 88 percent of the variability in SSC. Furthermore, the mean absolute error of turbidity-based SSC estimates was 50 to 87 percent less than the corresponding values from the streamflow-based models. Statistically significant differences were detected between the distributions of residual errors and estimates from the two approaches, indicating that the turbidity-based approach yields estimates of SSC with greater precision than the streamflow-based approach. Similar improvements were identified for turbidity-based estimates of total phosphorus, which is strongly related to turbidity because total phosphorus occurs predominantly in particulate form. Total nitrogen estimation models based on turbidity and streamflow generated estimates of similar quality, with the turbidity-based models providing slight improvements in the quality of estimations. This result is attributed to the understanding that nitrogen transport is dominated by dissolved forms that relate less directly to streamflow and turbidity.
Improvements in concentration estimation resulted in improved estimates of load. Turbidity-based suspended-sediment loads estimated for the James River at Cartersville, VA, monitoring station exhibited tighter confidence interval bounds and a coefficient of variation of 12 percent, compared with a coefficient of variation of 38 percent for the streamflow-based load.
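
    The comparison above comes down to two log-log regressions. The sketch below, on synthetic data, fits SSC against turbidity and against streamflow and reports the share of variance each surrogate explains; nothing here uses the study's actual data.

    ```python
    # A minimal sketch of the two surrogate models: OLS of log(SSC) on
    # log(turbidity) versus log(streamflow), compared by R^2.
    import numpy as np

    rng = np.random.default_rng(0)
    turbidity = rng.lognormal(2.0, 1.0, 200)                     # NTU (synthetic)
    streamflow = turbidity ** 0.6 * rng.lognormal(0, 0.8, 200)   # cfs (synthetic)
    ssc = 5.0 * turbidity ** 0.9 * rng.lognormal(0, 0.2, 200)    # mg/L (synthetic)

    def loglog_r2(x, y):
        """Fit log(y) = b0 + b1*log(x); return coefficient of determination."""
        b1, b0 = np.polyfit(np.log(x), np.log(y), 1)
        resid = np.log(y) - (b0 + b1 * np.log(x))
        return 1.0 - resid.var() / np.log(y).var()

    print("turbidity-based R^2 :", loglog_r2(turbidity, ssc))
    print("streamflow-based R^2:", loglog_r2(streamflow, ssc))
    ```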

  5. Fuzzy multinomial logistic regression analysis: A multi-objective programming approach

    NASA Astrophysics Data System (ADS)

    Abdalla, Hesham A.; El-Sayed, Amany A.; Hamed, Ramadan

    2017-05-01

Parameter estimation for multinomial logistic regression is usually based on maximizing the likelihood function. For large, well-balanced datasets, maximum likelihood (ML) estimation is a satisfactory approach. Unfortunately, ML can fail completely, or at least produce poor results, in terms of estimated probabilities and confidence intervals of parameters, especially for small datasets. In this study, a new approach based on fuzzy concepts is proposed to estimate the parameters of the multinomial logistic regression. The study assumes that the parameters of multinomial logistic regression are fuzzy. Based on the extension principle stated by Zadeh and Bárdossy's proposition, a multi-objective programming approach is suggested to estimate these fuzzy parameters. A simulation study is used to evaluate the performance of the new approach versus the ML approach. Results show that the proposed model outperforms ML for small datasets.

  6. Evaluating a Pivot-Based Approach for Bilingual Lexicon Extraction

    PubMed Central

    Kim, Jae-Hoon; Kwon, Hong-Seok; Seo, Hyeong-Won

    2015-01-01

A pivot-based approach for bilingual lexicon extraction is based on the similarity of context vectors represented by words in a pivot language like English. In this paper, in order to show the validity and usability of the pivot-based approach, we evaluate the approach together with two different methods for estimating context vectors: one estimates them from two parallel corpora based on word association between source words (resp., target words) and pivot words, and the other estimates them from two parallel corpora based on word alignment tools for statistical machine translation. Empirical results on two language pairs (Korean-Spanish and Korean-French) show that the pivot-based approach is very promising for resource-poor languages, confirming its validity and usability. Furthermore, the method also performs well for low-frequency words. PMID:25983745
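
    A minimal sketch of the pivot idea: represent a source word and candidate target words as co-occurrence vectors over a shared pivot (English) vocabulary and rank candidates by cosine similarity. The tiny vectors and the words below are invented, not data from the paper.

    ```python
    import numpy as np

    pivot_vocab = ["water", "river", "drink", "cold"]

    def cosine(u, v):
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

    # Co-occurrence counts with pivot words, e.g. estimated from parallel corpora.
    src_vec = np.array([10.0, 4.0, 7.0, 1.0])       # source-language word (hypothetical)
    candidates = {
        "agua": np.array([9.0, 3.0, 8.0, 2.0]),     # target candidates (hypothetical)
        "fuego": np.array([0.0, 1.0, 0.0, 5.0]),
    }
    ranked = sorted(candidates, key=lambda w: cosine(src_vec, candidates[w]), reverse=True)
    print(ranked)  # best translation candidate first
    ```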

  7. A non-stationary cost-benefit analysis approach for extreme flood estimation to explore the nexus of 'Risk, Cost and Non-stationarity'

    NASA Astrophysics Data System (ADS)

    Qi, Wei

    2017-11-01

Cost-benefit analysis is commonly used for engineering planning and design problems in practice. However, previous cost-benefit based design flood estimation relies on a stationarity assumption. This study develops a non-stationary cost-benefit based design flood estimation approach. The approach integrates a non-stationary probability distribution function into cost-benefit analysis, so that the influence of non-stationarity on the expected total cost (flood damage plus construction costs) and on design flood estimation can be quantified. To facilitate design flood selection, a 'Risk-Cost' analysis approach is developed, which reveals the nexus of extreme flood risk, expected total cost and design life periods. Two basins, with 54-year and 104-year flood records respectively, are utilized to illustrate the application. It is found that the developed approach can effectively reveal changes in expected total cost and extreme floods over different design life periods. In addition, trade-offs are found between extreme flood risk and expected total cost, which reflect increases in cost to mitigate risk. Compared with stationary approaches, which generate only one expected total cost curve and therefore a single design flood estimate, the proposed approach generates design flood estimation intervals, and the 'Risk-Cost' analysis selects a design flood value from these intervals based on the trade-offs between extreme flood risk and expected total cost. This study provides a new approach towards a better understanding of the influence of non-stationarity on expected total cost and design floods, and could be beneficial to cost-benefit based non-stationary design flood estimation across the world.
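
    A minimal sketch of the cost-benefit calculation under non-stationarity: the expected total cost of a candidate design flood is a construction cost growing with the design value, plus expected damage over the design life with annual exceedance probabilities from a Gumbel distribution whose location parameter drifts in time. All cost figures and trend parameters are invented.

    ```python
    import numpy as np
    from scipy.stats import gumbel_r
    from scipy.optimize import minimize_scalar

    def expected_total_cost(q, years=50, damage=5e6, unit_cost=2e3):
        construction = unit_cost * q
        # Non-stationary location parameter: the flood regime intensifies over time.
        exceed = [1.0 - gumbel_r.cdf(q, loc=1000 + 4.0 * t, scale=300)
                  for t in range(years)]
        return construction + damage * sum(exceed)

    res = minimize_scalar(expected_total_cost, bounds=(1000, 6000), method="bounded")
    print("cost-optimal design flood (m^3/s):", round(res.x, 1))
    ```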

  8. Species richness in soil bacterial communities: a proposed approach to overcome sample size bias.

    PubMed

    Youssef, Noha H; Elshahed, Mostafa S

    2008-09-01

Estimates of species richness based on 16S rRNA gene clone libraries are increasingly utilized to gauge the level of bacterial diversity within various ecosystems. However, previous studies have indicated that, regardless of the approach utilized, species richness estimates are dependent on the size of the analyzed clone libraries. Here we propose an approach to overcome sample size bias in species richness estimates in complex microbial communities. Parametric (maximum likelihood (ML)-based and rarefaction curve-based) and non-parametric approaches were used to estimate species richness in a library of 13,001 near full-length 16S rRNA clones derived from soil, as well as in multiple subsets of the original library. Species richness estimates increased with library size. To obtain a sample size-unbiased estimate of species richness, we calculated the theoretical clone library sizes required to encounter the estimated species richness at various clone library sizes, used curve fitting to determine the theoretical clone library size required to encounter the "true" species richness, and subsequently determined the corresponding sample size-unbiased species richness value. Using this approach, sample size-unbiased estimates of 17,230, 15,571, and 33,912 were obtained for the ML-based, rarefaction curve-based, and ACE-1 estimators, respectively, compared to bias-uncorrected values of 15,009, 11,913, and 20,909.
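
    A compressed sketch of the extrapolation idea: richness estimates computed on nested subsets of the library are fit with a saturating curve whose asymptote serves as a sample-size-unbiased value. This collapses the paper's multi-step procedure into a single curve fit, and the subset estimates below are invented.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    library_sizes = np.array([1000, 2000, 4000, 8000, 13001])
    richness_est = np.array([4200, 6900, 9800, 12600, 15009])  # e.g. ML-based estimates

    def saturating(n, s_max, k):
        return s_max * n / (k + n)

    (s_max, k), _ = curve_fit(saturating, library_sizes, richness_est, p0=(2e4, 5e3))
    print("sample-size-unbiased richness estimate:", round(s_max))
    ```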

  9. Accurate position estimation methods based on electrical impedance tomography measurements

    NASA Astrophysics Data System (ADS)

    Vergara, Samuel; Sbarbaro, Daniel; Johansen, T. A.

    2017-08-01

Electrical impedance tomography (EIT) is a technology that estimates the electrical properties of a body or a cross section. Its main advantages are its non-invasiveness, low cost and operation free of radiation. The estimation of the conductivity field leads to low-resolution images compared with other technologies, and to high computational cost. However, in many applications the target information lies in a low intrinsic dimensionality of the conductivity field. This work addresses the estimation of this low-dimensional information, proposing optimization-based and data-driven approaches. The accuracy of the results obtained with these approaches depends on modelling and experimental conditions. Optimization approaches are sensitive to model discretization, the type of cost function and the search algorithm. Data-driven methods are sensitive to the assumed model structure and the data set used for parameter estimation. The system configuration and experimental conditions, such as the number of electrodes and the signal-to-noise ratio (SNR), also have an impact on the results. In order to illustrate the effects of all these factors, the position estimation of a circular anomaly is addressed. Optimization methods based on weighted error cost functions and derivative-free optimization algorithms provided the best results. Data-driven approaches based on linear models provided, in this case, good estimates, but the use of nonlinear models enhanced the estimation accuracy. The results obtained by optimization-based algorithms were less sensitive to experimental conditions, such as the number of electrodes and SNR, than data-driven approaches. Position estimation mean squared errors under both simulation and experimental conditions were more than twice as large for the optimization-based approaches as for the data-driven ones. The experimental position estimation mean squared error of the data-driven models using a 16-electrode setup was less than 0.05% of the tomograph radius value. These results demonstrate that the proposed approaches can estimate an object’s position accurately based on EIT measurements if enough process information is available for training or modelling. Since they do not require complex calculations, it is possible to use them in real-time applications without requiring high-performance computers.
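
    A minimal sketch of the optimization-based route: a derivative-free search for the anomaly position that minimizes a weighted squared error between measured and modeled boundary voltages. The forward_model function here is a smooth stand-in stub; a real EIT solver would take its place.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def forward_model(pos):
        """Stub: boundary voltages as a smooth function of anomaly position."""
        x, y = pos
        angles = np.linspace(0, 2 * np.pi, 16, endpoint=False)  # 16 electrodes
        return 1.0 / (1.0 + (np.cos(angles) - x) ** 2 + (np.sin(angles) - y) ** 2)

    true_pos = np.array([0.3, -0.2])
    measured = forward_model(true_pos) + np.random.default_rng(1).normal(0, 1e-3, 16)
    weights = np.ones(16)  # could down-weight noisy electrodes

    cost = lambda p: np.sum(weights * (forward_model(p) - measured) ** 2)
    res = minimize(cost, x0=[0.0, 0.0], method="Nelder-Mead")  # derivative-free
    print("estimated position:", res.x)
    ```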

  10. Statistical inference for remote sensing-based estimates of net deforestation

    Treesearch

    Ronald E. McRoberts; Brian F. Walters

    2012-01-01

    Statistical inference requires expression of an estimate in probabilistic terms, usually in the form of a confidence interval. An approach to constructing confidence intervals for remote sensing-based estimates of net deforestation is illustrated. The approach is based on post-classification methods using two independent forest/non-forest classifications because...

  11. A robust vision-based sensor fusion approach for real-time pose estimation.

    PubMed

    Assa, Akbar; Janabi-Sharifi, Farrokh

    2014-02-01

    Object pose estimation is of great importance to many applications, such as augmented reality, localization and mapping, motion capture, and visual servoing. Although many approaches based on a monocular camera have been proposed, only a few works have concentrated on applying multicamera sensor fusion techniques to pose estimation. Higher accuracy and enhanced robustness toward sensor defects or failures are some of the advantages of these schemes. This paper presents a new Kalman-based sensor fusion approach for pose estimation that offers higher accuracy and precision, and is robust to camera motion and image occlusion, compared to its predecessors. Extensive experiments are conducted to validate the superiority of this fusion method over currently employed vision-based pose estimation algorithms.
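
    A minimal sketch of Kalman-based fusion for a single pose coordinate: a constant-velocity state model with measurements from two cameras stacked into one update. The noise levels and the trajectory are invented; a full pose filter would carry six or more states and camera-specific measurement models.

    ```python
    import numpy as np

    dt = 0.1
    F = np.array([[1, dt], [0, 1]])           # state transition: [angle, rate]
    Q = 1e-4 * np.eye(2)                       # process noise
    H = np.array([[1.0, 0.0], [1.0, 0.0]])     # both cameras observe the angle
    R = np.diag([0.02, 0.05])                  # camera A more precise than B

    x, P = np.zeros(2), np.eye(2)
    rng = np.random.default_rng(2)
    for k in range(50):
        truth = 0.5 * np.sin(0.2 * k)          # synthetic trajectory
        z = truth + rng.normal(0, np.sqrt(np.diag(R)))
        x, P = F @ x, F @ P @ F.T + Q                  # predict
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)                 # fuse both measurements
        x = x + K @ (z - H @ x)
        P = (np.eye(2) - K @ H) @ P
    print("fused angle estimate:", x[0])
    ```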

  12. Comprehensive analysis of proton range uncertainties related to stopping-power-ratio estimation using dual-energy CT imaging

    NASA Astrophysics Data System (ADS)

    Li, B.; Lee, H. C.; Duan, X.; Shen, C.; Zhou, L.; Jia, X.; Yang, M.

    2017-09-01

The dual-energy CT-based (DECT) approach holds promise in reducing the overall uncertainty in proton stopping-power-ratio (SPR) estimation as compared to the conventional stoichiometric calibration approach. The objective of this study was to analyze the factors contributing to uncertainty in SPR estimation using the DECT-based approach and to derive a comprehensive estimate of the range uncertainty associated with SPR estimation in treatment planning. Two state-of-the-art DECT-based methods were selected and implemented on a Siemens SOMATOM Force DECT scanner. The uncertainties were first divided into five independent categories. The uncertainty associated with each category was estimated for lung, soft and bone tissues separately. A single composite uncertainty estimate was eventually determined for three tumor sites (lung, prostate and head-and-neck) by weighting the relative proportion of each tissue group for that specific site. The uncertainties associated with the two selected DECT methods were found to be similar; therefore, the following results apply to both methods. The overall uncertainty (1σ) in SPR estimation with the DECT-based approach was estimated to be 3.8%, 1.2% and 2.0% for lung, soft and bone tissues, respectively. The dominant contributor to uncertainty in the DECT approach was imaging uncertainty, followed by DECT modeling uncertainty. Our study showed that the DECT approach can reduce the overall range uncertainty to approximately 2.2% (2σ) in clinical scenarios, in contrast to the previously reported 1%.
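
    A minimal sketch of the uncertainty bookkeeping: category uncertainties, assumed independent, combine in quadrature within each tissue group, and a site-level value weights the tissue groups by their relative proportion. The numbers and the exact weighting scheme are illustrative assumptions, not the paper's values.

    ```python
    import math

    categories = {                # 1-sigma SPR uncertainty (%) per tissue group
        "lung": [3.5, 1.2, 0.8],  # e.g. imaging, DECT modeling, other sources
        "soft": [1.0, 0.5, 0.4],
        "bone": [1.7, 0.9, 0.5],
    }
    # Independent categories combine in quadrature within each tissue group.
    tissue_sigma = {t: math.hypot(*vals) for t, vals in categories.items()}

    site_weights = {"lung": 0.5, "soft": 0.4, "bone": 0.1}  # e.g. a lung case (assumed)
    site_sigma = math.sqrt(sum((w * tissue_sigma[t]) ** 2
                               for t, w in site_weights.items()))
    print({t: round(s, 2) for t, s in tissue_sigma.items()}, round(site_sigma, 2))
    ```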

  13. A non-stationary cost-benefit based bivariate extreme flood estimation approach

    NASA Astrophysics Data System (ADS)

    Qi, Wei; Liu, Junguo

    2018-02-01

Cost-benefit analysis and flood frequency analysis have been integrated into a comprehensive framework to estimate cost-effective design values. However, previous cost-benefit based extreme flood estimation relies on stationarity assumptions and analyzes dependent flood variables separately. A Non-Stationary Cost-Benefit based bivariate design flood estimation (NSCOBE) approach is developed in this study to investigate the influence of non-stationarities in both the dependence of flood variables and the marginal distributions on extreme flood estimation. The dependence is modeled utilizing copula functions. Previous design flood selection criteria are not suitable for NSCOBE since they ignore the time-varying dependence of flood variables. Therefore, a risk calculation approach is proposed based on non-stationarities in both marginal probability distributions and copula functions. A case study with 54 years of observed data is utilized to illustrate the application of NSCOBE. Results show that NSCOBE can effectively integrate non-stationarities in both copula functions and marginal distributions into cost-benefit based design flood estimation. It is also found that there is a trade-off between the maximum probability of exceedance calculated from copula functions and from marginal distributions. This study for the first time provides a new approach towards a better understanding of the influence of non-stationarities in both copula functions and marginal distributions on extreme flood estimation, and could be beneficial to cost-benefit based non-stationary bivariate design flood estimation across the world.
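
    A minimal sketch of the copula machinery: a Gumbel copula links the marginal non-exceedance probabilities of two flood variables (say, peak and volume), and letting its dependence parameter drift in time makes the joint exceedance probability non-stationary. The parameter values and the trend are invented.

    ```python
    import math

    def gumbel_copula(u, v, theta):
        """Gumbel copula C(u, v); theta >= 1 controls dependence strength."""
        return math.exp(-(((-math.log(u)) ** theta
                           + (-math.log(v)) ** theta) ** (1 / theta)))

    def joint_exceedance(u, v, theta):
        """P(peak > x AND volume > y) given marginal non-exceedance probs u, v."""
        return 1 - u - v + gumbel_copula(u, v, theta)

    u = v = 0.99                      # marginal 100-year levels
    for t in (0, 25, 50):             # years into the design life
        theta = 1.5 + 0.01 * t        # dependence strengthening over time (assumed)
        print(t, round(joint_exceedance(u, v, theta), 5))
    ```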

  14. A random sampling approach for robust estimation of tissue-to-plasma ratio from extremely sparse data.

    PubMed

    Chu, Hui-May; Ette, Ene I

    2005-09-02

This study was performed to develop a new nonparametric approach for the estimation of a robust tissue-to-plasma ratio from extremely sparsely sampled paired data (i.e., one sample each from plasma and tissue per subject). The tissue-to-plasma ratio was estimated from paired/unpaired experimental data using the independent time points approach, area under the curve (AUC) values calculated with the naïve data averaging approach, and AUC values calculated using sampling-based approaches (e.g., the pseudoprofile-based bootstrap [PpbB] approach and the random sampling approach [our proposed approach]). The random sampling approach involves the use of a 2-phase algorithm. The convergence of the sampling/resampling approaches was investigated, as well as the robustness of the estimates produced by the different approaches. To evaluate the latter, new data sets were generated by introducing outlier(s) into the real data set. One to two concentration values were inflated by 10% to 40% from their original values to produce the outliers. Tissue-to-plasma ratios computed using the independent time points approach varied between 0 and 50 across time points. The ratio obtained from AUC values acquired using the naive data averaging approach was not associated with any measure of uncertainty or variability. Calculating the ratio without regard to pairing yielded poorer estimates. The random sampling and pseudoprofile-based bootstrap approaches yielded tissue-to-plasma ratios with uncertainty and variability. However, the random sampling approach, because of the 2-phase nature of its algorithm, yielded more robust estimates and required fewer replications. Therefore, a 2-phase random sampling approach is proposed for the robust estimation of tissue-to-plasma ratio from extremely sparsely sampled data.
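
    A generic sampling-based sketch of the AUC-ratio estimate from sparse paired data: resample subjects with replacement, assemble concentration-time profiles, and form the tissue-to-plasma ratio of trapezoidal AUCs. This is a plain bootstrap, not the paper's specific 2-phase algorithm, and the data are invented.

    ```python
    import numpy as np

    times  = np.array([0.5, 1, 2, 4, 8])                 # one subject per time point
    plasma = np.array([8.0, 6.5, 4.2, 2.1, 0.9])
    tissue = np.array([4.1, 3.9, 3.0, 1.6, 0.8])

    def auc(t, y):
        """Trapezoidal area under the concentration-time curve."""
        return float(np.sum(np.diff(t) * (y[1:] + y[:-1]) / 2.0))

    rng = np.random.default_rng(3)
    ratios = []
    for _ in range(2000):
        idx = np.sort(rng.integers(0, len(times), len(times)))  # resample subjects
        auc_p = auc(times[idx], plasma[idx])
        if auc_p > 0:
            ratios.append(auc(times[idx], tissue[idx]) / auc_p)
    print("tissue-to-plasma ratio:", np.mean(ratios), "+/-", np.std(ratios))
    ```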

  15. Estimating Precipitation Input to a Watershed by Combining Gauge and Radar Derived Observations

    NASA Astrophysics Data System (ADS)

    Ercan, M. B.; Goodall, J. L.

    2011-12-01

One challenge in creating an accurate watershed model is obtaining estimates of precipitation intensity over the watershed area. While precipitation measurements are generally available from gauging stations and radar instruments, both of these approaches for measuring precipitation have strengths and weaknesses. A typical way of addressing this challenge is to use gauged precipitation estimates to calibrate radar-based estimates; however, this study proposes a slightly different approach in which the optimal daily precipitation value is selected from either the gauged or the radar estimates based on the observed streamflow for that day. Our proposed approach is perhaps most relevant for modeling watersheds that do not have a nearby precipitation gauge, or for regions that experience convective storms that are often highly spatially variable. Using the Eno River watershed located in Orange County, NC, three different precipitation datasets were created to predict streamflow at the watershed outlet for the period 2005-2010 using the Soil and Water Assessment Tool (SWAT): (1) estimates based only on precipitation gauging stations, (2) estimates based only on gauge-corrected radar observations, and (3) the combination of precipitation estimates from the gauge and radar data determined using our proposed approach. The results show that the combined precipitation approach significantly improves streamflow predictions (Nash-Sutcliffe coefficient, E = 0.66) when compared to the gauged estimates alone (E = 0.47) and the radar-based estimates alone (E = 0.45). Our study was limited to one watershed; therefore, additional studies are needed to control for factors such as climate, ecology, and hydrogeology that will likely influence the results of the analysis.
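
    A minimal sketch of the selection rule and the skill score: for each day, keep the precipitation source whose simulated streamflow sits closer to the observation, and score results with the Nash-Sutcliffe efficiency E. The arrays below stand in for SWAT runs driven by gauge-only and radar-only precipitation.

    ```python
    import numpy as np

    def nse(sim, obs):
        """Nash-Sutcliffe efficiency: 1 is perfect, 0 matches the mean of obs."""
        obs = np.asarray(obs, float)
        return 1.0 - np.sum((np.asarray(sim) - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

    obs_flow     = np.array([10.0, 14.0, 30.0, 22.0, 12.0])  # observed streamflow
    sim_gauge    = np.array([11.0, 13.0, 18.0, 25.0, 12.5])  # hypothetical SWAT outputs
    sim_radar    = np.array([ 8.0, 15.5, 29.0, 16.0, 11.0])
    precip_gauge = np.array([ 5.0,  8.0, 20.0, 12.0,  3.0])
    precip_radar = np.array([ 4.0,  9.0, 31.0,  7.0,  2.5])

    pick_radar = np.abs(sim_radar - obs_flow) < np.abs(sim_gauge - obs_flow)
    combined_precip = np.where(pick_radar, precip_radar, precip_gauge)
    print("E gauge:", nse(sim_gauge, obs_flow), "E radar:", nse(sim_radar, obs_flow))
    print("combined daily precipitation:", combined_precip)
    ```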

  16. Benefit-based tree valuation

    Treesearch

    E.G. McPherson

    2007-01-01

    Benefit-based tree valuation provides alternative estimates of the fair and reasonable value of trees while illustrating the relative contribution of different benefit types. This study compared estimates of tree value obtained using cost- and benefit-based approaches. The cost-based approach used the Council of Landscape and Tree Appraisers trunk formula method, and...

  17. A Framework of Combining Case-Based Reasoning with a Work Breakdown Structure for Estimating the Cost of Online Course Production Projects

    ERIC Educational Resources Information Center

    He, Wu

    2014-01-01

    Currently, a work breakdown structure (WBS) approach is used as the most common cost estimation approach for online course production projects. To improve the practice of cost estimation, this paper proposes a novel framework to estimate the cost for online course production projects using a case-based reasoning (CBR) technique and a WBS. A…

  18. Estimating the Regional Economic Significance of Airports

    DTIC Science & Technology

    1992-09-01

following three options for estimating induced impacts: the economic base model, an econometric model, and a regional input-output model. One approach to... limitations, however, the economic base model has been widely used for regional economic analysis. A second approach is to develop an econometric model of... analysis is the principal statistical tool used to estimate the economic relationships. Regional econometric models are capable of estimating a single

  19. Estimating Soil Hydraulic Parameters using Gradient Based Approach

    NASA Astrophysics Data System (ADS)

    Rai, P. K.; Tripathi, S.

    2017-12-01

The conventional way of estimating the parameters of a differential equation is to minimize the error between the observations and their estimates. The estimates are produced from the forward solution (numerical or analytical) of the differential equation assuming a set of parameters. Parameter estimation using the conventional approach requires high computational cost, setting up of initial and boundary conditions, and formation of difference equations when the forward solution is obtained numerically. Gaussian process based approaches like Gaussian Process Ordinary Differential Equation (GPODE) and Adaptive Gradient Matching (AGM) have been developed to estimate the parameters of ordinary differential equations without explicitly solving them. Claims have been made that these approaches can straightforwardly be extended to partial differential equations; however, this has never been demonstrated. This study extends the AGM approach to PDEs and applies it to estimating the parameters of the Richards equation. Unlike the conventional approach, the AGM approach does not require setting up initial and boundary conditions explicitly, which is often difficult in real-world applications of the Richards equation. The developed methodology was applied to synthetic soil moisture data. It was seen that the proposed methodology can estimate the soil hydraulic parameters correctly and can be a potential alternative to the conventional method.
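
    A minimal sketch of the gradient-matching idea on a toy ODE dy/dt = -k*y: estimate the derivative directly from smoothed observations and choose k so the model right-hand side matches it in least squares, with no forward solve. The paper wraps this idea in Gaussian-process machinery and extends it to the Richards equation; the data here are synthetic.

    ```python
    import numpy as np

    t = np.linspace(0, 5, 60)
    rng = np.random.default_rng(4)
    y_obs = np.exp(-0.7 * t) + rng.normal(0, 0.01, t.size)   # true k = 0.7

    # Crude smoothing + finite-difference derivative stands in for a GP mean.
    kernel = np.ones(5) / 5.0
    y_smooth = np.convolve(y_obs, kernel, mode="same")
    dy_dt = np.gradient(y_smooth, t)

    # Least-squares match of dy/dt = -k*y, skipping the convolution edges.
    s = slice(3, -3)
    k_hat = -np.sum(y_smooth[s] * dy_dt[s]) / np.sum(y_smooth[s] ** 2)
    print("estimated k:", round(k_hat, 3))
    ```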

  20. Resolution of Forces and Strain Measurements from an Acoustic Ground Test

    NASA Technical Reports Server (NTRS)

    Smith, Andrew M.; LaVerde, Bruce T.; Hunt, Ronald; Waldon, James M.

    2013-01-01

Conservatism in typical vibration tests was demonstrated: vibration testing at the component level produced force reactions conservative by approximately a factor of 4 (approx. 12 dB) compared to the integrated acoustic test in 2 out of 3 axes. Reaction forces estimated at the base of equipment using a finite element based method were validated: the FEM-based estimate of interface forces may be adequate to guide development of vibration test criteria with less conservatism. Element forces estimated in secondary-structure struts were validated: the finite element approach provided the best estimate of axial strut forces in the frequency range below 200 Hz, where a rigid lumped-mass assumption for the entire electronics box was valid; models with enough fidelity to represent the diminishing apparent mass of equipment are better suited for estimating force reactions across the full frequency range. Forward work: demonstrate the reduction in conservatism provided by the current force-limited approach and by an FEM-guided approach, and validate the proposed CMS approach for estimating coupled response from uncoupled system characteristics for vibroacoustics.

  1. A review on prognostics approaches for remaining useful life of lithium-ion battery

    NASA Astrophysics Data System (ADS)

    Su, C.; Chen, H. J.

    2017-11-01

Lithium-ion (Li-ion) batteries are core components of various industrial systems, including satellites, spacecraft and electric vehicles. The mechanisms of performance degradation and the estimation of remaining useful life (RUL) are closely tied to the operating state and reliability of such systems. Furthermore, RUL prediction for Li-ion batteries is crucial for operation scheduling, spare parts management and maintenance decisions for these kinds of systems. In recent years, performance degradation prognostics and RUL estimation approaches have become a research focus for Li-ion batteries. This paper summarizes the approaches used in Li-ion battery RUL estimation, classifying them into three categories: model-based, data-based and hybrid approaches. The key issues and future trends for battery RUL estimation are also discussed.

  2. Hedonic approaches based on spatial econometrics and spatial statistics: application to evaluation of project benefits

    NASA Astrophysics Data System (ADS)

    Tsutsumi, Morito; Seya, Hajime

    2009-12-01

This study discusses the theoretical foundation of the application of spatial hedonic approaches—the hedonic approach employing spatial econometrics and/or spatial statistics—to benefits evaluation. The study highlights the limitations of the spatial econometrics approach since it uses a spatial weight matrix that is not employed by the spatial statistics approach. Further, the study presents empirical analyses by applying the Spatial Autoregressive Error Model (SAEM), which is based on the spatial econometrics approach, and the Spatial Process Model (SPM), which is based on the spatial statistics approach. SPMs are conducted based on both isotropy and anisotropy and applied to different mesh sizes. The empirical analysis reveals that the estimated benefits are quite different, especially between isotropic and anisotropic SPM and between isotropic SPM and SAEM; the estimated benefits are similar for SAEM and anisotropic SPM. The study also demonstrates that the mesh size does not affect the estimated amount of benefits. Finally, the study provides a confidence interval for the estimated benefits and raises an issue with regard to benefit evaluation.

  3. A Novel Continuous Blood Pressure Estimation Approach Based on Data Mining Techniques.

    PubMed

    Miao, Fen; Fu, Nan; Zhang, Yuan-Ting; Ding, Xiao-Rong; Hong, Xi; He, Qingyun; Li, Ye

    2017-11-01

Continuous blood pressure (BP) estimation using pulse transit time (PTT) is a promising method for unobtrusive BP measurement. However, the accuracy of this approach must be improved for it to be viable for a wide range of applications. This study proposes a novel continuous BP estimation approach that combines data mining techniques with a traditional mechanism-driven model. First, 14 features derived from simultaneous electrocardiogram and photoplethysmogram signals were extracted for beat-to-beat BP estimation. A genetic algorithm-based feature selection method was then used to select BP indicators for each subject. Multivariate linear regression and support vector regression were employed to develop the BP model. The accuracy and robustness of the proposed approach were validated for static, dynamic, and follow-up performance. Experimental results based on 73 subjects showed that the proposed approach exhibited excellent accuracy in static BP estimation, with a correlation coefficient and mean error of 0.852 and -0.001 ± 3.102 mmHg for systolic BP, and 0.790 and -0.004 ± 2.199 mmHg for diastolic BP. Similar performance was observed for dynamic BP estimation. The robustness results indicated that the estimation accuracy was somewhat lower one day after model construction but was relatively stable from one day to six months after construction. The proposed approach is superior to the state-of-the-art PTT-based model, with an approximately 2-mmHg reduction in the standard deviation at different time intervals, thus providing potentially novel insights for cuffless BP estimation.
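
    A minimal sketch of the modeling pipeline: select informative beat-to-beat indicators, then fit a regression on ECG/PPG-derived features. Synthetic numbers replace the real signals, and univariate selection stands in for the paper's genetic-algorithm feature selection.

    ```python
    import numpy as np
    from sklearn.feature_selection import SelectKBest, f_regression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVR

    rng = np.random.default_rng(5)
    X = rng.normal(size=(200, 14))                   # 14 beat-to-beat features
    sbp = 120 + 8 * X[:, 0] - 5 * X[:, 3] + rng.normal(0, 3, 200)  # synthetic SBP

    model = make_pipeline(
        SelectKBest(f_regression, k=5),              # keep the 5 strongest indicators
        StandardScaler(),
        SVR(kernel="rbf", C=10.0),
    )
    model.fit(X[:150], sbp[:150])
    pred = model.predict(X[150:])
    print("mean error (mmHg):", round(float(np.mean(pred - sbp[150:])), 3))
    ```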

  4. Harmonizing estimates of forest land area from national-level forest inventory and satellite imagery

    Treesearch

    Bonnie Ruefenacht; Mark D. Nelson; Mark Finco

    2009-01-01

    Estimates of forest land area are derived both from national-level forest inventories and satellite image-based map products. These estimates can differ substantially within subregional extents (e.g., states or provinces) primarily due to differences in definitions of forest land between inventory- and image-based approaches. We present a geospatial modeling approach...

  5. An Evidence-Based Approach to Estimating the National and State Costs of PreK-3rd. FCD Policy Brief Advancing PK-3rd. No.10

    ERIC Educational Resources Information Center

    Picus, Lawrence O.; Odden, Allan; Goetz, Michael

    2009-01-01

    This study estimates the costs of providing a high-quality PreK-3rd education approach in all 50 states plus the District of Columbia. Relying on an Evidence-Based approach to school finance adequacy, it identifies the staffing resources needed to offer high-quality integrated PreK-3rd programs and then estimates the costs of those resources. By…

  6. Reconciling estimates of the contemporary North American carbon balance among terrestrial biosphere models, atmospheric inversions, and a new approach for estimating net ecosystem exchange from inventory-based data

    Treesearch

    Daniel J. Hayes; David P. Turner; Graham Stinson; A. David Mcguire; Yaxing Wei; Tristram O. West; Linda S. Heath; Bernardus Dejong; Brian G. McConkey; Richard A. Birdsey; Werner A. Kurz; Andrew R. Jacobson; Deborah N. Huntzinger; Yude Pan; W. Mac Post; Robert B. Cook

    2012-01-01

    We develop an approach for estimating net ecosystem exchange (NEE) using inventory-based information over North America (NA) for a recent 7-year period (ca. 2000-2006). The approach notably retains information on the spatial distribution of NEE, or the vertical exchange between land and atmosphere of all non-fossil fuel sources and sinks of CO2,...

  7. Evaluation of Modeling Schemes to Estimate Evapotranspiration and Root Zone Soil Water Content over Vineyard using a Scintillometer and Remotely Sensed Surface Energy Balance

    NASA Astrophysics Data System (ADS)

    Geli, H. M. E.; Gonzalez-Piqueras, J.; Isidro, C., Sr.

    2016-12-01

Actual crop evapotranspiration (ETa) and root zone soil water content (SMC) are key operational variables for monitoring water consumption and water stress conditions to improve vineyard grape productivity and quality. This analysis evaluates the estimation of ETa and SMC based on two modeling approaches. The first approach is a hybrid model that couples a thermal-based two-source energy balance (TSEB) model (Norman et al. 1995) with a water balance model to estimate the two variables (Geli 2012). The second approach is based on Large Aperture Scintillometer (LAS) estimates of sensible heat flux. The LAS-based estimates of sensible heat flux were used to calculate latent heat flux as the residual of the surface energy balance equation on an hourly basis, and the hourly values were converted to daily ETa. The ETa calculated from the scintillometer was then coupled with the water balance approach to provide updated ETa_LAS and SMC_LAS. Both the estimates of ETa and SMC based on LAS (i.e., ETa_LAS and SMC_LAS) and those based on TSEB (ETa_TSEB and SMC_TSEB) were compared with ground-based observations from eddy covariance and soil water content measurements at multiple depths. The study site is an irrigated vineyard located in central Spain with heterogeneous surface conditions in terms of irrigation practices; the ground-based observations over the vineyard were collected during the summer of 2007. Preliminary results of the inter-comparison suggest relatively good agreement between both modeling approaches and the ground-based observations, with RMSE lower than 1.2 mm/day for ETa and lower than 20% for SMC. References: Norman, J. M., Kustas, W. P., & Humes, K. S. (1995). A two-source approach for estimating soil and vegetation energy fluxes in observations of directional radiometric surface temperature. Agricultural and Forest Meteorology, 77, 263-293. Geli, Hatim M. E. (2012). Modeling spatial surface energy fluxes of agricultural and riparian vegetation using remote sensing. Ph.D. dissertation, Department of Civil and Environmental Engineering, Utah State University.
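
    A minimal sketch of the LAS-based step: latent heat flux taken as the residual of the surface energy balance, LE = Rn - G - H, with H from the scintillometer, then converted to an hourly ET depth. The flux values below are illustrative.

    ```python
    LAMBDA = 2.45e6  # latent heat of vaporization, J/kg

    def et_hourly(rn, g, h):
        """Hourly ET (mm) from net radiation, soil heat flux, sensible heat (W/m^2)."""
        le = rn - g - h                        # residual of the energy balance
        return max(le, 0.0) * 3600.0 / LAMBDA  # kg/m^2 of water == mm

    hours = [(620, 80, 210), (540, 70, 190), (400, 55, 160)]  # (Rn, G, H) samples
    daily_eta = sum(et_hourly(*h) for h in hours)  # extend over all daylight hours
    print("partial-day ETa (mm):", round(daily_eta, 2))
    ```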

  8. Acceleration estimation using a single GPS receiver for airborne scalar gravimetry

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaohong; Zheng, Kai; Lu, Cuixian; Wan, Jiakuan; Liu, Zhanke; Ren, Xiaodong

    2017-11-01

Kinematic acceleration estimated using the global positioning system (GPS) is significant for airborne scalar gravimetry. As the conventional approach based on the differential global positioning system (DGPS) presents several drawbacks, including additional cost and the impracticality of setting up nearby base stations in challenging environments, we introduce an alternative approach, Modified Kin-VADASE (MKin-VADASE), which is based on a modified Kin-VADASE approach and does not require ground base stations. In this approach, the aircraft velocities are first estimated with the modified Kin-VADASE. The accelerations are then obtained from the velocity estimates using a Taylor approximation differentiator. The impact of carrier-phase measurement noise and satellite ephemeris errors on the acceleration estimates is investigated carefully in the frequency domain with the Fast Fourier Transform (FFT) algorithm. The results show that the satellite clock products have a significant impact on the acceleration estimates. The performance of MKin-VADASE, PPP, and DGPS is then validated using flight tests carried out in Shanxi Province, China. The accelerations are estimated using the three approaches, then used to calculate the gravity disturbances. Finally, the analysis of crossover differences and terrestrial gravity data are used to evaluate the accuracy of the gravity disturbance estimates. The results show that the performances of MKin-VADASE, PPP and DGPS are comparable, but the computational complexity of MKin-VADASE is greatly reduced with regard to PPP and DGPS. For the results of the three approaches, the RMS of crossover differences of gravity disturbance estimates is approximately 1-1.5 mGal at a spatial resolution of 3.5 km (half wavelength) after crossover adjustment, and the accuracy is approximately 3-4 mGal with respect to terrestrial gravity data.
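
    A minimal sketch of the differentiation step: accelerations from a velocity series via a central-difference (first-order Taylor) scheme. A real implementation would first derive velocities from carrier-phase variations; a synthetic velocity series stands in here.

    ```python
    import numpy as np

    dt = 1.0                                   # s, epoch interval
    t = np.arange(0, 120, dt)
    vel = 50 + 2.0 * np.sin(0.05 * t)          # m/s, synthetic aircraft velocity

    acc = np.empty_like(vel)
    acc[1:-1] = (vel[2:] - vel[:-2]) / (2 * dt)      # central difference
    acc[0], acc[-1] = acc[1], acc[-2]                # pad the end points
    print("peak acceleration (m/s^2):", round(float(np.abs(acc).max()), 3))
    ```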

  9. A full-spectral Bayesian reconstruction approach based on the material decomposition model applied in dual-energy computed tomography.

    PubMed

    Cai, C; Rodet, T; Legoupil, S; Mohammad-Djafari, A

    2013-11-01

Dual-energy computed tomography (DECT) makes it possible to obtain two fractions of basis materials without segmentation: the soft-tissue equivalent water fraction and the hard-matter equivalent bone fraction. Practical DECT measurements are usually obtained with polychromatic x-ray beams. Existing reconstruction approaches based on linear forward models that do not account for the beam polychromaticity fail to estimate the correct decomposition fractions and result in beam-hardening artifacts (BHA). The existing BHA correction approaches either need to refer to calibration measurements or suffer from the noise amplification caused by the negative-log preprocessing and the ill-conditioned water and bone separation problem. To overcome these problems, statistical DECT reconstruction approaches based on nonlinear forward models that account for the beam polychromaticity show great potential for giving accurate fraction images. This work proposes a full-spectral Bayesian reconstruction approach which allows the reconstruction of high-quality fraction images from ordinary polychromatic measurements. This approach is based on a Gaussian noise model with unknown variance assigned directly to the projections without taking the negative log. Referring to Bayesian inference, the decomposition fractions and observation variance are estimated using the joint maximum a posteriori (MAP) estimation method. Subject to an adaptive prior model assigned to the variance, the joint estimation problem is simplified into a single estimation problem, transforming the joint MAP estimation problem into a minimization problem with a nonquadratic cost function. To solve it, the use of a monotone conjugate gradient algorithm with suboptimal descent steps is proposed. The performance of the proposed approach is analyzed with both simulated and experimental data. The results show that the proposed Bayesian approach is robust to noise and materials. Accurate spectrum information about the source-detector system is, however, necessary; when dealing with experimental data, the spectrum can be predicted by a Monte Carlo simulator. For materials between water and bone, separation errors of less than 5% are observed on the estimated decomposition fractions. The proposed approach is a statistical reconstruction approach based on a nonlinear forward model accounting for the full beam polychromaticity and applied directly to the projections without taking the negative log. Compared to the approaches based on linear forward models and the BHA correction approaches, it has advantages in noise robustness and reconstruction accuracy.

  10. Methodology for Estimating Total Automotive Manufacturing Costs

    DOT National Transportation Integrated Search

    1983-04-01

    A number of methodologies for estimating manufacturing costs have been developed. This report discusses the different approaches and shows that an approach to estimating manufacturing costs in the automobile industry based on surrogate plants is pref...

  11. Comparing top-down and bottom-up costing approaches for economic evaluation within social welfare.

    PubMed

    Olsson, Tina M

    2011-10-01

This study compares two approaches to the estimation of social welfare intervention costs, one "top-down" and the other "bottom-up", for a group of social welfare clients with severe problem behavior participating in a randomized trial. Intervention costs incurred over a two-year period were compared by intervention category (foster care placement, institutional placement, mentorship services, individual support services and structured support services), estimation method (price, micro-costing, average cost) and treatment group (intervention, control). Analyses are based upon 2007 costs for 156 individuals receiving 404 interventions. Overall, both approaches were found to produce reliable estimates of intervention costs at the group level but not at the individual level. As the choice of approach can greatly impact the estimate of the mean difference, adjustment based on estimation approach should be incorporated into sensitivity analyses. Analysts must take care in assessing the purpose and perspective of the analysis when choosing a costing approach for use within economic evaluation.

  12. Towards an Early Software Effort Estimation Based on Functional and Non-Functional Requirements

    NASA Astrophysics Data System (ADS)

    Kassab, Mohamed; Daneva, Maya; Ormandjieva, Olga

The increased awareness of non-functional requirements as a key to software project and product success makes explicit the need to include them in any software project effort estimation activity. However, the existing approaches to defining size-based effort relationships still pay insufficient attention to this need. This paper presents a flexible, yet systematic, approach to early requirements-based effort estimation based on a Non-Functional Requirements ontology. It complementarily uses one standard functional size measurement model and a linear regression technique. We report on a case study which illustrates the application of our solution approach in context and also helps evaluate our experiences in using it.
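
    A minimal sketch of the estimation backbone: a linear regression of effort on functional size with a multiplicative adjustment for non-functional requirements. The historical data and the form of the NFR multiplier are invented; the paper derives its adjustments from an NFR ontology.

    ```python
    import numpy as np

    size_fp   = np.array([120, 250, 310, 480, 600])      # functional size (e.g. FP)
    effort_ph = np.array([900, 1700, 2300, 3400, 4100])  # person-hours, past projects

    b1, b0 = np.polyfit(size_fp, effort_ph, 1)           # effort = b0 + b1 * size

    def estimate_effort(size, nfr_multiplier=1.0):
        """Effort estimate; nfr_multiplier > 1 reflects demanding NFRs (assumed form)."""
        return (b0 + b1 * size) * nfr_multiplier

    print("new project:", round(estimate_effort(400, nfr_multiplier=1.15)), "person-hours")
    ```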

  13. On the role of modeling choices in estimation of cerebral aneurysm wall tension.

    PubMed

    Ramachandran, Manasi; Laakso, Aki; Harbaugh, Robert E; Raghavan, Madhavan L

    2012-11-15

To assess various approaches to estimating pressure-induced wall tension in intracranial aneurysms (IA) and their effect on the stratification of subjects in a study population, three-dimensional models of 26 IAs (9 ruptured and 17 unruptured) were segmented from Computed Tomography Angiography (CTA) images. Wall tension distributions in these patient-specific geometric models were estimated using various approaches, differing in the morphological detail utilized or the modeling choices made. For all subjects in the study population, the peak wall tension was estimated using all investigated approaches and compared to a reference approach: nonlinear finite element (FE) analysis using the Fung anisotropic model with regionally varying material fiber directions. Comparisons between approaches focused on the similarity in stratification of IAs within the population based on peak wall tension. The stratification of IAs by tension deviated to some extent from the reference approach as less geometric detail was incorporated. Interestingly, the size of the cerebral aneurysm as captured by a single size measure was the predominant determinant of peak wall tension-based stratification. Within FE approaches, simplifications to isotropy, material linearity and geometric linearity caused a gradual deviation from the reference estimates, but it was minimal and resulted in little to no impact on stratifications of IAs. Differences in modeling choices made without patient-specificity in the parameters of such models had little impact on tension-based IA stratification in this population. Increasing morphological detail did impact the estimated peak wall tension, but size was the predominant determinant. Copyright © 2012 Elsevier Ltd. All rights reserved.

  14. Evaluations of carbon fluxes estimated by top-down and bottom-up approaches

    NASA Astrophysics Data System (ADS)

    Murakami, K.; Sasai, T.; Kato, S.; Hiraki, K.; Maksyutov, S. S.; Yokota, T.; Nasahara, K.; Matsunaga, T.

    2013-12-01

There are two approaches to estimating carbon fluxes using satellite observation data, referred to as top-down and bottom-up. Many uncertainties still remain in these carbon flux estimations, because the true values of the carbon fluxes are unclear and estimations vary according to the type of model (e.g., a transport model, a process-based model) and the input data. The CO2 fluxes in these approaches are estimated from different satellite data: the distribution of CO2 concentration in the top-down approach and land cover information (e.g., leaf area, surface temperature) in the bottom-up approach. Satellite-based CO2 flux estimations with reduced uncertainty can be used efficiently for identification of large emission areas and of the carbon stocks of forest areas. In this study, we evaluated the carbon flux estimates from the two approaches by comparing them with each other. The Greenhouse gases Observing SATellite (GOSAT) has been observing atmospheric CO2 concentrations since 2009. The GOSAT L4A data product comprises monthly CO2 flux estimations for 64 sub-continental regions, estimated using GOSAT FTS SWIR L2 XCO2 data and an atmospheric tracer transport model. We used the GOSAT L4A CO2 flux as the top-down estimate and net ecosystem production (NEP) estimated by the diagnostic-type biosphere model BEAMS as the bottom-up estimate. BEAMS NEP covers only the natural land CO2 flux, so we used the GOSAT L4A CO2 flux after subtraction of anthropogenic CO2 emissions and the oceanic CO2 flux. We compared the two approaches in the temperate north-east Asia region, which is covered by grassland and cropland (about 60%), forest (about 20%) and bare ground (about 20%). The temporal variations over a one-year period showed similar trends between the two approaches. Furthermore, we show the comparison of CO2 flux estimations in other sub-continental regions.
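
    A minimal sketch of the bookkeeping behind the comparison: subtracting anthropogenic emissions and the oceanic flux from the inversion's regional flux leaves the natural land flux, which can then be set against the model NEP (mind the opposite sign conventions). All monthly values below are invented, in arbitrary units.

    ```python
    anthropogenic = [0.9, 0.9, 1.0]     # fossil fuel emissions (source, +)
    oceanic       = [-0.2, -0.1, -0.2]  # ocean uptake (sink, -)
    l4a_total     = [0.3, -0.5, -0.9]   # inversion regional flux (+ = to atmosphere)
    beams_nep     = [0.5, 1.3, 2.0]     # model NEP (+ = land uptake)

    natural_land = [t - a - o for t, a, o in zip(l4a_total, anthropogenic, oceanic)]
    for month, (inv, nep) in enumerate(zip(natural_land, beams_nep), 1):
        # Flip NEP's sign so both numbers mean "net flux to the atmosphere".
        print(f"month {month}: top-down {inv:+.1f} vs bottom-up {-nep:+.1f}")
    ```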

  15. Estimating trace-suspect match probabilities for singleton Y-STR haplotypes using coalescent theory.

    PubMed

    Andersen, Mikkel Meyer; Caliebe, Amke; Jochens, Arne; Willuweit, Sascha; Krawczak, Michael

    2013-02-01

Estimation of match probabilities for singleton haplotypes of lineage markers, i.e. for haplotypes observed only once in a reference database augmented by a suspect profile, is an important problem in forensic genetics. We compared the performance of four estimators of singleton match probabilities for Y-STRs, namely the count estimate, both with and without Brenner's so-called 'kappa correction', the surveying estimate, and a previously proposed, but rarely used, coalescent-based approach implemented in the BATWING software. Extensive simulation with BATWING of the underlying population history, haplotype evolution and subsequent database sampling revealed that the coalescent-based approach is characterized by lower bias and lower mean squared error than the uncorrected count estimator and the surveying estimator. Moreover, in contrast to the two count estimators, both the surveying and the coalescent-based approach exhibited a good correlation between the estimated and true match probabilities. However, although its overall performance is thus better than that of any other recognized method, the coalescent-based estimator is still computation-intensive, on the verge of general impracticability. Its application in forensic practice therefore will have to be limited to small reference databases, or to isolated cases of particular interest, until more powerful algorithms for coalescent simulation have become available. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  16. Investigation on the fiber based approach to estimate the axial load carrying capacity of the circular concrete filled steel tube (CFST)

    NASA Astrophysics Data System (ADS)

    Piscesa, B.; Attard, M. M.; Suprobo, P.; Samani, A. K.

    2017-11-01

External confining devices are often used to enhance the strength and ductility of reinforced concrete columns. Among the available external confining devices, the steel tube is one of the most widely used in construction. However, the steel tube has drawbacks such as local buckling, which need to be considered when estimating the axial load carrying capacity of the concrete-filled steel tube (CFST) column. To address this problem in design, Eurocode 4 provides guidelines to estimate the effective yield strength of the steel tube material. To study the behavior of CFST columns, in this paper, a non-linear analysis using a fiber-based approach was conducted. The use of the fiber-based approach allows engineers to predict not only the axial load carrying capacity but also the complete load-deformation curve of CFST columns for a known confining pressure. In the proposed fiber-based approach, an inverse analysis is used to estimate the constant confining pressure, similar to design-oriented models. This paper also presents comparisons of the fiber-based model with experimental results and with 3D non-linear finite element analysis.
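
    A minimal sketch of a fiber-based axial capacity estimate for a circular CFST: the concrete core is split into ring fibers and the tube into a steel ring, and the axial load at a given shortening strain is the sum of fiber stress times area. The constitutive laws here are deliberately crude placeholders (confinement-capped concrete strength, elastic-perfectly plastic steel), not the paper's models, and all section properties are invented.

    ```python
    import math

    D, t = 200.0, 5.0                # mm, outer diameter and tube thickness
    fc, fy, Es = 40.0, 355.0, 2.0e5  # MPa
    k_conf = 1.2                     # assumed confinement enhancement factor

    def axial_load(strain, n_rings=50):
        r_core = D / 2 - t
        load = 0.0
        for i in range(n_rings):                           # concrete ring fibers
            r1, r2 = r_core * i / n_rings, r_core * (i + 1) / n_rings
            area = math.pi * (r2 ** 2 - r1 ** 2)
            sigma_c = min(Es / 10 * strain, k_conf * fc)   # crude stress-strain law
            load += sigma_c * area
        a_steel = math.pi * ((D / 2) ** 2 - r_core ** 2)   # steel tube fiber
        load += min(Es * strain, fy) * a_steel             # elastic-perfectly plastic
        return load / 1e3                                  # kN

    print("axial load at 0.3% strain:", round(axial_load(0.003)), "kN")
    ```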

  17. Incremental inverse kinematics based vision servo for autonomous robotic capture of non-cooperative space debris

    NASA Astrophysics Data System (ADS)

    Dong, Gangqi; Zhu, Z. H.

    2016-04-01

This paper proposes a new incremental inverse kinematics based vision servo approach for robotic manipulators to capture a non-cooperative target autonomously. The target's pose and motion are estimated by a vision system using photogrammetry integrated with an extended Kalman filter (EKF). Based on the estimated pose and motion of the target, the instantaneous desired position of the end-effector is predicted by inverse kinematics, and the robotic manipulator is moved incrementally from its current configuration subject to the joint speed limits. This approach effectively eliminates the multiple solutions in the inverse kinematics and increases the robustness of the control algorithm. The proposed approach is validated by a hardware-in-the-loop simulation, where the pose and motion of the non-cooperative target are estimated by a real vision system. The simulation results demonstrate the effectiveness and robustness of the proposed estimation approach for the target and of the incremental control strategy for the robotic manipulator.
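
    A minimal sketch of the incremental scheme on a planar 2-link arm: each cycle moves the joints one speed-limited Jacobian-pseudoinverse step toward the end-effector position predicted from the target's estimated motion. The link lengths, speed limits and target path are invented.

    ```python
    import numpy as np

    L1 = L2 = 1.0
    dt, qdot_max = 0.02, 0.5          # s, rad/s joint speed limit

    def fk(q):
        return np.array([L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
                         L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])])

    def jacobian(q):
        s1, c1 = np.sin(q[0]), np.cos(q[0])
        s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
        return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                         [ L1 * c1 + L2 * c12,  L2 * c12]])

    q = np.array([0.3, 0.8])
    for k in range(300):
        target = np.array([1.2 + 0.001 * k, 0.6])        # estimated target motion
        dq = np.linalg.pinv(jacobian(q)) @ (target - fk(q))
        dq = np.clip(dq, -qdot_max * dt, qdot_max * dt)  # incremental, speed-limited
        q = q + dq
    print("tracking error:", np.linalg.norm(fk(q) - target))
    ```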

  18. Hubble Space Telescope Angular Velocity Estimation During the Robotic Servicing Mission

    NASA Technical Reports Server (NTRS)

    Thienel, Julie K.; Queen, Steven Z.; VanEepoel, John M.; Sanner, Robert M.

    2005-01-01

    During the Hubble Robotic Servicing Mission, the Hubble Space Telescope (HST) attitude and rates are necessary to achieve the capture of HST by the Hubble Robotic Vehicle (HRV). The attitude and rates must be determined without the HST gyros or HST attitude estimates. The HRV will be equipped with vision-based sensors, capable of estimating the relative attitude between HST and HRV. The HST attitude is derived from the measured relative attitude and the HRV computed inertial attitude. However, the relative rate between HST and HRV cannot be measured directly. Therefore, the HST rate with respect to inertial space is not known. Two approaches are developed to estimate the HST rates. Both methods utilize the measured relative attitude and the HRV inertial attitude and rates. First, a nonlinear estimator is developed. The nonlinear approach estimates the HST rate through an estimation of the inertial angular momentum. Second, a linearized approach is developed. The linearized approach is based on more traditional Extended Kalman filter techniques. Simulation test results for both methods are given.
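
    One way to read the momentum-based estimator, sketched below under assumptions not spelled out in the abstract: for a torque-free rigid body, the inertial angular momentum is constant, so once it has been estimated the body rate follows from the known inertia and the derived HST attitude. The inertia and momentum values are invented placeholders.

    ```python
    import numpy as np

    J = np.diag([36046.0, 86868.0, 93848.0])     # inertia (kg m^2, invented)
    h_inertial = np.array([100.0, -50.0, 80.0])  # estimated momentum (N m s, invented)

    def body_rate(R_bi, h):
        """omega = J^{-1} (R_bi @ h); R_bi rotates inertial vectors into the body frame."""
        return np.linalg.solve(J, R_bi @ h)

    R_bi = np.eye(3)  # placeholder attitude from the relative-attitude measurements
    print("HST body rate (rad/s):", body_rate(R_bi, h_inertial))
    ```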

  19. A Systematic Approach for Model-Based Aircraft Engine Performance Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Garg, Sanjay

    2010-01-01

    A requirement for effective aircraft engine performance estimation is the ability to account for engine degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. This paper presents a linear point design methodology for minimizing the degradation-induced error in model-based aircraft engine performance estimation applications. The technique specifically focuses on the underdetermined estimation problem, where there are more unknown health parameters than available sensor measurements. A condition for Kalman filter-based estimation is that the number of health parameters estimated cannot exceed the number of sensed measurements. In this paper, the estimated health parameter vector will be replaced by a reduced order tuner vector whose dimension is equivalent to the sensed measurement vector. The reduced order tuner vector is systematically selected to minimize the theoretical mean squared estimation error of a maximum a posteriori estimator formulation. This paper derives theoretical estimation errors at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the estimation accuracy achieved through conventional maximum a posteriori and Kalman filter estimation approaches. Maximum a posteriori estimation results demonstrate that reduced order tuning parameter vectors can be found that approximate the accuracy of estimating all health parameters directly. Kalman filter estimation results based on the same reduced order tuning parameter vectors demonstrate that significantly improved estimation accuracy can be achieved over the conventional approach of selecting a subset of health parameters to serve as the tuner vector. However, additional development is necessary to fully extend the methodology to Kalman filter-based estimation applications.

  20. Estimating daily climatologies for climate indices derived from climate model data and observations

    PubMed Central

    Mahlstein, Irina; Spirig, Christoph; Liniger, Mark A; Appenzeller, Christof

    2015-01-01

    Climate indices help to describe the past, present, and future climate. They are usually more closely related to possible impacts and are therefore more illustrative to users than simple climate means. Indices are often based on daily data series and thresholds. It is shown that percentile-based thresholds are sensitive to the method of computation, as are the climatological daily mean and the daily standard deviation, which are used for bias corrections of daily climate model data. Sample size issues in either the observed reference period or the model data lead to uncertainties in these estimations. A large number of past ensemble seasonal forecasts, called hindcasts, is used to explore these sampling uncertainties and to compare two different approaches. Based on a perfect model approach, it is shown that a fitting approach can substantially improve the estimates of daily climatologies of percentile-based thresholds over land areas, as well as of the mean and the variability. These improvements are relevant for bias removal in long-range forecasts or predictions of climate indices based on percentile thresholds, and the method also shows potential for use in climate change studies. Key Points: more robust estimates of daily climate characteristics; statistical fitting approach; based on a perfect model approach. PMID:26042192
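
    The abstract does not spell out the fitting approach; a common way to obtain smooth daily climatologies from short records is a least-squares fit of a few annual harmonics, sketched below under that assumption:

    ```python
    import numpy as np

    def fit_daily_climatology(doy, values, n_harmonics=2):
        """Least-squares fit of annual harmonics to daily observations.
        doy: day of year (1..365) of each observation; values: the data."""
        t = 2 * np.pi * (np.asarray(doy) - 1) / 365.0
        cols = [np.ones_like(t)]
        for k in range(1, n_harmonics + 1):
            cols += [np.cos(k * t), np.sin(k * t)]
        coef, *_ = np.linalg.lstsq(np.column_stack(cols),
                                   np.asarray(values, float), rcond=None)
        td = 2 * np.pi * np.arange(365) / 365.0     # evaluate for each day
        Xd = np.column_stack([np.ones_like(td)] +
                             [f(k * td)
                              for k in range(1, n_harmonics + 1)
                              for f in (np.cos, np.sin)])
        return Xd @ coef

    rng = np.random.default_rng(0)
    doy = np.tile(np.arange(1, 366), 20)            # 20 "years" of daily data
    y = 10 + 8 * np.sin(2 * np.pi * (doy - 100) / 365) + rng.normal(0, 3, doy.size)
    print(fit_daily_climatology(doy, y)[:5])        # smooth daily climatology
    ```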

  1. Comparison of spatial association approaches for landscape mapping of soil organic carbon stocks

    NASA Astrophysics Data System (ADS)

    Miller, B. A.; Koszinski, S.; Wehrhan, M.; Sommer, M.

    2015-03-01

    The distribution of soil organic carbon (SOC) can be variable at small analysis scales, but consideration of its role in regional and global issues demands the mapping of large extents. There are many different strategies for mapping SOC, among which are to model the variables needed to calculate the SOC stock indirectly or to model the SOC stock directly. The purpose of this research is to compare direct and indirect approaches to mapping SOC stocks from rule-based, multiple linear regression models applied at the landscape scale via spatial association. The final products for both strategies are high-resolution maps of SOC stocks (kg m-2), covering an area of 122 km2, with accompanying maps of estimated error. For the direct modelling approach, the estimated error map was based on the internal error estimations from the model rules. For the indirect approach, the estimated error map was produced by spatially combining the error estimates of component models via standard error propagation equations. We compared these two strategies for mapping SOC stocks on the basis of the qualities of the resulting maps as well as the magnitude and distribution of the estimated error. The direct approach produced a map with less spatial variation than the map produced by the indirect approach. The increased spatial variation represented by the indirect approach improved R2 values for the topsoil and subsoil stocks. Although the indirect approach had a lower mean estimated error for the topsoil stock, the mean estimated error for the total SOC stock (topsoil + subsoil) was lower for the direct approach. For these reasons, we recommend the direct approach to modelling SOC stocks be considered a more conservative estimate of the SOC stocks' spatial distribution.
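
    The component models are not given in the abstract, but the indirect strategy's error map rests on standard first-order error propagation. A minimal sketch for a SOC stock computed as bulk density x SOC concentration x depth, assuming independent errors, is:

    ```python
    import numpy as np

    def soc_stock_error(bd, oc, depth, sd_bd, sd_oc, sd_depth=0.0):
        """First-order error propagation for stock = BD * OC * depth.
        bd: bulk density (kg m-3), oc: SOC mass fraction (kg kg-1), depth (m)."""
        stock = bd * oc * depth                       # kg m-2
        rel2 = (sd_bd / bd) ** 2 + (sd_oc / oc) ** 2  # relative variances add
        if sd_depth:
            rel2 += (sd_depth / depth) ** 2           # ... for a product of
        return stock, stock * np.sqrt(rel2)           # independent factors

    print(soc_stock_error(bd=1300, oc=0.015, depth=0.3, sd_bd=100, sd_oc=0.003))
    ```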

  2. Comparison of spatial association approaches for landscape mapping of soil organic carbon stocks

    NASA Astrophysics Data System (ADS)

    Miller, B. A.; Koszinski, S.; Wehrhan, M.; Sommer, M.

    2014-11-01

    The distribution of soil organic carbon (SOC) can be variable at small analysis scales, but consideration of its role in regional and global issues demands the mapping of large extents. There are many different strategies for mapping SOC, among which are to model the variables needed to calculate the SOC stock indirectly or to model the SOC stock directly. The purpose of this research is to compare direct and indirect approaches to mapping SOC stocks from rule-based, multiple linear regression models applied at the landscape scale via spatial association. The final products for both strategies are high-resolution maps of SOC stocks (kg m-2), covering an area of 122 km2, with accompanying maps of estimated error. For the direct modelling approach, the estimated error map was based on the internal error estimations from the model rules. For the indirect approach, the estimated error map was produced by spatially combining the error estimates of component models via standard error propagation equations. We compared these two strategies for mapping SOC stocks on the basis of the qualities of the resulting maps as well as the magnitude and distribution of the estimated error. The direct approach produced a map with less spatial variation than the map produced by the indirect approach. The increased spatial variation represented by the indirect approach improved R2 values for the topsoil and subsoil stocks. Although the indirect approach had a lower mean estimated error for the topsoil stock, the mean estimated error for the total SOC stock (topsoil + subsoil) was lower for the direct approach. For these reasons, we recommend the direct approach to modelling SOC stocks be considered a more conservative estimate of the SOC stocks' spatial distribution.

  3. An Empirical Comparison of Heterogeneity Variance Estimators in 12,894 Meta-Analyses

    ERIC Educational Resources Information Center

    Langan, Dean; Higgins, Julian P. T.; Simmonds, Mark

    2015-01-01

    Heterogeneity in meta-analysis is most commonly estimated using a moment-based approach described by DerSimonian and Laird. However, this method has been shown to produce biased estimates. Alternative methods to estimate heterogeneity include the restricted maximum likelihood approach and those proposed by Paule and Mandel, Sidik and Jonkman, and…
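
    For reference, the DerSimonian and Laird moment estimator mentioned here is short enough to state in full; a direct implementation:

    ```python
    import numpy as np

    def dersimonian_laird_tau2(y, v):
        """Moment-based heterogeneity estimate from effect sizes y and
        their within-study variances v (DerSimonian & Laird, 1986)."""
        y, v = np.asarray(y, float), np.asarray(v, float)
        w = 1.0 / v
        ybar = np.sum(w * y) / np.sum(w)
        Q = np.sum(w * (y - ybar) ** 2)             # Cochran's Q
        k = len(y)
        denom = np.sum(w) - np.sum(w ** 2) / np.sum(w)
        return max(0.0, (Q - (k - 1)) / denom)      # truncated at zero

    # toy example: log odds ratios and their variances from 5 studies
    print(dersimonian_laird_tau2([0.1, 0.3, -0.2, 0.5, 0.4],
                                 [0.04, 0.09, 0.05, 0.12, 0.06]))
    ```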

  4. A new methodological approach to adjust alcohol exposure distributions to improve the estimation of alcohol-attributable fractions.

    PubMed

    Parish, William J; Aldridge, Arnie; Allaire, Benjamin; Ekwueme, Donatus U; Poehler, Diana; Guy, Gery P; Thomas, Cheryll C; Trogdon, Justin G

    2017-11-01

    To assess the burden of excessive alcohol use, researchers routinely estimate alcohol-attributable fractions (AAFs). However, under-reporting in survey data can bias these estimates. We present an approach that adjusts for under-reporting in the estimation of AAFs, particularly within subgroups. This framework is a refinement of a previous method developed by Rehm et al. We use a measurement error model to derive the 'true' alcohol distribution from a 'reported' alcohol distribution. The 'true' distribution leverages per-capita sales data to identify the distribution average and then identifies the shape of the distribution with self-reported survey data. Data are from the National Alcohol Survey (NAS), the National Household Survey on Drug Abuse (NHSDA) and the National Epidemiologic Survey on Alcohol and Related Conditions (NESARC). We compared our approach with previous approaches by estimating the AAF of female breast cancer cases. Compared with Rehm et al.'s approach, our refinement performs similarly under a gamma assumption. For example, among females aged 18-25 years, the two approaches produce estimates from NHSDA that are within a percentage point. However, relaxing the gamma assumption generally produces more conservative estimates. For example, among females aged 18-25 years, estimates from NHSDA based on the best-fitting distribution attribute only 19.33% of breast cancer cases to alcohol, a much smaller proportion than the gamma-based estimates of approximately 28%. A refinement of Rehm et al.'s approach to adjusting for under-reporting in the estimation of alcohol-attributable fractions provides more flexibility. This flexibility can avoid biases associated with failing to account for underlying differences in alcohol consumption patterns across study populations. Comparisons of our refinement with Rehm et al.'s approach show that results are similar when a gamma distribution is assumed but appreciably lower when the best-fitting distribution is chosen. © 2017 Society for the Study of Addiction.
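
    A minimal sketch of the kind of adjustment described (distribution shape from survey data, mean anchored to per-capita sales) under a gamma assumption; all numbers are hypothetical:

    ```python
    import numpy as np
    from scipy import stats

    # hypothetical self-reported grams/day (under-reported on average)
    reported = stats.gamma.rvs(a=0.9, scale=12.0, size=2000, random_state=1)

    # shape of the consumption distribution taken from the survey ...
    shape, _, scale = stats.gamma.fit(reported, floc=0)

    # ... but the mean anchored to per-capita sales (hypothetical figure)
    sales_mean = 18.0                    # grams of ethanol per day from sales
    adjusted = stats.gamma(a=shape, scale=sales_mean / shape)  # mean = a*scale

    # prevalence of drinking above 40 g/day, before vs. after adjustment
    print(1 - stats.gamma(a=shape, scale=scale).cdf(40), 1 - adjusted.cdf(40))
    ```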

  5. The AMSR2 Satellite-based Microwave Snow Algorithm (SMSA) to estimate regional to global snow depth and snow water equivalent

    NASA Astrophysics Data System (ADS)

    Kelly, R. E. J.; Saberi, N.; Li, Q.

    2017-12-01

    With moderate- to high-spatial-resolution (<1 km) regional to global snow water equivalent (SWE) observation approaches yet to be fully scoped and developed, the long-term satellite passive microwave record remains an important tool for cryosphere-climate diagnostics. A new satellite microwave remote sensing approach is described for estimating snow depth (SD) and snow water equivalent (SWE). The algorithm, called the Satellite-based Microwave Snow Algorithm (SMSA), uses Advanced Microwave Scanning Radiometer - 2 (AMSR2) observations aboard the Global Change Observation Mission - Water satellite launched by the Japan Aerospace Exploration Agency in 2012. The approach is unique in that it leverages observed brightness temperatures (Tb) with static ancillary data to parameterize a physically based retrieval without requiring parameter constraints from in situ snow depth observations or historical snow depth climatology. After screening snow from non-snow surface targets (water bodies [including freeze/thaw state], rainfall, high-altitude plateau regions [e.g., the Tibetan Plateau]), moderate and shallow snow depths are estimated by minimizing the difference between Dense Media Radiative Transfer model estimates (Tsang et al., 2000; Picard et al., 2011) and AMSR2 Tb observations to retrieve SWE and SD. Parameterization of the model uses a parsimonious snow grain size and density approach originally developed by Kelly et al. (2003). Evaluation of the SMSA performance is achieved using in situ snow depth data from a variety of standard and experimental data sources. Results presented from winter seasons 2012-13 to 2016-17 illustrate the improved performance of the new approach in comparison with the baseline AMSR2 algorithm estimates and show that it approaches the performance of the model-assimilation-based approach of GlobSnow. Given the variation in SWE estimation skill across different land surface/climate models and selected satellite-derived passive microwave approaches, SMSA provides SWE estimates that are independent of real or near-real-time in situ and model data.
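
    The DMRT forward model is far too involved to reproduce here, but the retrieval step itself is a one-dimensional misfit minimization. The sketch below substitutes a hypothetical toy function, tb_difference_model, for the radiative transfer model; only the optimization pattern is meant to carry over:

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    def tb_difference_model(swe_mm):
        """Hypothetical stand-in for a DMRT forward model: brightness-
        temperature difference Tb(19V) - Tb(37V) as a function of SWE."""
        return 0.04 * swe_mm * np.exp(-swe_mm / 400.0)   # illustrative only

    tb_obs = 4.0   # observed 19V-37V difference (K), hypothetical

    # retrieve SWE by minimizing the model-observation misfit
    res = minimize_scalar(lambda s: (tb_difference_model(s) - tb_obs) ** 2,
                          bounds=(0.0, 300.0), method="bounded")
    print("retrieved SWE (mm):", res.x)
    ```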

  6. A Nonlinear Regression Model Estimating Single Source Concentrations of Primary and Secondarily Formed PM2.5

    EPA Science Inventory

    Various approaches and tools exist to estimate local and regional PM2.5 impacts from a single emissions source, ranging from simple screening techniques to Gaussian based dispersion models and complex grid-based Eulerian photochemical transport models. These approache...

  7. Consistency of extreme flood estimation approaches

    NASA Astrophysics Data System (ADS)

    Felder, Guido; Paquet, Emmanuel; Penot, David; Zischg, Andreas; Weingartner, Rolf

    2017-04-01

    Estimates of low-probability flood events are frequently used for the planning of infrastructure as well as for determining the dimensions of flood protection measures. There are several well-established methodical procedures to estimate low-probability floods. However, a global assessment of the consistency of these methods is difficult to achieve, since the "true value" of an extreme flood is not observable. Nevertheless, a detailed comparison performed on a given case study brings useful information about the statistical and hydrological processes involved in the different methods. In this study, the following three approaches for estimating low-probability floods are compared: a purely statistical approach (ordinary extreme value statistics), a statistical approach based on stochastic rainfall-runoff simulation (the SCHADEX method), and a deterministic approach (physically based PMF estimation). These methods are tested on two different Swiss catchments. The results and some intermediate variables are used to assess the potential strengths and weaknesses of each method, as well as to evaluate the consistency of these methods.
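
    The purely statistical branch of such a comparison is easy to sketch: fit a GEV distribution to annual maxima and read off low-probability quantiles. The data and parameters below are synthetic:

    ```python
    import numpy as np
    from scipy.stats import genextreme

    # hypothetical annual maximum discharges (m3/s) for ~60 years
    ams = genextreme.rvs(c=-0.1, loc=300.0, scale=80.0, size=60, random_state=42)

    # purely statistical approach: fit a GEV to the annual maxima ...
    c, loc, scale = genextreme.fit(ams)

    # ... and read off low-probability quantiles, e.g. the 1000-year flood
    for T in (100, 1000):
        q = genextreme.ppf(1.0 - 1.0 / T, c, loc=loc, scale=scale)
        print(f"{T}-year flood: {q:.0f} m3/s")
    ```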

  8. Comparison of 3 estimation methods of mycophenolic acid AUC based on a limited sampling strategy in renal transplant patients.

    PubMed

    Hulin, Anne; Blanchet, Benoît; Audard, Vincent; Barau, Caroline; Furlan, Valérie; Durrbach, Antoine; Taïeb, Fabrice; Lang, Philippe; Grimbert, Philippe; Tod, Michel

    2009-04-01

    A significant relationship between mycophenolic acid (MPA) area under the plasma concentration-time curve (AUC) and the risk for rejection has been reported. Based on 3 concentration measurements, 3 approaches have been proposed for the estimation of MPA AUC, involving either a multilinear regression approach model (MLRA) or a Bayesian estimation using either gamma absorption or zero-order absorption population models. The aim of the study was to compare the 3 approaches for the estimation of MPA AUC in 150 renal transplant patients treated with mycophenolate mofetil and tacrolimus. The population parameters were determined in 77 patients (learning study). The AUC estimation methods were compared in the learning population and in 73 patients from another center (validation study). In the latter study, the reference AUCs were estimated by the trapezoidal rule on 8 measurements. MPA concentrations were measured by liquid chromatography. The gamma absorption model gave the best fit. In the learning study, the AUCs estimated by both Bayesian methods were very similar, whereas the multilinear approach was highly correlated but yielded estimates about 20% lower than Bayesian methods. This resulted in dosing recommendations differing by 250 mg/12 h or more in 27% of cases. In the validation study, AUC estimates based on the Bayesian method with gamma absorption model and multilinear regression approach model were, respectively, 12% higher and 7% lower than the reference values. To conclude, the bicompartmental model with gamma absorption rate gave the best fit. The 3 AUC estimation methods are highly correlated but not concordant. For a given patient, the same estimation method should always be used.
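
    A limited-sampling MLRA estimator has the general form AUC = a0 + a1*C1 + a2*C2 + a3*C3. The sketch below uses placeholder coefficients and sampling times, not the published ones, and compares the result against a trapezoidal reference computed on a fuller profile:

    ```python
    import numpy as np

    def mpa_auc_mlra(c20min, c1h, c3h, coef=(7.75, 6.49, 0.76, 2.43)):
        """Limited-sampling multilinear estimate of MPA AUC(0-12 h).
        The default coefficients are placeholders, NOT the published ones."""
        a0, a1, a2, a3 = coef
        return a0 + a1 * c20min + a2 * c1h + a3 * c3h

    # reference AUC by the trapezoidal rule on a fuller 8-point profile
    t = np.array([0.0, 0.33, 0.66, 1.0, 2.0, 3.0, 6.0, 9.0])    # hours
    c = np.array([1.2, 8.0, 6.1, 4.5, 2.8, 2.2, 1.5, 1.1])      # mg/L
    auc_ref = np.sum(np.diff(t) * (c[:-1] + c[1:]) / 2.0)
    print("trapezoidal AUC:", auc_ref)
    print("MLRA estimate  :", mpa_auc_mlra(c20min=8.0, c1h=4.5, c3h=2.2))
    ```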

  9. New learning based super-resolution: use of DWT and IGMRF prior.

    PubMed

    Gajjar, Prakash P; Joshi, Manjunath V

    2010-05-01

    In this paper, we propose a new learning-based approach for super-resolving an image captured at low spatial resolution. Given the low spatial resolution test image and a database consisting of low and high spatial resolution images, we obtain super-resolution for the test image. We first obtain an initial high-resolution (HR) estimate by learning the high-frequency details from the available database. A new discrete wavelet transform (DWT)-based approach is proposed for learning that uses a set of low-resolution (LR) images and their corresponding HR versions. Since super-resolution is an ill-posed problem, we obtain the final solution using a regularization framework. The LR image is modeled as the aliased and noisy version of the corresponding HR image, and the aliasing matrix entries are estimated using the test image and the initial HR estimate. The prior model for the super-resolved image is chosen as an Inhomogeneous Gaussian Markov random field (IGMRF) and the model parameters are estimated using the same initial HR estimate. A maximum a posteriori (MAP) estimation is used to arrive at the cost function, which is minimized using a simple gradient descent approach. We demonstrate the effectiveness of the proposed approach by conducting experiments on grayscale as well as color images. The method is compared with the standard interpolation technique and also with existing learning-based approaches. The proposed approach can be used in applications such as wildlife sensor networks and remote surveillance, where memory, transmission bandwidth, and camera cost are the main constraints.

  10. The 10-year Absolute Risk of Cardiovascular (CV) Events in Northern Iran: a Population Based Study

    PubMed Central

    Motamed, Nima; Mardanshahi, Alireza; Saravi, Benyamin Mohseni; Siamian, Hasan; Maadi, Mansooreh; Zamani, Farhad

    2015-01-01

    Background: The present study was conducted to estimate the 10-year cardiovascular disease (CVD) event risk using three instruments in northern Iran. Material and methods: Baseline data of 3201 participants aged 40-79 from a population-based cohort conducted in northern Iran were analyzed. The Framingham risk score (FRS), the World Health Organization (WHO) risk prediction charts, and the American College of Cardiology/American Heart Association (ACC/AHA) tool were applied to assess the 10-year CVD event risk. Agreement between the risk assessment instruments was determined using the kappa statistic. Results: Our study estimated that 53.5% of the male population aged 40-79 had a 10-year risk of CVD events ≥10% based on the ACC/AHA approach, 48.9% based on the FRS, and 11.8% based on the WHO risk charts. A 10-year risk ≥10% was estimated for 20.1% of women using the ACC/AHA approach, 11.9% using the FRS, and 5.7% using the WHO tool. The ACC/AHA and Framingham tools showed the closest agreement in the estimation of a 10-year risk ≥10% (κ=0.7757) in men; meanwhile, the ACC/AHA and WHO approaches displayed the highest agreement (κ=0.6123) in women. Conclusion: Different estimations of the 10-year risk of CVD events were provided by the ACC/AHA, FRS, and WHO approaches. PMID:26236160
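
    The kappa statistic used for the agreement values is straightforward to compute once each tool's risk estimates are dichotomized at ≥10%; a self-contained sketch with made-up classifications:

    ```python
    import numpy as np

    def cohens_kappa(a, b):
        """Agreement between two binary classifications beyond chance."""
        a, b = np.asarray(a, bool), np.asarray(b, bool)
        po = np.mean(a == b)                          # observed agreement
        pe = (np.mean(a) * np.mean(b)                 # chance agreement
              + np.mean(~a) * np.mean(~b))
        return (po - pe) / (1 - pe)

    # hypothetical: subjects flagged as 10-year risk >= 10% by two tools
    frs    = np.array([1, 1, 0, 0, 1, 0, 1, 0, 0, 1], bool)
    accaha = np.array([1, 1, 0, 1, 1, 0, 1, 0, 1, 1], bool)
    print("kappa:", cohens_kappa(frs, accaha))
    ```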

  11. Disentangling sampling and ecological explanations underlying species-area relationships

    USGS Publications Warehouse

    Cam, E.; Nichols, J.D.; Hines, J.E.; Sauer, J.R.; Alpizar-Jara, R.; Flather, C.H.

    2002-01-01

    We used a probabilistic approach to address the influence of sampling artifacts on the form of species-area relationships (SARs). We developed a model in which the increase in observed species richness is a function of sampling effort exclusively. We assumed that effort depends on area sampled, and we generated species-area curves under that model. These curves can be realistic looking. We then generated SARs from avian data, comparing SARs based on counts with those based on richness estimates. We used an approach to estimation of species richness that accounts for species detection probability and, hence, for variation in sampling effort. The slopes of SARs based on counts are steeper than those of curves based on estimates of richness, indicating that the former partly reflect failure to account for species detection probability. SARs based on estimates reflect ecological processes exclusively, not sampling processes. This approach permits investigation of ecologically relevant hypotheses. The slope of SARs is not influenced by the slope of the relationship between habitat diversity and area. In situations in which not all of the species are detected during sampling sessions, approaches to estimation of species richness integrating species detection probability should be used to investigate the rate of increase in species richness with area.
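
    The study used capture-recapture-style estimators that model species detection probability; as a simpler stand-in, the sketch below contrasts a raw species count with the bias-corrected Chao1 estimator, which likewise corrects an observed count for undetected species:

    ```python
    import numpy as np

    def chao1(counts):
        """Bias-corrected Chao1 lower bound on species richness, which
        corrects an observed count for species missed during sampling."""
        counts = np.asarray(counts)
        s_obs = np.sum(counts > 0)
        f1 = np.sum(counts == 1)                # species seen exactly once
        f2 = np.sum(counts == 2)                # species seen exactly twice
        return s_obs + f1 * (f1 - 1) / (2.0 * (f2 + 1))

    detections = [5, 3, 1, 1, 1, 2, 1, 7, 2, 1]  # detections per species
    print(np.sum(np.array(detections) > 0), chao1(detections))
    ```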

  12. On-Board Event-Based State Estimation for Trajectory Approaching and Tracking of a Vehicle

    PubMed Central

    Martínez-Rey, Miguel; Espinosa, Felipe; Gardel, Alfredo; Santos, Carlos

    2015-01-01

    For the problem of pose estimation of an autonomous vehicle using networked external sensors, the processing capacity and battery consumption of these sensors, as well as the communication channel load should be optimized. Here, we report an event-based state estimator (EBSE) consisting of an unscented Kalman filter that uses a triggering mechanism based on the estimation error covariance matrix to request measurements from the external sensors. This EBSE generates the events of the estimator module on-board the vehicle and, thus, allows the sensors to remain in stand-by mode until an event is generated. The proposed algorithm requests a measurement every time the estimation distance root mean squared error (DRMS) value, obtained from the estimator's covariance matrix, exceeds a threshold value. This triggering threshold can be adapted to the vehicle's working conditions rendering the estimator even more efficient. An example of the use of the proposed EBSE is given, where the autonomous vehicle must approach and follow a reference trajectory. By making the threshold a function of the distance to the reference location, the estimator can halve the use of the sensors with a negligible deterioration in the performance of the approaching maneuver. PMID:26102489
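
    The trigger logic reduces to comparing the covariance-derived DRMS against a threshold that can scale with the distance to the reference; a minimal sketch (state ordering and all numbers assumed):

    ```python
    import numpy as np

    def needs_measurement(P, threshold_m):
        """Event trigger: request an external measurement when the DRMS of
        the position estimate, read from the covariance P, exceeds a bound."""
        drms = np.sqrt(P[0, 0] + P[1, 1])       # x/y position variances
        return drms > threshold_m

    P = np.diag([0.04, 0.09, 0.01, 0.01])       # [x, y, heading, speed] covariance
    dist_to_ref = 12.0                          # metres to the reference location
    threshold = 0.05 * dist_to_ref              # adaptive: looser far from target
    print(needs_measurement(P, threshold))      # False here: estimate still good
    ```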

  13. An "Ensemble Approach" to Modernizing Extreme Precipitation Estimation for Dam Safety Decision-Making

    NASA Astrophysics Data System (ADS)

    Cifelli, R.; Mahoney, K. M.; Webb, R. S.; McCormick, B.

    2017-12-01

    To ensure structural and operational safety of dams and other water management infrastructure, water resources managers and engineers require information about the potential for heavy precipitation. The methods and data used to estimate extreme rainfall amounts for managing risk are based on 40-year-old science and in need of improvement. The need to evaluate new approaches based on the best science available has led the states of Colorado and New Mexico to engage a body of scientists and engineers in an innovative "ensemble approach" to updating extreme precipitation estimates. NOAA is at the forefront of one of three technical approaches that make up the "ensemble study"; the three approaches are conducted concurrently and in collaboration with each other. One approach is the conventional deterministic, "storm-based" method, another is a risk-based regional precipitation frequency estimation tool, and the third is an experimental approach utilizing NOAA's state-of-the-art High Resolution Rapid Refresh (HRRR) physically based dynamical weather prediction model. The goal of the overall project is to use the individual strengths of these different methods to define an updated and broadly acceptable state of the practice for evaluation and design of dam spillways. This talk will highlight the NOAA research and NOAA's role in the overarching goal of better understanding and characterizing extreme precipitation estimation uncertainty. The research led by NOAA explores a novel high-resolution dataset and post-processing techniques using a super-ensemble of hourly forecasts from the HRRR model. We also investigate how this rich dataset may be combined with statistical methods to optimally cast the data in probabilistic frameworks. NOAA expertise in the physical processes that drive extreme precipitation is also employed to develop careful testing and improved understanding of the limitations of older estimation methods and assumptions. The process of decision making in the midst of uncertainty is a major part of this study. We will speak to how the three approaches may be used in concert with one another to manage risk and enhance resiliency in the midst of uncertainty. Finally, the presentation will also address the implications of including climate change in future extreme precipitation estimation studies.

  14. A terrain-based site characterization map of California with implications for the contiguous United States

    USGS Publications Warehouse

    Yong, Alan K.; Hough, Susan E.; Iwahashi, Junko; Braverman, Amy

    2012-01-01

    We present an approach based on geomorphometry to predict material properties and characterize site conditions using the VS30 parameter (time‐averaged shear‐wave velocity to a depth of 30 m). Our framework consists of an automated terrain classification scheme based on taxonomic criteria (slope gradient, local convexity, and surface texture) that systematically identifies 16 terrain types from 1‐km spatial resolution (30 arcsec) Shuttle Radar Topography Mission digital elevation models (SRTM DEMs). Using 853 VS30 values from California, we apply a simulation‐based statistical method to determine the mean VS30 for each terrain type in California. We then compare the VS30 values with models based on individual proxies, such as mapped surface geology and topographic slope, and show that our systematic terrain‐based approach consistently performs better than semiempirical estimates based on individual proxies. To further evaluate our model, we apply our California‐based estimates to terrains of the contiguous United States. Comparisons of our estimates with 325 VS30 measurements outside of California, as well as estimates based on the topographic slope model, indicate our method to be statistically robust and more accurate. Our approach thus provides an objective and robust method for extending estimates of VS30 for regions where in situ measurements are sparse or not readily available.
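
    The paper's simulation-based statistical method is not detailed in the abstract; a bootstrap of per-class means, sketched below with invented classes and VS30 values, conveys the general idea of attaching uncertainty to each terrain type:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    # hypothetical VS30 measurements (m/s) tagged with terrain class (1..16)
    classes = np.array([3, 3, 7, 7, 7, 12, 12, 12, 12, 16])
    vs30    = np.array([760., 820., 450., 500., 480., 310., 290., 330., 300., 180.])

    for k in np.unique(classes):
        vals = vs30[classes == k]
        # bootstrap the class mean, a simple stand-in for the paper's
        # simulation-based statistical method
        boot = [rng.choice(vals, vals.size).mean() for _ in range(1000)]
        lo, hi = np.percentile(boot, [2.5, 97.5])
        print(f"class {k}: mean {vals.mean():.0f} m/s, 95% CI [{lo:.0f}, {hi:.0f}]")
    ```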

  15. A review of single-sample-based models and other approaches for radiocarbon dating of dissolved inorganic carbon in groundwater

    USGS Publications Warehouse

    Han, L. F.; Plummer, Niel

    2016-01-01

    Numerous methods have been proposed to estimate the pre-nuclear-detonation 14C content of dissolved inorganic carbon (DIC) recharged to groundwater that has been corrected/adjusted for geochemical processes in the absence of radioactive decay (14C0) - a quantity that is essential for estimation of the radiocarbon age of DIC in groundwater. The models/approaches most commonly used are grouped as follows: (1) single-sample-based models, (2) a statistical approach based on the observed (curved) relationship between 14C and δ13C data for the aquifer, and (3) the geochemical mass-balance approach, which constructs adjustment models accounting for all the geochemical reactions known to occur along a groundwater flow path. This review discusses first the geochemical processes behind each of the single-sample-based models, followed by discussions of the statistical approach and the geochemical mass-balance approach. Finally, the applications, advantages and limitations of the three groups of models/approaches are discussed. The single-sample-based models constitute the prevailing use of 14C data in hydrogeology and hydrological studies, in part because the models are applied to an individual water sample to estimate the 14C age, so the measurement data are easily available. These models have been shown to provide realistic radiocarbon ages in many studies. However, they usually are limited to simple carbonate aquifers, and the selection of the model may have significant effects on 14C0, often resulting in a wide range of estimates of 14C ages. Of the single-sample-based models, four are recommended for the estimation of 14C0 of DIC in groundwater: Pearson's model (Ingerson and Pearson, 1964; Pearson and White, 1967), Han & Plummer's model (Han and Plummer, 2013), the IAEA model (Gonfiantini, 1972; Salem et al., 1980), and Oeschger's model (Geyh, 2000). These four models include all processes considered in single-sample-based models and can be used in different ranges of δ13C values. In contrast to the single-sample-based models, the extended Gonfiantini & Zuppi model (Gonfiantini and Zuppi, 2003; Han et al., 2014) is a statistical approach. This approach can be used to estimate 14C ages when a curved relationship between the 14C and δ13C values of the DIC data is observed. In addition to the estimation of groundwater ages, the relationship between 14C and δ13C data can be used to interpret hydrogeological characteristics of the aquifer, e.g., estimating apparent rates of geochemical reactions, revealing the complexity of the geochemical environment, and identifying samples that are not affected by the same set of reactions/processes as the rest of the dataset. The investigated water samples may have a wide range of ages, and for waters with very low values of 14C, the statistical model may give more reliable age estimates than those obtained from single-sample-based models. In the extended Gonfiantini & Zuppi model, a representative system-wide value of the initial 14C content is derived from the 14C and δ13C data of DIC and can differ from that used in single-sample-based models. Therefore, the extended Gonfiantini & Zuppi model usually avoids the effect of modern water components that might retain 'bomb' pulse signatures. The geochemical mass-balance approach constructs an adjustment model that accounts for all the geochemical reactions known to occur along an aquifer flow path (Plummer et al., 1983; Wigley et al., 1978; Plummer et al., 1994; Plummer and Glynn, 2013), and includes, in addition to DIC, dissolved organic carbon (DOC) and methane (CH4). If sufficient chemical, mineralogical and isotopic data are available, the geochemical mass-balance method can yield the most accurate estimates of the adjusted radiocarbon age. The main limitation of this approach is that complete chemical, mineralogical and isotopic information is necessary, and these data are often limited. Failure to recognize the limitations and underlying assumptions on which the various models and approaches are based can result in a wide range of estimates of 14C0 and limit the usefulness of radiocarbon as a dating tool for groundwater. In each of the three generalized approaches (single-sample-based models, the statistical approach, and the geochemical mass-balance approach), successful application depends on scrutiny of the isotopic (14C and δ13C) and chemical data to conceptualize the reactions and processes that affect the 14C content of DIC in aquifers. The recently developed graphical analysis method is shown to aid in determining which approach is most appropriate for the isotopic and chemical data from a groundwater system.
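
    Of the four recommended single-sample models, Pearson's is compact enough to sketch: a δ13C mass balance between soil-gas CO2 and 14C-dead carbonate fixes the initial activity. The end-member values below are typical assumptions, not universal constants:

    ```python
    import numpy as np

    def pearson_initial_14c(d13c_dic, d13c_soil=-23.0, d13c_carb=0.0, a_soil=100.0):
        """Pearson's mixing model: initial 14C activity (pMC) of DIC from a
        delta13C balance between soil CO2 and 14C-dead carbonate minerals."""
        q = (d13c_dic - d13c_carb) / (d13c_soil - d13c_carb)   # soil-gas fraction
        return q * a_soil

    def radiocarbon_age(a_meas, a0):
        """Conventional radiocarbon age in years (Libby mean life 8033 yr)."""
        return -8033.0 * np.log(a_meas / a0)

    a0 = pearson_initial_14c(d13c_dic=-12.5)
    print(f"14C0 = {a0:.1f} pMC, age = {radiocarbon_age(30.0, a0):.0f} yr")
    ```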

  16. Estimation of Attitude and External Acceleration Using Inertial Sensor Measurement During Various Dynamic Conditions

    PubMed Central

    Lee, Jung Keun; Park, Edward J.; Robinovitch, Stephen N.

    2012-01-01

    This paper proposes a Kalman filter-based attitude (i.e., roll and pitch) estimation algorithm using an inertial sensor composed of a triaxial accelerometer and a triaxial gyroscope. In particular, the proposed algorithm has been developed for accurate attitude estimation during dynamic conditions, in which external acceleration is present. Although external acceleration is the main source of attitude estimation error and its accurate estimation is needed in many applications, this problem, which can be critical for attitude estimation, has not been addressed explicitly in the literature. Accordingly, this paper addresses the combined estimation problem of attitude and external acceleration. Experimental tests were conducted to verify the performance of the proposed algorithm in various dynamic condition settings and to provide further insight into the variations in estimation accuracy. Furthermore, two different approaches for dealing with the estimation problem during dynamic conditions were compared: a threshold-based switching approach versus an acceleration model-based approach. Based on an external acceleration model, the proposed algorithm was capable of estimating accurate attitudes and external accelerations for short accelerated periods, showing its high effectiveness during short-term fast dynamic conditions. Conversely, when the testing condition involved prolonged high external accelerations, the proposed algorithm exhibited gradually increasing errors. However, as soon as the condition returned to static or quasi-static conditions, the algorithm was able to stabilize the estimation error, regaining its high estimation accuracy. PMID:22977288
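
    A common way to realize the acceleration model-based approach is to augment the state with an external-acceleration component that decays as a first-order Markov process. The 1-D toy filter below illustrates that structure only; it is not the paper's full roll/pitch filter, and all tuning values are assumed:

    ```python
    import numpy as np

    g, dt, c_a = 9.81, 0.01, 0.9      # gravity, sample period, accel decay (assumed)

    # state x = [tilt angle (rad), external acceleration (m/s^2)]
    F = np.array([[1.0, 0.0],
                  [0.0, c_a]])         # external accel as first-order Markov
    B = np.array([[dt], [0.0]])        # gyro rate drives the angle
    H = np.array([[g, 1.0]])           # small-angle accelerometer model:
                                       #   y ~= g*theta + a_ext + noise
    Q = np.diag([1e-6, 0.05])          # process noise (tuning values, assumed)
    R = np.array([[0.02]])             # accelerometer noise variance (assumed)

    def kf_step(x, P, gyro_rate, y_acc):
        x = F @ x + B * gyro_rate                  # predict with the gyro
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                        # update with the accelerometer
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([[y_acc]]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        return x, P

    x, P = np.zeros((2, 1)), np.eye(2)
    x, P = kf_step(x, P, gyro_rate=0.02, y_acc=0.15)
    print(x.ravel())    # [estimated tilt, estimated external acceleration]
    ```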

  17. Using cell phone location to assess misclassification errors in air pollution exposure estimation.

    PubMed

    Yu, Haofei; Russell, Armistead; Mulholland, James; Huang, Zhijiong

    2018-02-01

    Air pollution epidemiologic and health impact studies often rely on home addresses to estimate individual subjects' pollution exposure. In this study, we used detailed cell phone location data, the call detail record (CDR), to account for the impact of spatiotemporal subject mobility on estimates of ambient air pollutant exposure. This approach was applied to a sample of 9886 unique SIM card IDs in Shenzhen, China, on one mid-week day in October 2013. Hourly ambient concentrations of six chosen pollutants were simulated by the Community Multi-scale Air Quality model fused with observational data, and matched with detailed location data for these IDs. The results were compared with exposure estimates based on home addresses to assess potential exposure misclassification errors. We found that misclassification errors are likely to be substantial when home location alone is used. The CDR-based approach indicates that the home-based approach tends to over-estimate exposures for subjects with higher exposure levels and under-estimate exposures for those with lower exposure levels. Our results show that the cell phone location-based approach can be used to assess exposure misclassification error and has the potential to improve exposure estimates in air pollution epidemiology studies. Copyright © 2017 Elsevier Ltd. All rights reserved.
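
    The exposure comparison itself reduces to a time-weighted average over the locations a subject actually occupied versus the home cell; schematically, with invented hourly concentrations:

    ```python
    import numpy as np

    # hourly concentrations (ug/m3) at the grid cells a subject actually
    # occupied (from CDR) versus at the home cell (all values hypothetical)
    conc_visited = np.array([35., 35., 40., 52., 60., 58., 44., 38.])
    conc_home = np.full(conc_visited.size, 35.0)

    mobility_exposure = conc_visited.mean()     # time-weighted daily average
    home_exposure = conc_home.mean()
    print("home-based bias:", home_exposure - mobility_exposure)
    ```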

  18. Estimating Missing Unit Process Data in Life Cycle Assessment Using a Similarity-Based Approach.

    PubMed

    Hou, Ping; Cai, Jiarui; Qu, Shen; Xu, Ming

    2018-05-01

    In life cycle assessment (LCA), collecting unit process data from empirical sources (e.g., meter readings, operation logs/journals) is often costly and time-consuming. We propose a new computational approach to estimate missing unit process data relying solely on limited known data, based on a similarity-based link prediction method. The intuition is that similar processes in a unit process network tend to have similar material/energy inputs and waste/emission outputs. We use the ecoinvent 3.1 unit process data sets to test our method in four steps: (1) dividing the data sets into a training set and a test set; (2) randomly removing certain amounts of data in the test set to mark as missing; (3) using similarity-weighted means of various numbers of the most similar processes in the training set to estimate the missing data in the test set; and (4) comparing the estimated data with the original values to determine the performance of the estimation. The results show that missing data can be accurately estimated when less than 5% of the data are missing in one process. The estimation performance decreases as the percentage of missing data increases. This study provides a new approach to compiling unit process data and demonstrates the promising potential of computational approaches for LCA data compilation.
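
    A sketch of the core imputation step as described: cosine similarity on the known flows selects the k most similar library processes, whose similarity-weighted mean fills the missing entries. The data here are random placeholders:

    ```python
    import numpy as np

    def impute_missing(target, known_mask, library, k=5):
        """Fill missing entries of a unit-process vector with the
        similarity-weighted mean of its k most similar library processes.
        Similarity is cosine similarity computed on the known entries."""
        lib_known = library[:, known_mask]
        t_known = target[known_mask]
        sims = lib_known @ t_known / (
            np.linalg.norm(lib_known, axis=1) * np.linalg.norm(t_known) + 1e-12)
        top = np.argsort(sims)[-k:]                  # k most similar processes
        w = sims[top] / sims[top].sum()
        filled = target.copy()
        filled[~known_mask] = w @ library[top][:, ~known_mask]
        return filled

    rng = np.random.default_rng(3)
    library = rng.random((100, 12))                  # 100 complete unit processes
    target = rng.random(12)
    mask = np.ones(12, bool); mask[[2, 7]] = False   # two flows "missing"
    print(impute_missing(target, mask, library)[[2, 7]])
    ```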

  19. A fuel-based approach to estimating motor vehicle exhaust emissions

    NASA Astrophysics Data System (ADS)

    Singer, Brett Craig

    Motor vehicles contribute significantly to air pollution problems; accurate motor vehicle emission inventories are therefore essential to air quality planning. Current travel-based inventory models use emission factors measured from potentially biased vehicle samples and predict fleet-average emissions which are often inconsistent with on-road measurements. This thesis presents a fuel-based inventory approach which uses emission factors derived from remote sensing or tunnel-based measurements of on-road vehicles. Vehicle activity is quantified by statewide monthly fuel sales data resolved to the air basin level. Development of the fuel-based approach includes (1) a method for estimating cold start emission factors, (2) an analysis showing that fuel-normalized emission factors are consistent over a range of positive vehicle loads and that most fuel use occurs during loaded-mode driving, (3) scaling factors relating infrared hydrocarbon measurements to total exhaust volatile organic compound (VOC) concentrations, and (4) an analysis showing that economic factors should be considered when selecting on-road sampling sites. The fuel-based approach was applied to estimate carbon monoxide (CO) emissions from warmed-up vehicles in the Los Angeles area in 1991, and CO and VOC exhaust emissions for Los Angeles in 1997. The fuel-based CO estimate for 1991 was higher by a factor of 2.3 +/- 0.5 than emissions predicted by California's MVEI 7F model. Fuel-based inventory estimates for 1997 were higher than those of California's updated MVEI 7G model by factors of 2.4 +/- 0.2 for CO and 3.5 +/- 0.6 for VOC. Fuel-based estimates indicate a 20% decrease in the mass of CO emitted, despite an 8% increase in fuel use between 1991 and 1997; official inventory models predict a 50% decrease in CO mass emissions during the same period. Cold start CO and VOC emission factors derived from parking garage measurements were lower than those predicted by the MVEI 7G model. Current inventories in California appear to understate total exhaust CO and VOC emissions, while overstating the importance of cold start emissions. The fuel-based approach yields robust, independent, and accurate estimates of on-road vehicle emissions. Fuel-based estimates should be used to validate or adjust official vehicle emission inventories before society embarks on new, more costly air pollution control programs.
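
    The heart of the fuel-based approach is a one-line computation: an on-road emission factor expressed per unit of fuel, multiplied by fuel sold. With hypothetical numbers:

    ```python
    # Fuel-based inventory: fleet-average emission factor per unit fuel,
    # multiplied by fuel sold in the air basin (all numbers hypothetical).
    co_ef_g_per_liter = 70.0      # g CO emitted per liter of gasoline burned
    fuel_sales_liters = 6.0e9     # annual gasoline sales for the air basin

    co_tonnes_per_year = co_ef_g_per_liter * fuel_sales_liters / 1e6
    print(f"CO inventory: {co_tonnes_per_year:,.0f} tonnes/yr")
    ```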

  20. Responder analysis without dichotomization.

    PubMed

    Zhang, Zhiwei; Chu, Jianxiong; Rahardja, Dewi; Zhang, Hui; Tang, Li

    2016-01-01

    In clinical trials, it is common practice to categorize subjects as responders and non-responders on the basis of one or more clinical measurements under pre-specified rules. Such a responder analysis is often criticized for the loss of information in dichotomizing one or more continuous or ordinal variables. It is worth noting that a responder analysis can be performed without dichotomization, because the proportion of responders for each treatment can be derived from a model for the original clinical variables (used to define a responder) and estimated by substituting maximum likelihood estimators of model parameters. This model-based approach can be considerably more efficient and more effective for dealing with missing data than the usual approach based on dichotomization. For parameter estimation, the model-based approach generally requires correct specification of the model for the original variables. However, under the sharp null hypothesis, the model-based approach remains unbiased for estimating the treatment difference even if the model is misspecified. We elaborate on these points and illustrate them with a series of simulation studies mimicking a study of Parkinson's disease, which involves longitudinal continuous data in the definition of a responder.
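
    The point translates directly into code: derive the responder proportion from a fitted model for the underlying variable instead of averaging a dichotomized indicator. A minimal sketch under a normality assumption:

    ```python
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(7)
    y = rng.normal(loc=1.8, scale=2.0, size=120)   # change-from-baseline values
    cutoff = 3.0                                   # responder: y > cutoff

    # usual approach: dichotomize each subject, then take the proportion
    p_dichotomized = np.mean(y > cutoff)

    # model-based approach: fit the continuous variable by ML and derive
    # P(Y > cutoff) by substituting the parameter estimates
    mu_hat, sd_hat = y.mean(), y.std()             # normal MLEs (ddof=0)
    p_model = 1.0 - norm.cdf(cutoff, loc=mu_hat, scale=sd_hat)

    print(p_dichotomized, p_model)
    ```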

  1. Comparison of Methods for Estimating Evapotranspiration using Remote Sensing Data

    NASA Astrophysics Data System (ADS)

    Beamer, J. P.; Morton, C.; Huntington, J. L.; Pohll, G.

    2010-12-01

    Estimating annual evapotranspiration (ET) in arid and semi-arid environments is important for managing water resources. In this study we use remote sensing methods to estimate ET in different areas located in western and eastern Nevada. Surface energy balance (SEB) and vegetation index (VI) methods are two common approaches for estimating ET from satellite data. The purpose of this study is to compare these methods for estimating annual ET and to highlight the strengths and weaknesses of each. The SEB approach used is based on the Mapping Evapotranspiration at high Resolution with Internalized Calibration (METRIC) model, which estimates ET as a residual of the energy balance. METRIC has been shown to produce accurate results in agricultural and riparian settings. The VI approach used is based on statistical relationships between annual ET and various VIs. VI approaches have also been shown to produce fairly accurate estimates of ET for various vegetation types; however, spatial variations in potential ET and precipitation amount are generally ignored, which restricts their application. In this work we develop a VI approach that accounts for the study area's potential ET and precipitation amount and compare this approach to METRIC and flux-tower estimates of annual ET for several arid phreatophyte shrub and irrigated agriculture settings.

  2. Localized Dictionaries Based Orientation Field Estimation for Latent Fingerprints.

    PubMed

    Xiao Yang; Jianjiang Feng; Jie Zhou

    2014-05-01

    Dictionary-based orientation field estimation has shown promising performance for latent fingerprints. In this paper, we seek to exploit stronger prior knowledge of fingerprints in order to further improve the performance. Recognizing that ridge orientations at different locations of a fingerprint have different characteristics, we propose a localized-dictionaries-based orientation field estimation algorithm, in which a noisy orientation patch at a given location, output by a local estimation approach, is replaced by a real orientation patch from the local dictionary at the same location. The precondition for applying localized dictionaries is that the pose of the latent fingerprint must be estimated. We propose a Hough transform-based fingerprint pose estimation algorithm, in which the predictions about the fingerprint pose made by all orientation patches in the latent fingerprint are accumulated. Experimental results on challenging latent fingerprint datasets show that the proposed method markedly outperforms previous ones.

  3. Variance computations for functionals of absolute risk estimates.

    PubMed

    Pfeiffer, R M; Petracci, E

    2011-07-01

    We present a simple influence function based approach to compute the variances of estimates of absolute risk and functions of absolute risk. We apply this approach to criteria that assess the impact of changes in the risk factor distribution on absolute risk for an individual and at the population level. As an illustration we use an absolute risk prediction model for breast cancer that includes modifiable risk factors in addition to standard breast cancer risk factors. Influence function based variance estimates for absolute risk and the criteria are compared to bootstrap variance estimates.

  4. Variance computations for functionals of absolute risk estimates

    PubMed Central

    Pfeiffer, R.M.; Petracci, E.

    2011-01-01

    We present a simple influence function based approach to compute the variances of estimates of absolute risk and functions of absolute risk. We apply this approach to criteria that assess the impact of changes in the risk factor distribution on absolute risk for an individual and at the population level. As an illustration we use an absolute risk prediction model for breast cancer that includes modifiable risk factors in addition to standard breast cancer risk factors. Influence function based variance estimates for absolute risk and the criteria are compared to bootstrap variance estimates. PMID:21643476
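
    The general recipe behind both records above is that the variance of a plug-in estimate is approximated by the summed squared influence values divided by n squared. For the sample mean, whose influence function is simply x_i minus the mean, the approach and a bootstrap comparison look like this:

    ```python
    import numpy as np

    rng = np.random.default_rng(11)
    x = rng.exponential(scale=2.0, size=500)

    # plug-in estimate and its influence-function variance:
    # Var(theta_hat) ~ sum(IF_i**2) / n**2, with IF_i = x_i - mean for the mean
    theta_hat = x.mean()
    var_if = np.sum((x - theta_hat) ** 2) / x.size ** 2

    # bootstrap comparison, as in the paper
    boot = [np.mean(rng.choice(x, x.size)) for _ in range(2000)]
    print(var_if, np.var(boot))
    ```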

  5. Estimation of Muscle Force Based on Neural Drive in a Hemispheric Stroke Survivor.

    PubMed

    Dai, Chenyun; Zheng, Yang; Hu, Xiaogang

    2018-01-01

    Robotic assistant-based therapy holds great promise to improve the functional recovery of stroke survivors. Numerous neural-machine interface techniques have been used to decode the intended movement to control robotic systems for rehabilitation therapies. In this case report, we tested the feasibility of estimating finger extensor muscle forces of a stroke survivor based on descending neural drive decoded from population motoneuron discharge timings. Motoneuron discharge events were obtained by decomposing high-density surface electromyogram (sEMG) signals of the finger extensor muscle. The neural drive was extracted from the normalized frequency of the composite discharge of the motoneuron pool. The neural-drive-based estimation was also compared with the classic myoelectric-based estimation. Our results showed that the neural-drive-based approach can better predict the force output, quantified by lower estimation errors and higher correlations with the muscle force, compared with the myoelectric-based estimation. Our findings suggest that the neural-drive-based approach can potentially serve as a more robust interface signal for robotic therapies during stroke rehabilitation.

  6. Reconciling estimates of the contemporary North American carbon balance among terrestrial biosphere models, atmospheric inversions, and a new approach for estimating net ecosystem exchange from inventory-based data

    USGS Publications Warehouse

    Hayes, Daniel J.; Turner, David P.; Stinson, Graham; McGuire, A. David; Wei, Yaxing; West, Tristram O.; Heath, Linda S.; de Jong, Bernardus; McConkey, Brian G.; Birdsey, Richard A.; Kurz, Werner A.; Jacobson, Andrew R.; Huntzinger, Deborah N.; Pan, Yude; Post, W. Mac; Cook, Robert B.

    2012-01-01

    We develop an approach for estimating net ecosystem exchange (NEE) using inventory-based information over North America (NA) for a recent 7-year period (ca. 2000–2006). The approach notably retains information on the spatial distribution of NEE, or the vertical exchange between land and atmosphere of all non-fossil fuel sources and sinks of CO2, while accounting for lateral transfers of forest and crop products as well as their eventual emissions. The total NEE estimate of a -327 ± 252 TgC yr-1 sink for NA was driven primarily by CO2 uptake in the Forest Lands sector (-248 TgC yr-1), largely in the Northwest and Southeast regions of the US, and in the Crop Lands sector (-297 TgC yr-1), predominantly in the Midwest US states. These sinks are counteracted by the carbon source estimated for the Other Lands sector (+218 TgC yr-1), where much of the forest and crop products are assumed to be returned to the atmosphere (through livestock and human consumption). The ecosystems of Mexico are estimated to be a small net source (+18 TgC yr-1) due to land use change between 1993 and 2002. We compare these inventory-based estimates with results from a suite of terrestrial biosphere and atmospheric inversion models, where the mean continental-scale NEE estimate for each ensemble is -511 TgC yr-1 and -931 TgC yr-1, respectively. In the modeling approaches, all sectors, including Other Lands, were generally estimated to be a carbon sink, driven in part by assumed CO2 fertilization and/or lack of consideration of carbon sources from disturbances and product emissions. Additional fluxes not measured by the inventories, although highly uncertain, could add an additional -239 TgC yr-1 to the inventory-based NA sink estimate, thus suggesting some convergence with the modeling approaches.

  7. Effects of lidar pulse density and sample size on a model-assisted approach to estimate forest inventory variables

    Treesearch

    Jacob Strunk; Hailemariam Temesgen; Hans-Erik Andersen; James P. Flewelling; Lisa Madsen

    2012-01-01

    Using lidar in an area-based model-assisted approach to forest inventory has the potential to increase estimation precision for some forest inventory variables. This study documents the bias and precision of a model-assisted (regression estimation) approach to forest inventory with lidar-derived auxiliary variables relative to lidar pulse density and the number of...
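
    For context, the model-assisted regression estimator at the center of such studies predicts the response over all pixels from the lidar auxiliary variable and corrects the model mean with the mean residual on the field plots; a synthetic sketch:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    N, n = 10_000, 200
    lidar = rng.gamma(4.0, 5.0, N)                # auxiliary variable, all cells
    volume = 2.0 + 1.5 * lidar + rng.normal(0, 8, N)  # response, known on plots only

    plots = rng.choice(N, n, replace=False)       # field sample
    b = np.polyfit(lidar[plots], volume[plots], 1)    # working model
    pred = np.polyval(b, lidar)

    # model-assisted (regression) estimator: model mean over all cells,
    # corrected by the mean residual observed on the sampled plots
    y_ma = pred.mean() + (volume[plots] - pred[plots]).mean()
    print(y_ma, volume.mean())
    ```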

  8. Probabilistic segmentation and intensity estimation for microarray images.

    PubMed

    Gottardo, Raphael; Besag, Julian; Stephens, Matthew; Murua, Alejandro

    2006-01-01

    We describe a probabilistic approach to simultaneous image segmentation and intensity estimation for complementary DNA microarray experiments. The approach overcomes several limitations of existing methods. In particular, it (a) uses a flexible Markov random field approach to segmentation that allows for a wider range of spot shapes than existing methods, including relatively common 'doughnut-shaped' spots; (b) models the image directly as background plus hybridization intensity, and estimates the two quantities simultaneously, avoiding the common logical error that estimates of foreground may be less than those of the corresponding background if the two are estimated separately; and (c) uses a probabilistic modeling approach to simultaneously perform segmentation and intensity estimation, and to compute spot quality measures. We describe two approaches to parameter estimation: a fast algorithm, based on the expectation-maximization and the iterated conditional modes algorithms, and a fully Bayesian framework. These approaches produce comparable results, and both appear to offer some advantages over other methods. We use an HIV experiment to compare our approach to two commercial software products: Spot and Arrayvision.

  9. An objective analysis of the dynamic nature of field capacity

    NASA Astrophysics Data System (ADS)

    Twarakavi, Navin K. C.; Sakai, Masaru; Šimůnek, Jirka

    2009-10-01

    Field capacity is one of the most commonly used, and yet poorly defined, soil hydraulic properties. Traditionally, field capacity has been defined as the amount of soil moisture after excess water has drained away and the rate of downward movement has materially decreased. Unfortunately, this qualitative definition does not lend itself to an unambiguous quantitative approach for estimation. Because of the vagueness in defining what constitutes "drainage of excess water" from a soil, the estimation of field capacity has often been based upon empirical guidelines. These empirical guidelines are either time, pressure, or flux based. In this paper, we developed a numerical approach to estimate field capacity using a flux-based definition. The resulting approach was implemented on the soil parameter data set used by Schaap et al. (2001), and the estimated field capacity was compared to traditional definitions of field capacity. The developed modeling approach was implemented using the HYDRUS-1D software with the capability of simultaneously estimating field capacity for multiple soils with soil hydraulic parameter data. The Richards equation was used in conjunction with the van Genuchten-Mualem model to simulate variably saturated flow in a soil. Using the modeling approach to estimate field capacity also resulted in additional information such as (1) the pressure head, at which field capacity is attained, and (2) the drainage time needed to reach field capacity from saturated conditions under nonevaporative conditions. We analyzed the applicability of the modeling-based approach to estimate field capacity on real-world soils data. We also used the developed method to create contour diagrams showing the variation of field capacity with texture. It was found that using benchmark pressure heads to estimate field capacity from the retention curve leads to inaccurate results. Finally, a simple analytical equation was developed to predict field capacity from soil hydraulic parameter information. The analytical equation was found to be effective in its ability to predict field capacities.
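
    A flux-based estimate of field capacity can be sketched directly from the van Genuchten-Mualem functions: find the suction head at which the unit-gradient drainage flux, approximated by K(h), falls to a negligible threshold. The parameter values below are textbook loam values, and the 0.01 cm/day threshold is one common choice rather than the paper's calibrated criterion:

    ```python
    import numpy as np
    from scipy.optimize import brentq

    # van Genuchten-Mualem parameters for a loam (assumed textbook values)
    theta_r, theta_s, alpha, n, Ks = 0.078, 0.43, 0.036, 1.56, 24.96  # cm, day
    m = 1 - 1 / n

    def Se(h):              # effective saturation at suction head h (cm, > 0)
        return (1 + (alpha * h) ** n) ** (-m)

    def K(h):               # unsaturated hydraulic conductivity (cm/day)
        s = Se(h)
        return Ks * np.sqrt(s) * (1 - (1 - s ** (1 / m)) ** m) ** 2

    # flux-based field capacity: water content at which the drainage flux
    # (~K(h) under a unit gradient) drops to a negligible value
    q_neg = 0.01                                   # cm/day threshold (assumed)
    h_fc = brentq(lambda h: K(h) - q_neg, 1.0, 15000.0)
    theta_fc = theta_r + (theta_s - theta_r) * Se(h_fc)
    print(f"h_fc = {h_fc:.0f} cm, theta_fc = {theta_fc:.3f}")
    ```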

  10. Location Estimation of Urban Images Based on Geographical Neighborhoods

    NASA Astrophysics Data System (ADS)

    Huang, Jie; Lo, Sio-Long

    2018-04-01

    Estimating the location of an image is a challenging computer vision problem, and the recent decade has witnessed increasing research efforts towards the solution of this problem. In this paper, we propose a new approach to the location estimation of images taken in urban environments. Experiments are conducted to quantitatively compare the estimation accuracy of our approach, against three representative approaches in the existing literature, using a recently published dataset of over 150 thousand Google Street View images and 259 user uploaded images as queries. According to the experimental results, our approach outperforms three baseline approaches and shows its robustness across different distance thresholds.

  11. The Estimation of Gestational Age at Birth in Database Studies.

    PubMed

    Eberg, Maria; Platt, Robert W; Filion, Kristian B

    2017-11-01

    Studies on the safety of prenatal medication use require valid estimation of the pregnancy duration. However, gestational age is often incompletely recorded in administrative and clinical databases. Our objective was to compare different approaches to estimating the pregnancy duration. Using data from the Clinical Practice Research Datalink and Hospital Episode Statistics, we examined the following four approaches to estimating missing gestational age: (1) generalized estimating equations for longitudinal data; (2) multiple imputation; (3) estimation based on fetal birth weight and sex; and (4) conventional approaches that assigned a fixed value (39 weeks for all or 39 weeks for full term and 35 weeks for preterm). The gestational age recorded in Hospital Episode Statistics was considered the gold standard. We conducted a simulation study comparing the described approaches in terms of estimated bias and mean square error. A total of 25,929 infants from 22,774 mothers were included in our "gold standard" cohort. The smallest average absolute bias was observed for the generalized estimating equation that included birth weight, while the largest absolute bias occurred when assigning 39-week gestation to all those with missing values. The smallest mean square errors were detected with generalized estimating equations while multiple imputation had the highest mean square errors. The use of generalized estimating equations resulted in the most accurate estimation of missing gestational age when birth weight information was available. In the absence of birth weight, assignment of fixed gestational age based on term/preterm status may be the optimal approach.

  12. Nonlinear finite element model updating for damage identification of civil structures using batch Bayesian estimation

    NASA Astrophysics Data System (ADS)

    Ebrahimian, Hamed; Astroza, Rodrigo; Conte, Joel P.; de Callafon, Raymond A.

    2017-02-01

    This paper presents a framework for structural health monitoring (SHM) and damage identification of civil structures. This framework integrates advanced mechanics-based nonlinear finite element (FE) modeling and analysis techniques with a batch Bayesian estimation approach to estimate time-invariant model parameters used in the FE model of the structure of interest. The framework uses input excitation and dynamic response of the structure and updates a nonlinear FE model of the structure to minimize the discrepancies between predicted and measured response time histories. The updated FE model can then be interrogated to detect, localize, classify, and quantify the state of damage and predict the remaining useful life of the structure. As opposed to recursive estimation methods, in the batch Bayesian estimation approach, the entire time history of the input excitation and output response of the structure are used as a batch of data to estimate the FE model parameters through a number of iterations. In the case of non-informative prior, the batch Bayesian method leads to an extended maximum likelihood (ML) estimation method to estimate jointly time-invariant model parameters and the measurement noise amplitude. The extended ML estimation problem is solved efficiently using a gradient-based interior-point optimization algorithm. Gradient-based optimization algorithms require the FE response sensitivities with respect to the model parameters to be identified. The FE response sensitivities are computed accurately and efficiently using the direct differentiation method (DDM). The estimation uncertainties are evaluated based on the Cramer-Rao lower bound (CRLB) theorem by computing the exact Fisher Information matrix using the FE response sensitivities with respect to the model parameters. The accuracy of the proposed uncertainty quantification approach is verified using a sampling approach based on the unscented transformation. Two validation studies, based on realistic structural FE models of a bridge pier and a moment resisting steel frame, are performed to validate the performance and accuracy of the presented nonlinear FE model updating approach and demonstrate its application to SHM. These validation studies show the excellent performance of the proposed framework for SHM and damage identification even in the presence of high measurement noise and/or way-out initial estimates of the model parameters. Furthermore, the detrimental effects of the input measurement noise on the performance of the proposed framework are illustrated and quantified through one of the validation studies.

  13. Variability of pesticide detections and concentrations in field replicate water samples collected for the National Water-Quality Assessment Program, 1992-97

    USGS Publications Warehouse

    Martin, Jeffrey D.

    2002-01-01

Correlation analysis indicates that for most pesticides and concentrations, pooled estimates of relative standard deviation rather than pooled estimates of standard deviation should be used to estimate variability because pooled estimates of relative standard deviation are less affected by heteroscedasticity. The median pooled relative standard deviation was calculated for all pesticides to summarize the typical variability for pesticide data collected for the NAWQA Program. The median pooled relative standard deviation was 15 percent at concentrations less than 0.01 micrograms per liter (µg/L), 13 percent at concentrations near 0.01 µg/L, 12 percent at concentrations near 0.1 µg/L, 7.9 percent at concentrations near 1 µg/L, and 2.7 percent at concentrations greater than 5 µg/L. Pooled estimates of standard deviation or relative standard deviation presented in this report are larger than estimates based on averages, medians, smooths, or regression of the individual measurements of standard deviation or relative standard deviation from field replicates. Pooled estimates, however, are the preferred method for characterizing variability because they provide unbiased estimates of the variability of the population. Assessments of variability based on standard deviation (rather than variance) underestimate the true variability of the population. Because pooled estimates of variability are larger than estimates based on other approaches, users of estimates of variability must be cognizant of the approach used to obtain the estimate and must use caution in the comparison of estimates based on different approaches.
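    Pooling works on variances (or squared RSDs) weighted by degrees of freedom, which is why it stays unbiased where averaging the individual standard deviations does not. A minimal sketch of one common pooled-RSD computation; the report's exact conventions may differ:

    ```python
    import numpy as np

    def pooled_relative_std(replicate_sets):
        """Pooled relative standard deviation (as a fraction) from field
        replicates: pool the squared RSDs weighted by degrees of freedom.
        `replicate_sets` is a list of arrays of replicate concentrations."""
        weighted_sq, dof = 0.0, 0
        for reps in replicate_sets:
            reps = np.asarray(reps, dtype=float)
            m = reps.mean()
            if reps.size > 1 and m > 0:
                rsd = reps.std(ddof=1) / m
                weighted_sq += (reps.size - 1) * rsd**2
                dof += reps.size - 1
        return np.sqrt(weighted_sq / dof)

    # e.g. two replicate pairs near 0.01 ug/L:
    # pooled_relative_std([[0.010, 0.012], [0.009, 0.011]]) -> ~0.135 (13.5%)
    ```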

  14. Estimating brain connectivity when few data points are available: Perspectives and limitations.

    PubMed

    Antonacci, Yuri; Toppi, Jlenia; Caschera, Stefano; Anzolin, Alessandra; Mattia, Donatella; Astolfi, Laura

    2017-07-01

Methods based on the use of multivariate autoregressive modeling (MVAR) have proved to be an accurate and flexible tool for the estimation of brain functional connectivity. The multivariate approach, however, implies the use of a model whose complexity (in terms of number of parameters) increases quadratically with the number of signals included in the problem. This can often lead to an underdetermined problem and to the condition of multicollinearity. The aim of this paper is to introduce and test an approach based on Ridge Regression, combined with a modified version of the statistics usually adopted for these methods, to broaden the estimation of brain connectivity to those conditions in which current methods fail due to the lack of enough data points. We tested the performance of this new approach, in comparison with the classical approach based on ordinary least squares (OLS), by means of a simulation study implementing different ground-truth networks, under different network sizes and different levels of data points. Simulation results showed that the new approach provides better performance, in terms of accuracy of the parameter estimates and false-positive/false-negative rates, in all conditions involving a low ratio of data points to model dimension, and may thus be exploited to estimate and validate connectivity patterns at the single-trial level or when only short data segments are available.
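    Since the MVAR coefficients are fit channel-by-channel as penalized least squares, the ridge variant is a one-line change to the normal equations. A minimal sketch of a generic ridge-MVAR fit (not the paper's modified statistics or validation procedure); `x` is the multichannel recording and `lam` the penalty:

    ```python
    import numpy as np

    def mvar_ridge(x, p, lam):
        """Fit an order-p MVAR model X_t = sum_k A_k X_{t-k} + e_t with a
        ridge (L2) penalty, which keeps the regression solvable when the
        parameter count (p * c * c) approaches the number of data points.
        x: (n_samples, n_channels); returns (p, c, c) with A_k[target, source]."""
        n, c = x.shape
        # Lagged design matrix: row t holds [x_{t-1}, ..., x_{t-p}].
        Z = np.hstack([x[p - k:n - k] for k in range(1, p + 1)])   # (n-p, p*c)
        Y = x[p:]                                                  # (n-p, c)
        A = np.linalg.solve(Z.T @ Z + lam * np.eye(p * c), Z.T @ Y)
        return A.T.reshape(c, p, c).transpose(1, 0, 2)
    ```

    Setting `lam` to 0 recovers the OLS estimator the paper compares against; the penalty matters precisely when the number of usable samples is small relative to p times the squared channel count.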

  15. Heading Estimation for Pedestrian Dead Reckoning Based on Robust Adaptive Kalman Filtering.

    PubMed

    Wu, Dongjin; Xia, Linyuan; Geng, Jijun

    2018-06-19

Pedestrian dead reckoning (PDR) using smartphone-embedded micro-electro-mechanical system (MEMS) sensors plays a key role in ubiquitous localization indoors and outdoors. However, as a relative localization method, it suffers from error accumulation, which prevents long-term independent operation. Heading estimation error is one of the main sources of location error; therefore, in order to improve the location tracking performance of the PDR method in complex environments, an approach based on robust adaptive Kalman filtering (RAKF) for estimating accurate headings is proposed. In our approach, outputs from the gyroscope, accelerometer, and magnetometer are fused in a Kalman filtering (KF) scheme in which heading measurements derived from accelerations and magnetic field data correct the states integrated from angular rates. In order to identify and control measurement outliers, a maximum likelihood-type estimator (M-estimator)-based model is used. Moreover, an adaptive factor is applied to resist the negative effects of state model disturbances. Extensive experiments under static and dynamic conditions were conducted in indoor environments. The experimental results demonstrate that the proposed approach provides more accurate heading estimates and supports more robust and adaptive location tracking than methods based on the conventional KF.
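    The gyro-predict/magnetometer-correct loop with M-estimator reweighting can be illustrated in one dimension. A minimal sketch, assuming a single heading state and a Huber-style inflation of the measurement variance for outlying innovations; the paper's RAKF additionally adapts the process model:

    ```python
    import numpy as np

    def robust_heading_filter(gyro_rates, mag_headings, dt, q=1e-4, r=0.05, k=1.345):
        """1-D heading Kalman filter: gyro integration as the process,
        accelerometer/magnetometer heading as the measurement, with a
        Huber-style reweighting of outlying measurements (all tuning
        values here are illustrative)."""
        theta, p = float(mag_headings[0]), 1.0
        est = []
        for w, z in zip(gyro_rates, mag_headings):
            theta += w * dt                              # predict: integrate rate
            p += q
            innov = np.angle(np.exp(1j * (z - theta)))   # innovation wrapped to [-pi, pi]
            s = np.sqrt(p + r)
            # Inflate the measurement variance when the innovation is an outlier.
            r_eff = r if abs(innov) <= k * s else r * abs(innov) / (k * s)
            gain = p / (p + r_eff)
            theta += gain * innov
            p *= 1.0 - gain
            est.append(theta)
        return np.array(est)
    ```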

  16. NMR permeability estimators in 'chalk' carbonate rocks obtained under different relaxation times and MICP size scalings

    NASA Astrophysics Data System (ADS)

    Rios, Edmilson Helton; Figueiredo, Irineu; Moss, Adam Keith; Pritchard, Timothy Neil; Glassborow, Brent Anthony; Guedes Domingues, Ana Beatriz; Bagueira de Vasconcellos Azeredo, Rodrigo

    2016-07-01

    The effect of the selection of different nuclear magnetic resonance (NMR) relaxation times for permeability estimation is investigated for a set of fully brine-saturated rocks acquired from Cretaceous carbonate reservoirs in the North Sea and Middle East. Estimators that are obtained from the relaxation times based on the Pythagorean means are compared with estimators that are obtained from the relaxation times based on the concept of a cumulative saturation cut-off. Select portions of the longitudinal (T1) and transverse (T2) relaxation-time distributions are systematically evaluated by applying various cut-offs, analogous to the Winland-Pittman approach for mercury injection capillary pressure (MICP) curves. Finally, different approaches to matching the NMR and MICP distributions using different mean-based scaling factors are validated based on the performance of the related size-scaled estimators. The good results that were obtained demonstrate possible alternatives to the commonly adopted logarithmic mean estimator and reinforce the importance of NMR-MICP integration to improving carbonate permeability estimates.

  17. Kalman filter-based EM-optical sensor fusion for needle deflection estimation.

    PubMed

    Jiang, Baichuan; Gao, Wenpeng; Kacher, Daniel; Nevo, Erez; Fetics, Barry; Lee, Thomas C; Jayender, Jagadeesan

    2018-04-01

In many clinical procedures, such as cryoablation, that involve needle insertion, accurate placement of the needle's tip at the desired target is the major issue for optimizing the treatment and minimizing damage to the neighboring anatomy. However, due to the interaction force between the needle and tissue, considerable error in intraoperative tracking of the needle tip can be observed as the needle deflects. In this paper, measurement data from an optical sensor at the needle base and a magnetic resonance (MR) gradient field-driven electromagnetic (EM) sensor placed 10 cm from the needle tip are used within a model-integrated Kalman filter-based sensor fusion scheme. Bending model-based estimates and the EM-based direct estimate are used as the measurement vectors in the Kalman filter, thus establishing an online estimation approach. Static tip-bending experiments show that the fusion method can reduce the mean error of the tip position estimate from 29.23 mm with the optical sensor-based approach to 3.15 mm at the MRI isocenter, and from 39.96 mm to 6.90 mm at the MRI entrance. This work established a novel sensor fusion scheme that incorporates model information, enabling real-time, MRI-compatible tracking of needle deflection in a free-hand operating setup.

  18. Pilot estimates of glidepath and aim point during simulated landing approaches

    NASA Technical Reports Server (NTRS)

    Acree, C. W., Jr.

    1981-01-01

    Pilot perceptions of glidepath angle and aim point were measured during simulated landings. A fixed-base cockpit simulator was used with video recordings of simulated landing approaches shown on a video projector. Pilots estimated the magnitudes of approach errors during observation without attempting to make corrections. Pilots estimated glidepath angular errors well, but had difficulty estimating aim-point errors. The data make plausible the hypothesis that pilots are little concerned with aim point during most of an approach, concentrating instead on keeping close to the nominal glidepath and trusting this technique to guide them to the proper touchdown point.

  19. Robust Estimation of Electron Density From Anatomic Magnetic Resonance Imaging of the Brain Using a Unifying Multi-Atlas Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ren, Shangjie; Department of Radiation Oncology, Stanford University School of Medicine, Palo Alto, California; Hara, Wendy

Purpose: To develop a reliable method to estimate electron density based on anatomic magnetic resonance imaging (MRI) of the brain. Methods and Materials: We proposed a unifying multi-atlas approach for electron density estimation based on standard T1- and T2-weighted MRI. First, a composite atlas was constructed through a voxelwise matching process using multiple atlases, with the goal of mitigating effects of inherent anatomic variations between patients. Next we computed for each voxel 2 kinds of conditional probabilities: (1) electron density given its image intensity on T1- and T2-weighted MR images; and (2) electron density given its spatial location in a reference anatomy, obtained by deformable image registration. These were combined into a unifying posterior probability density function using the Bayesian formalism, which provided the optimal estimates for electron density. We evaluated the method on 10 patients using leave-one-patient-out cross-validation. Receiver operating characteristic analyses for detecting different tissue types were performed. Results: The proposed method significantly reduced the errors in electron density estimation, with a mean absolute Hounsfield unit error of 119, compared with 140 and 144 (P<.0001) using conventional T1-weighted intensity and geometry-based approaches, respectively. For detection of bony anatomy, the proposed method achieved an 89% area under the curve, 86% sensitivity, 88% specificity, and 90% accuracy, which improved upon intensity and geometry-based approaches (area under the curve: 79% and 80%, respectively). Conclusion: The proposed multi-atlas approach provides robust electron density estimation and bone detection based on anatomic MRI. If validated on a larger population, our work could enable the use of MRI as a primary modality for radiation treatment planning.
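    The voxelwise combination step is essentially a product of two conditional distributions followed by normalization. A minimal sketch of that fusion over a discretized set of candidate electron densities; the atlas construction and registration that produce the two probability tables are assumed to exist upstream:

    ```python
    import numpy as np

    def fuse_density_estimates(p_intensity, p_location, density_bins):
        """Combine two per-voxel conditional PMFs over candidate electron
        densities into a single posterior (product rule, then normalize),
        and return the posterior-mean density per voxel.

        p_intensity, p_location: (n_voxels, n_bins) probability tables;
        density_bins: (n_bins,) candidate electron-density values."""
        post = p_intensity * p_location
        post /= post.sum(axis=1, keepdims=True)   # normalize each voxel's PMF
        return post @ density_bins                 # posterior mean per voxel
    ```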

  20. Using SEM to Analyze Complex Survey Data: A Comparison between Design-Based Single-Level and Model-Based Multilevel Approaches

    ERIC Educational Resources Information Center

    Wu, Jiun-Yu; Kwok, Oi-man

    2012-01-01

    Both ad-hoc robust sandwich standard error estimators (design-based approach) and multilevel analysis (model-based approach) are commonly used for analyzing complex survey data with nonindependent observations. Although these 2 approaches perform equally well on analyzing complex survey data with equal between- and within-level model structures…

  1. Predicting fundamental and realized distributions based on thermal niche: A case study of a freshwater turtle

    NASA Astrophysics Data System (ADS)

    Rodrigues, João Fabrício Mota; Coelho, Marco Túlio Pacheco; Ribeiro, Bruno R.

    2018-04-01

Species distribution models (SDMs) have been broadly used in ecology to address theoretical and practical problems. Currently, there are two main approaches to generating SDMs: (i) correlative models, which are based on species occurrences and environmental predictor layers, and (ii) process-based models, which are constructed from species' functional traits and physiological tolerances. The distributions estimated by each approach are based on different components of the species' niche: predictions of correlative models approximate the realized niche, while predictions of process-based models are more akin to the fundamental niche. Here, we integrated predictions of the fundamental and realized distributions of the freshwater turtle Trachemys dorbigni. The fundamental distribution was estimated using data on T. dorbigni's egg incubation temperature, and the realized distribution was estimated using species occurrence records. Both types of distributions were estimated using the same regression approaches (logistic regression and support vector machines), each considering macroclimatic and microclimatic temperatures. The realized distribution of T. dorbigni was generally nested in its fundamental distribution, reinforcing the theoretical assumption that a species' realized niche is a subset of its fundamental niche. Both modelling algorithms produced similar results, but microtemperature generated better results than macrotemperature for the incubation model. Finally, our results reinforce the conclusion that species' realized distributions are constrained by factors other than thermal tolerance alone.
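    Both regression approaches named above are off-the-shelf classifiers once presence/background data and temperature predictors are assembled. A minimal sketch using scikit-learn; the array names are hypothetical, and the realized-vs-fundamental contrast comes entirely from what X and y encode (occurrence records vs. incubation-temperature tolerance):

    ```python
    from sklearn.linear_model import LogisticRegression
    from sklearn.svm import SVC

    def fit_sdms(X, y):
        """Fit the two regression approaches used in the study: logistic
        regression and a support vector machine with probability outputs.
        X: temperature predictors at sample points; y: 1 = presence/suitable,
        0 = background/unsuitable."""
        logit = LogisticRegression(max_iter=1000).fit(X, y)
        svm = SVC(probability=True).fit(X, y)
        return logit, svm

    # Suitability maps: model.predict_proba(grid_X)[:, 1] for each model.
    ```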

  2. 3D pose estimation and motion analysis of the articulated human hand-forearm limb in an industrial production environment

    NASA Astrophysics Data System (ADS)

    Hahn, Markus; Barrois, Björn; Krüger, Lars; Wöhler, Christian; Sagerer, Gerhard; Kummert, Franz

    2010-09-01

This study introduces an approach to model-based 3D pose estimation and instantaneous motion analysis of the human hand-forearm limb in the application context of safe human-robot interaction. 3D pose estimation is performed using two approaches: The Multiocular Contracting Curve Density (MOCCD) algorithm is a top-down technique based on pixel statistics around a contour model projected into the images from several cameras. The Iterative Closest Point (ICP) algorithm is a bottom-up approach which uses a motion-attributed 3D point cloud to estimate the object pose. Due to their orthogonal properties, a fusion of these algorithms is shown to be favorable. The fusion is performed by a weighted combination of the extracted pose parameters in an iterative manner. The analysis of object motion is based on the pose estimation result and the motion-attributed 3D points belonging to the hand-forearm limb using an extended constraint-line approach which does not rely on any temporal filtering. A further refinement is obtained using the Shape Flow algorithm, a temporal extension of the MOCCD approach, which estimates the temporal pose derivative based on the current and the two preceding images, corresponding to temporal filtering with a short response time of two or at most three frames. Combining the results of the two motion estimation stages provides information about the instantaneous motion properties of the object. Experimental investigations are performed on real-world image sequences displaying several test persons performing different working actions typically occurring in an industrial production scenario. In all example scenes, the background is cluttered, and the test persons wear various kinds of clothes. For evaluation, independently obtained ground truth data are used.

  3. An adaptive Gaussian process-based method for efficient Bayesian experimental design in groundwater contaminant source identification problems: ADAPTIVE GAUSSIAN PROCESS-BASED INVERSION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Jiangjiang; Li, Weixuan; Zeng, Lingzao

Surrogate models are commonly used in Bayesian approaches such as Markov Chain Monte Carlo (MCMC) to avoid repetitive CPU-demanding model evaluations. However, the approximation error of a surrogate may lead to biased estimations of the posterior distribution. This bias can be corrected by constructing a very accurate surrogate or implementing MCMC in a two-stage manner. Since the two-stage MCMC requires extra original model evaluations, the computational cost is still high. If the information of measurement is incorporated, a locally accurate approximation of the original model can be adaptively constructed with low computational cost. Based on this idea, we propose a Gaussian process (GP) surrogate-based Bayesian experimental design and parameter estimation approach for groundwater contaminant source identification problems. A major advantage of the GP surrogate is that it provides a convenient estimation of the approximation error, which can be incorporated in the Bayesian formula to avoid over-confident estimation of the posterior distribution. The proposed approach is tested with a numerical case study. Without sacrificing the estimation accuracy, the new approach achieves about a 200-fold speed-up compared to our previous work using two-stage MCMC.
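    The key trick described above is that the GP's own predictive variance enters the likelihood alongside the measurement noise, so the posterior stays honest where the surrogate is inaccurate. A minimal sketch of that error-aware likelihood for a scalar observable, assuming a fitted single-output scikit-learn GaussianProcessRegressor:

    ```python
    import numpy as np

    def surrogate_log_likelihood(gp, theta, d_obs, sigma_obs):
        """Gaussian log-likelihood evaluated through a GP surrogate of the
        forward model, with the GP predictive variance added to the
        measurement variance so the posterior is not over-confident.
        gp: fitted sklearn GaussianProcessRegressor (an assumption);
        theta: parameter vector; d_obs: scalar observation."""
        mu, std = gp.predict(np.atleast_2d(theta), return_std=True)
        var = sigma_obs**2 + std[0]**2
        return -0.5 * (np.log(2.0 * np.pi * var) + (d_obs - mu[0])**2 / var)
    ```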

  4. Small-mammal density estimation: A field comparison of grid-based vs. web-based density estimators

    USGS Publications Warehouse

    Parmenter, R.R.; Yates, Terry L.; Anderson, D.R.; Burnham, K.P.; Dunnum, J.L.; Franklin, A.B.; Friggens, M.T.; Lubow, B.C.; Miller, M.; Olson, G.S.; Parmenter, Cheryl A.; Pollard, J.; Rexstad, E.; Shenk, T.M.; Stanley, T.R.; White, Gary C.

    2003-01-01

Statistical models for estimating absolute densities of field populations of animals have been widely used over the last century in both scientific studies and wildlife management programs. To date, two general classes of density estimation models have been developed: models that use data sets from capture–recapture or removal sampling techniques (often derived from trapping grids) from which separate estimates of population size (N̂) and effective sampling area (Â) are used to calculate density (D̂ = N̂/Â); and models applicable to sampling regimes using distance-sampling theory (typically transect lines or trapping webs) to estimate detection functions and densities directly from the distance data. However, few studies have evaluated these respective models for accuracy, precision, and bias on known field populations, and no studies have been conducted that compare the two approaches under controlled field conditions. In this study, we evaluated both classes of density estimators on known densities of enclosed rodent populations. Test data sets (n = 11) were developed using nine rodent species from capture–recapture live-trapping on both trapping grids and trapping webs in four replicate 4.2-ha enclosures on the Sevilleta National Wildlife Refuge in central New Mexico, USA. Additional “saturation” trapping efforts resulted in an enumeration of the rodent populations in each enclosure, allowing the computation of true densities. Density estimates (D̂) were calculated using program CAPTURE for the grid data sets and program DISTANCE for the web data sets, and these results were compared to the known true densities (D) to evaluate each model's relative mean square error, accuracy, precision, and bias. In addition, we evaluated a variety of approaches to each data set's analysis by having a group of independent expert analysts calculate their best density estimates without a priori knowledge of the true densities; this “blind” test allowed us to evaluate the influence of expertise and experience in calculating density estimates in comparison to simply using default values in programs CAPTURE and DISTANCE. While the rodent sample sizes were considerably smaller than the recommended minimum for good model results, we found that several models performed well empirically, including the web-based uniform and half-normal models in program DISTANCE, and the grid-based models Mb and Mbh in program CAPTURE (with Â adjusted by species-specific full mean maximum distance moved (MMDM) values). These models produced accurate D̂ values (with 95% confidence intervals that included the true D values) and exhibited acceptable bias but poor precision. However, in linear regression analyses comparing each model's D̂ values to the true D values over the range of observed test densities, only the web-based uniform model exhibited a regression slope near 1.0; all other models showed substantial slope deviations, indicating biased estimates at higher or lower density values. In addition, the grid-based D̂ analyses using full MMDM values for Ŵ area adjustments required a number of theoretical assumptions of uncertain validity, and we therefore viewed their empirical successes with caution. Finally, density estimates from the independent analysts were highly variable, but estimates from web-based approaches had smaller mean square errors and better achieved confidence-interval coverage of D than did grid-based approaches. 
Our results support the contention that web-based approaches for density estimation of small-mammal populations are both theoretically and empirically superior to grid-based approaches, even when sample size is far less than often recommended. In view of the increasing need for standardized environmental measures for comparisons among ecosystems and through time, analytical models based on distance sampling appear to offer accurate density estimation approaches for research studies involving small-mammal abundances.
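    For the grid-based estimators, the density arithmetic itself is simple once N̂ and the effective area are in hand; the contested part is how MMDM widens the area. A minimal sketch of one conventional buffered-grid construction (square buffer, corner rounding ignored), not the exact adjustment used in the study:

    ```python
    def grid_density_per_ha(n_hat, grid_side_m, mmdm_m):
        """Grid-based density D = N / A, with the effective sampling area
        taken as the trapping grid buffered on all sides by the full mean
        maximum distance moved (one conventional boundary-strip choice)."""
        side_m = grid_side_m + 2.0 * mmdm_m
        return n_hat / (side_m**2 / 1e4)   # animals per hectare
    ```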

  5. New, national bottom-up estimate for tree-based biological ...

    EPA Pesticide Factsheets

Nitrogen is a limiting nutrient in many ecosystems, but is also a chief pollutant from human activity. Quantifying human impacts on the nitrogen cycle and investigating natural ecosystem nitrogen cycling both require an understanding of the magnitude of nitrogen inputs from biological nitrogen fixation (BNF). A bottom-up approach to estimating BNF—scaling rates up from measurements to broader scales—is attractive because it is rooted in actual BNF measurements. However, bottom-up approaches have been hindered by scaling difficulties, and a recent top-down approach suggested that the previous bottom-up estimate was much too large. Here, we used a bottom-up approach for tree-based BNF, overcoming scaling difficulties with the systematic, immense (>70,000 N-fixing trees) Forest Inventory and Analysis (FIA) database. We employed two approaches to estimate species-specific BNF rates: published ecosystem-scale rates (kg N ha⁻¹ yr⁻¹) and published estimates of the percent of N derived from the atmosphere (%Ndfa) combined with FIA-derived growth rates. Species-specific rates can vary for a variety of reasons, so for each approach we examined how different assumptions influenced our results. Specifically, we allowed BNF rates to vary with stand age, N-fixer density, and canopy position (since N-fixation is known to require substantial light). Our estimates from this bottom-up technique are several orders of magnitude lower than previous estimates indicating

  6. An Integrated Approach for Aircraft Engine Performance Estimation and Fault Diagnostics

    NASA Technical Reports Server (NTRS)

Simon, Donald L.; Armstrong, Jeffrey B.

    2012-01-01

    A Kalman filter-based approach for integrated on-line aircraft engine performance estimation and gas path fault diagnostics is presented. This technique is specifically designed for underdetermined estimation problems where there are more unknown system parameters representing deterioration and faults than available sensor measurements. A previously developed methodology is applied to optimally design a Kalman filter to estimate a vector of tuning parameters, appropriately sized to enable estimation. The estimated tuning parameters can then be transformed into a larger vector of health parameters representing system performance deterioration and fault effects. The results of this study show that basing fault isolation decisions solely on the estimated health parameter vector does not provide ideal results. Furthermore, expanding the number of the health parameters to address additional gas path faults causes a decrease in the estimation accuracy of those health parameters representative of turbomachinery performance deterioration. However, improved fault isolation performance is demonstrated through direct analysis of the estimated tuning parameters produced by the Kalman filter. This was found to provide equivalent or superior accuracy compared to the conventional fault isolation approach based on the analysis of sensed engine outputs, while simplifying online implementation requirements. Results from the application of these techniques to an aircraft engine simulation are presented and discussed.

  7. PoMo: An Allele Frequency-Based Approach for Species Tree Estimation

    PubMed Central

    De Maio, Nicola; Schrempf, Dominik; Kosiol, Carolin

    2015-01-01

    Incomplete lineage sorting can cause incongruencies of the overall species-level phylogenetic tree with the phylogenetic trees for individual genes or genomic segments. If these incongruencies are not accounted for, it is possible to incur several biases in species tree estimation. Here, we present a simple maximum likelihood approach that accounts for ancestral variation and incomplete lineage sorting. We use a POlymorphisms-aware phylogenetic MOdel (PoMo) that we have recently shown to efficiently estimate mutation rates and fixation biases from within and between-species variation data. We extend this model to perform efficient estimation of species trees. We test the performance of PoMo in several different scenarios of incomplete lineage sorting using simulations and compare it with existing methods both in accuracy and computational speed. In contrast to other approaches, our model does not use coalescent theory but is allele frequency based. We show that PoMo is well suited for genome-wide species tree estimation and that on such data it is more accurate than previous approaches. PMID:26209413

  8. Extended reactance domain algorithms for DoA estimation onto an ESPAR antennas

    NASA Astrophysics Data System (ADS)

    Harabi, F.; Akkar, S.; Gharsallah, A.

    2016-07-01

Based on an extended reactance-domain (RD) covariance matrix, this article proposes new alternatives for direction-of-arrival (DoA) estimation of narrowband sources with an electronically steerable parasitic array radiator (ESPAR) antenna. Because of the centro-symmetry of the classic ESPAR antenna, a unitary transformation is applied to the collected data, which allows an important reduction in both computational cost and processing time as well as an enhancement of the resolution capabilities of the proposed algorithms. Moreover, this article proposes a new approach to eigenvalue estimation that requires only a few linear operations. The DoA estimation algorithms based on this new approach show good behaviour with lower computational cost and processing time compared to other schemes based on the classic eigenvalue approach. The conducted simulations demonstrate that high-precision and high-resolution DoA estimation can be achieved, especially for closely spaced sources and low source power, compared to the RD-MUSIC and RD-PM algorithms. The asymptotic behaviour of the proposed DoA estimators is analysed in various scenarios and compared with the Cramér-Rao bound (CRB). The simulations confirm the high resolution of the developed algorithms and the efficiency of the proposed approach.
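    For orientation, the subspace machinery these RD algorithms build on can be sketched with a generic MUSIC pseudospectrum; the RD variants change how the covariance matrix and steering vectors are formed from the single-port ESPAR output, and the paper's contribution replaces the full eigendecomposition below with cheaper linear operations. A minimal sketch with a hypothetical `steering` function:

    ```python
    import numpy as np

    def music_pseudospectrum(R, steering, n_sources):
        """Classic eigendecomposition-based MUSIC pseudospectrum from a
        covariance matrix R; peaks over the angle grid are the DoA
        estimates. `steering(deg)` must return the complex array
        response vector for that direction."""
        w, v = np.linalg.eigh(R)                   # eigenvalues ascending
        En = v[:, :R.shape[0] - n_sources]         # noise subspace
        grid = np.arange(-90.0, 90.5, 0.5)
        P = np.array([1.0 / np.real(steering(d).conj() @ (En @ En.conj().T) @ steering(d))
                      for d in grid])
        return grid, P
    ```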

  9. Three-dimensional dominant frequency mapping using autoregressive spectral analysis of atrial electrograms of patients in persistent atrial fibrillation.

    PubMed

    Salinet, João L; Masca, Nicholas; Stafford, Peter J; Ng, G André; Schlindwein, Fernando S

    2016-03-08

Areas with high-frequency activity within the atrium are thought to be 'drivers' of the rhythm in patients with atrial fibrillation (AF), and ablation of these areas seems to be an effective therapy for eliminating the DF gradient and restoring sinus rhythm. Clinical groups have applied the traditional FFT-based approach to generate three-dimensional dominant frequency (3D DF) maps during electrophysiology (EP) procedures, but the literature on alternative spectral estimation techniques, which can have better frequency resolution than FFT-based spectral estimation, is limited. Autoregressive (AR) model-based spectral estimation techniques, with emphasis on selection of an appropriate sampling rate and AR model order, were implemented to generate high-density 3D DF maps of atrial electrograms (AEGs) in persistent atrial fibrillation (persAF). For each patient, 2048 simultaneous AEGs were recorded for 20.478-s-long segments in the left atrium (LA) and exported for analysis, together with their anatomical locations. After the DFs were identified using AR-based spectral estimation, they were colour coded to produce sequential 3D DF maps. These maps were systematically compared with maps found using the Fourier-based approach. 3D DF maps can be obtained using AR-based spectral estimation after AEG downsampling (DS), and the resulting maps are very similar to those obtained using FFT-based spectral estimation (mean 90.23%). There were no significant differences between AR techniques (p = 0.62). The processing time for the AR-based approach was considerably shorter (from 5.44 to 5.05 s) when lower sampling frequencies and model order values were used. Higher levels of DS presented higher rates of DF agreement (sampling frequency of 37.5 Hz). We have demonstrated the feasibility of using AR spectral estimation methods for producing 3D DF maps and characterised their differences from maps produced using the FFT technique, offering an alternative approach for 3D DF computation in human persAF studies.
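    The AR route to a dominant frequency replaces the periodogram with a parametric spectrum fitted via the Yule-Walker equations. A minimal sketch for one electrogram channel; the sampling-rate and model-order selection the paper emphasizes are left to the caller:

    ```python
    import numpy as np

    def ar_dominant_frequency(x, order, fs, nfreq=512):
        """Dominant frequency of one signal via Yule-Walker AR spectral
        estimation: fit AR coefficients from the autocorrelation, evaluate
        the AR power spectrum, and return the frequency of its peak."""
        x = np.asarray(x, float) - np.mean(x)
        r = np.correlate(x, x, "full")[x.size - 1:] / x.size   # autocorrelation, lags 0..n-1
        R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
        a = np.linalg.solve(R, r[1:order + 1])                 # AR coefficients
        sigma2 = r[0] - a @ r[1:order + 1]                     # innovation variance
        f = np.linspace(0.0, fs / 2.0, nfreq)
        z = np.exp(-2j * np.pi * np.outer(f, np.arange(1, order + 1)) / fs)
        psd = sigma2 / np.abs(1.0 - z @ a) ** 2                # AR power spectrum
        return f[np.argmax(psd)]
    ```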

  10. A Bayesian approach to multisource forest area estimation

    Treesearch

    Andrew O. Finley

    2007-01-01

    In efforts such as land use change monitoring, carbon budgeting, and forecasting ecological conditions and timber supply, demand is increasing for regional and national data layers depicting forest cover. These data layers must permit small area estimates of forest and, most importantly, provide associated error estimates. This paper presents a model-based approach for...

  11. Small area estimation of proportions with different levels of auxiliary data.

    PubMed

    Chandra, Hukum; Kumar, Sushil; Aditya, Kaustav

    2018-03-01

Binary data are often of interest in many small area estimation applications. The use of standard small area estimation methods based on linear mixed models becomes problematic for such data. An empirical plug-in predictor (EPP) under a unit-level generalized linear mixed model with logit link function is often used for the estimation of a small area proportion. However, this EPP requires the availability of unit-level population information for auxiliary data that may not always be accessible. As a consequence, in many practical situations this EPP approach cannot be applied. Based on the level of auxiliary information available, different small area predictors for the estimation of proportions are proposed. Analytic and bootstrap approaches to estimating the mean squared error of the proposed small area predictors are also developed. Monte Carlo simulations based on both simulated and real data show that the proposed small area predictors work well for generating small area estimates of proportions and represent a practical alternative to the above approach. The developed predictor is applied to generate estimates of the proportions of indebted farm households at the district level using debt investment survey data from India. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. A comparison between conventional and LANDSAT based hydrologic modeling: The Four Mile Run case study

    NASA Technical Reports Server (NTRS)

    Ragan, R. M.; Jackson, T. J.; Fitch, W. N.; Shubinski, R. P.

    1976-01-01

Models designed to support the hydrologic studies associated with urban water resources planning require input parameters that are defined in terms of land cover. Estimating the land cover is a difficult and expensive task when drainage areas larger than a few sq. km are involved. Conventional and LANDSAT-based methods for estimating the land-cover-based input parameters required by hydrologic planning models were compared in a case study of the 50.5 sq. km (19.5 sq. mi) Four Mile Run Watershed in Virginia. Results of the study indicate that the LANDSAT-based approach is highly cost effective for planning model studies. The conventional approach to defining inputs was based on 1:3600 aerial photos and required 110 man-days at a total cost of $14,000. The LANDSAT-based approach required 6.9 man-days and cost $2,350. The conventional and LANDSAT-based models gave similar results for discharges and for the estimated annual damages expected under the no-flood-control, channelization, and detention storage alternatives.

  13. Minimum area requirements for an at-risk butterfly based on movement and demography.

    PubMed

    Brown, Leone M; Crone, Elizabeth E

    2016-02-01

    Determining the minimum area required to sustain populations has a long history in theoretical and conservation biology. Correlative approaches are often used to estimate minimum area requirements (MARs) based on relationships between area and the population size required for persistence or between species' traits and distribution patterns across landscapes. Mechanistic approaches to estimating MAR facilitate prediction across space and time but are few. We used a mechanistic MAR model to determine the critical minimum patch size (CMP) for the Baltimore checkerspot butterfly (Euphydryas phaeton), a locally abundant species in decline along its southern range, and sister to several federally listed species. Our CMP is based on principles of diffusion, where individuals in smaller patches encounter edges and leave with higher probability than those in larger patches, potentially before reproducing. We estimated a CMP for the Baltimore checkerspot of 0.7-1.5 ha, in accordance with trait-based MAR estimates. The diffusion rate on which we based this CMP was broadly similar when estimated at the landscape scale (comparing flight path vs. capture-mark-recapture data), and the estimated population growth rate was consistent with observed site trends. Our mechanistic approach to estimating MAR is appropriate for species whose movement follows a correlated random walk and may be useful where landscape-scale distributions are difficult to assess, but demographic and movement data are obtainable from a single site or the literature. Just as simple estimates of lambda are often used to assess population viability, the principles of diffusion and CMP could provide a starting place for estimating MAR for conservation. © 2015 Society for Conservation Biology.
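    As a point of reference for how a mechanistic MAR emerges from movement and growth, classical diffusion theory gives a critical patch size below which emigration across the edge outpaces reproduction. A worked sketch of that textbook one-dimensional scaling; the paper's CMP model is more detailed than this:

    ```python
    import numpy as np

    def critical_patch_length(diffusion, growth_rate):
        """Classical critical patch size L* = pi * sqrt(D / r) for 1-D
        diffusion with hostile exterior (Kierstead-Slobodkin/Skellam):
        populations in patches shorter than L* lose individuals across
        the boundary faster than reproduction replaces them.
        diffusion: D in m^2/day; growth_rate: r in 1/day; returns meters."""
        return np.pi * np.sqrt(diffusion / growth_rate)
    ```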

  15. Approaches for the direct estimation of lambda, and demographic contributions to lambda, using capture-recapture data

    USGS Publications Warehouse

    Nichols, J.D.; Hines, J.E.

    2002-01-01

We first consider the estimation of the finite rate of population increase, or population growth rate, λ_i, using capture-recapture data from open populations. We review estimation and modelling of λ_i under three main approaches to modelling open-population data: the classic approach of Jolly (1965) and Seber (1965), the superpopulation approach of Crosbie & Manly (1985) and Schwarz & Arnason (1996), and the temporal symmetry approach of Pradel (1996). Next, we consider the contributions of different demographic components to λ_i using a probabilistic approach based on the composition of the population at time i + 1 (Nichols et al., 2000b). The parameters of interest are identical to the seniority parameters, γ_i, of Pradel (1996). We review estimation of γ_i under the classic, superpopulation, and temporal symmetry approaches. We then compare these direct estimation approaches for λ_i and γ_i with analogues computed using projection matrix asymptotics. We also discuss various extensions of the estimation approaches to multistate applications and to joint likelihoods involving multiple data types.
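    Under Pradel's temporal-symmetry parameterization, growth, survival, and seniority are linked by a one-line identity, which is what makes direct estimation of λ_i possible from the same capture-recapture likelihood. A worked sketch of that relation:

    ```python
    def lambda_from_pradel(phi_i, gamma_next):
        """Temporal-symmetry identity lambda_i = phi_i / gamma_{i+1}:
        population growth follows from survival (phi_i) and the seniority
        parameter at the next occasion (gamma_{i+1})."""
        return phi_i / gamma_next

    # e.g. phi_i = 0.8, gamma_{i+1} = 0.9  ->  lambda_i ~ 0.89 (declining)
    ```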

  16. Estimating Function Approaches for Spatial Point Processes

    NASA Astrophysics Data System (ADS)

    Deng, Chong

Spatial point pattern data consist of locations of events that are often of interest in biological and ecological studies. Such data are commonly viewed as a realization of a stochastic process called a spatial point process. To fit a parametric spatial point process model to such data, likelihood-based methods have been widely studied. However, while maximum likelihood estimation is often too computationally intensive for Cox and cluster processes, pairwise likelihood methods such as composite likelihood and Palm likelihood usually suffer from a loss of information due to ignoring the correlation among pairs. For many types of correlated data other than spatial point processes, when likelihood-based approaches are not desirable, estimating functions have been widely used for model fitting. In this dissertation, we explore estimating function approaches for fitting spatial point process models. These approaches, which are based on asymptotically optimal estimating function theory, can be used to incorporate the correlation among data and yield more efficient estimators. We conducted a series of studies to demonstrate that these estimating function approaches are good alternatives for balancing the trade-off between computational complexity and estimation efficiency. First, we propose a new estimating procedure that improves the efficiency of the pairwise composite likelihood method in estimating clustering parameters. Our approach combines estimating functions derived from pairwise composite likelihood estimation with estimating functions that account for correlations among the pairwise contributions. Our method can be used to fit a variety of parametric spatial point process models and can yield more efficient estimators of the clustering parameters than pairwise composite likelihood estimation. We demonstrate its efficacy through a simulation study and an application to the longleaf pine data. Second, we further explore the quasi-likelihood approach for fitting the second-order intensity function of spatial point processes. However, the original second-order quasi-likelihood is barely feasible due to the intense computation and high memory requirement needed to solve a large linear system. Motivated by the existence of geometric regular patterns in stationary point processes, we find a lower-dimensional representation of the optimal weight function and propose a reduced second-order quasi-likelihood approach. Through a simulation study, we show that the proposed method not only demonstrates superior performance in fitting the clustering parameter but also relaxes the constraint on the tuning parameter, H. Third, we studied the quasi-likelihood-type estimating function that is optimal in a certain class of first-order estimating functions for estimating the regression parameter in spatial point process models. Then, by using a novel spectral representation, we construct an implementation that is computationally much more efficient and can be applied in a more general setup than the original quasi-likelihood method.

  17. Deep convolutional neural network for mammographic density segmentation

    NASA Astrophysics Data System (ADS)

    Wei, Jun; Li, Songfeng; Chan, Heang-Ping; Helvie, Mark A.; Roubidoux, Marilyn A.; Lu, Yao; Zhou, Chuan; Hadjiiski, Lubomir; Samala, Ravi K.

    2018-02-01

    Breast density is one of the most significant factors for cancer risk. In this study, we proposed a supervised deep learning approach for automated estimation of percentage density (PD) on digital mammography (DM). The deep convolutional neural network (DCNN) was trained to estimate a probability map of breast density (PMD). PD was calculated as the ratio of the dense area to the breast area based on the probability of each pixel belonging to dense region or fatty region at a decision threshold of 0.5. The DCNN estimate was compared to a feature-based statistical learning approach, in which gray level, texture and morphological features were extracted from each ROI and the least absolute shrinkage and selection operator (LASSO) was used to select and combine the useful features to generate the PMD. The reference PD of each image was provided by two experienced MQSA radiologists. With IRB approval, we retrospectively collected 347 DMs from patient files at our institution. The 10-fold cross-validation results showed a strong correlation r=0.96 between the DCNN estimation and interactive segmentation by radiologists while that of the feature-based statistical learning approach vs radiologists' segmentation had a correlation r=0.78. The difference between the segmentation by DCNN and by radiologists was significantly smaller than that between the feature-based learning approach and radiologists (p < 0.0001) by two-tailed paired t-test. This study demonstrated that the DCNN approach has the potential to replace radiologists' interactive thresholding in PD estimation on DMs.
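    Downstream of the network, the PD computation itself is a thresholded area ratio. A minimal sketch, assuming a pixelwise probability map of dense tissue and a boolean breast mask (both produced upstream):

    ```python
    import numpy as np

    def percent_density(prob_dense, breast_mask, threshold=0.5):
        """Percentage density from a probability map of dense tissue:
        PD = dense area / breast area, with pixels called dense at the
        0.5 decision threshold the study uses."""
        dense = (prob_dense >= threshold) & breast_mask
        return 100.0 * dense.sum() / breast_mask.sum()
    ```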

  18. Tree-centric mapping of forest carbon density from airborne laser scanning and hyperspectral data.

    PubMed

    Dalponte, Michele; Coomes, David A

    2016-10-01

Forests are a major component of the global carbon cycle, and accurate estimation of forest carbon stocks and fluxes is important in the context of anthropogenic global change. Airborne laser scanning (ALS) data sets are increasingly recognized as outstanding data sources for high-fidelity mapping of carbon stocks at regional scales. We develop a tree-centric approach to carbon mapping, based on identifying individual tree crowns (ITCs) and species from airborne remote sensing data, from which individual tree carbon stocks are calculated. We identify ITCs from the laser scanning point cloud using a region-growing algorithm and identify species from airborne hyperspectral data by machine learning. For each detected tree, we predict stem diameter from its height and crown-width estimate. From that point on, we use well-established approaches developed for field-based inventories: above-ground biomasses of trees are estimated using published allometries and summed within plots to estimate carbon density. We show this approach is highly reliable: tests in the Italian Alps demonstrated a close relationship between field- and ALS-based estimates of carbon stocks (r² = 0.98). Small trees are invisible from the air, and a correction factor is required to accommodate this effect. An advantage of the tree-centric approach over existing area-based methods is that it can produce maps at any scale and is fundamentally based on field-based inventory methods, making it intuitive and transparent. Airborne laser scanning, hyperspectral sensing and computational power are all advancing rapidly, making it increasingly feasible to use ITC approaches for effective mapping of forest carbon density, also within wider carbon mapping programs such as REDD+.
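    The per-tree pipeline (height and crown width to diameter, diameter through an allometry to biomass, biomass to carbon) can be sketched in a few lines. All coefficients below are placeholders for illustration, not the published allometries the paper uses:

    ```python
    def tree_carbon_kg(height_m, crown_width_m, a=0.1, b=2.5, carbon_frac=0.47):
        """Tree-centric carbon sketch: predict stem diameter from height
        and crown width, apply a power-law above-ground-biomass allometry,
        and convert biomass to carbon (hypothetical coefficients)."""
        dbh_cm = 0.8 * height_m + 0.6 * crown_width_m   # hypothetical diameter model
        agb_kg = a * dbh_cm ** b                        # generic power-law allometry
        return carbon_frac * agb_kg                     # biomass-to-carbon conversion
    ```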

  19. Bayesian analysis of physiologically based toxicokinetic and toxicodynamic models.

    PubMed

    Hack, C Eric

    2006-04-17

Physiologically based toxicokinetic (PBTK) and toxicodynamic (TD) models of bromate in animals and humans would improve our ability to accurately estimate the toxic doses in humans based on available animal studies. These mathematical models are often highly parameterized and must be calibrated in order for the model predictions of internal dose to adequately fit the experimentally measured doses. Highly parameterized models are difficult to calibrate, and it is difficult to obtain accurate estimates of uncertainty or variability in model parameters with commonly used frequentist calibration methods, such as maximum likelihood estimation (MLE) or least-squares approaches. The Bayesian approach known as Markov chain Monte Carlo (MCMC) analysis can be used to successfully calibrate these complex models. Prior knowledge about the biological system and associated model parameters is easily incorporated in this approach in the form of prior parameter distributions, and the distributions are refined or updated using experimental data to generate posterior distributions of parameter estimates. The goal of this paper is to give the non-mathematician a brief description of the Bayesian approach and Markov chain Monte Carlo analysis, how this technique is used in risk assessment, and the issues associated with this approach.
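    The prior-to-posterior updating described above is typically run with a random-walk Metropolis sampler. A minimal sketch, assuming a user-supplied `log_post` callable (log-prior plus log-likelihood of the PBTK model) and a 1-D array of starting parameter values; real calibrations tune step sizes and monitor convergence:

    ```python
    import numpy as np

    def metropolis(log_post, x0, n_iter=10000, step=0.1, seed=0):
        """Random-walk Metropolis sampler: propose a Gaussian step, accept
        with probability min(1, posterior ratio), and record the chain."""
        rng = np.random.default_rng(seed)
        x = np.asarray(x0, dtype=float)
        lp = log_post(x)
        chain = []
        for _ in range(n_iter):
            prop = x + step * rng.standard_normal(x.size)
            lp_prop = log_post(prop)
            if np.log(rng.random()) < lp_prop - lp:   # accept/reject
                x, lp = prop, lp_prop
            chain.append(x.copy())
        return np.array(chain)                         # posterior samples
    ```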

  20. Evaluation and Application of Satellite-Based Latent Heating Profile Estimation Methods

    NASA Technical Reports Server (NTRS)

    Olson, William S.; Grecu, Mircea; Yang, Song; Tao, Wei-Kuo

    2004-01-01

In recent years, methods for estimating atmospheric latent heating vertical structure from both passive and active microwave remote sensing have matured to the point where quantitative evaluation of these methods is the next logical step. Two approaches for heating algorithm evaluation are proposed: First, application of heating algorithms to synthetic data, based upon cloud-resolving model simulations, can be used to test the internal consistency of heating estimates in the absence of systematic errors in physical assumptions. Second, comparisons of satellite-retrieved vertical heating structures to independent ground-based estimates, such as rawinsonde-derived analyses of heating, provide an additional test. The two approaches are complementary, since systematic errors in heating indicated by the second approach may be confirmed by the first. A passive microwave and a combined passive/active microwave heating retrieval algorithm are evaluated using the described approaches. In general, the passive microwave algorithm heating profile estimates are subject to biases due to the limited vertical heating structure information contained in the passive microwave observations. These biases may be partly overcome by including more environment-specific a priori information into the algorithm's database of candidate solution profiles. The combined passive/active microwave algorithm utilizes the much higher-resolution vertical structure information provided by spaceborne radar data to produce less biased estimates; however, the global spatio-temporal sampling by spaceborne radar is limited. In the present study, the passive/active microwave algorithm is used to construct a more physically-consistent and environment-specific set of candidate solution profiles for the passive microwave algorithm and to help evaluate errors in the passive algorithm's heating estimates. Although satellite estimates of latent heating are based upon instantaneous, footprint-scale data, suppression of random errors requires averaging to at least half-degree resolution. Analysis of mesoscale and larger space-time scale phenomena based upon passive and passive/active microwave heating estimates from TRMM, SSM/I, and AMSR data will be presented at the conference.

  1. Estimating and comparing microbial diversity in the presence of sequencing errors

    PubMed Central

    Chiu, Chun-Huo

    2016-01-01

    Estimating and comparing microbial diversity are statistically challenging due to limited sampling and possible sequencing errors for low-frequency counts, producing spurious singletons. The inflated singleton count seriously affects statistical analysis and inferences about microbial diversity. Previous statistical approaches to tackle the sequencing errors generally require different parametric assumptions about the sampling model or about the functional form of frequency counts. Different parametric assumptions may lead to drastically different diversity estimates. We focus on nonparametric methods which are universally valid for all parametric assumptions and can be used to compare diversity across communities. We develop here a nonparametric estimator of the true singleton count to replace the spurious singleton count in all methods/approaches. Our estimator of the true singleton count is in terms of the frequency counts of doubletons, tripletons and quadrupletons, provided these three frequency counts are reliable. To quantify microbial alpha diversity for an individual community, we adopt the measure of Hill numbers (effective number of taxa) under a nonparametric framework. Hill numbers, parameterized by an order q that determines the measures’ emphasis on rare or common species, include taxa richness (q = 0), Shannon diversity (q = 1, the exponential of Shannon entropy), and Simpson diversity (q = 2, the inverse of Simpson index). A diversity profile which depicts the Hill number as a function of order q conveys all information contained in a taxa abundance distribution. Based on the estimated singleton count and the original non-singleton frequency counts, two statistical approaches (non-asymptotic and asymptotic) are developed to compare microbial diversity for multiple communities. (1) A non-asymptotic approach refers to the comparison of estimated diversities of standardized samples with a common finite sample size or sample completeness. This approach aims to compare diversity estimates for equally-large or equally-complete samples; it is based on the seamless rarefaction and extrapolation sampling curves of Hill numbers, specifically for q = 0, 1 and 2. (2) An asymptotic approach refers to the comparison of the estimated asymptotic diversity profiles. That is, this approach compares the estimated profiles for complete samples or samples whose size tends to be sufficiently large. It is based on statistical estimation of the true Hill number of any order q ≥ 0. In the two approaches, replacing the spurious singleton count by our estimated count, we can greatly remove the positive biases associated with diversity estimates due to spurious singletons and also make fair comparisons across microbial communities, as illustrated in our simulation results and in applying our method to analyze sequencing data from viral metagenomes. PMID:26855872
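    The Hill-number family at the three named orders can be computed directly from a vector of taxa counts. A minimal plug-in sketch; the paper's estimators additionally replace the spurious singleton count and extrapolate to complete samples, which is not attempted here:

    ```python
    import numpy as np

    def hill_number(counts, q):
        """Empirical (plug-in) Hill number of order q from taxa counts:
        q = 0 gives richness, q = 1 the exponential of Shannon entropy,
        q = 2 the inverse Simpson index."""
        p = np.asarray(counts, dtype=float)
        p = p[p > 0] / p.sum()
        if np.isclose(q, 1.0):
            return np.exp(-np.sum(p * np.log(p)))   # limit as q -> 1
        return np.sum(p ** q) ** (1.0 / (1.0 - q))

    # e.g. hill_number([50, 30, 20], 0) -> 3.0 (three taxa observed)
    ```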

  2. Approximate median regression for complex survey data with skewed response.

    PubMed

    Fraser, Raphael André; Lipsitz, Stuart R; Sinha, Debajyoti; Fitzmaurice, Garrett M; Pan, Yi

    2016-12-01

The ready availability of public-use data from various large national complex surveys has immense potential for the assessment of population characteristics using regression models. Complex surveys can be used to identify risk factors for important diseases such as cancer. Existing statistical methods based on estimating equations and/or utilizing resampling methods are often not valid with survey data due to complex survey design features, that is, stratification, multistage sampling, and weighting. In this article, we accommodate these design features in the analysis of highly skewed response variables arising from large complex surveys. Specifically, we propose a double-transform-both-sides (DTBS)-based estimating equations approach to estimate the median regression parameters of the highly skewed response; the DTBS approach applies the same Box-Cox type transformation twice to both the outcome and the regression function. The usual sandwich variance estimate can be used in our approach, whereas a resampling approach would be needed for a pseudo-likelihood based on minimizing absolute deviations (MAD). Furthermore, the approach is relatively robust to the true underlying distribution and has much smaller mean squared error than a MAD approach. The method is motivated by an analysis of laboratory data on urinary iodine (UI) concentration from the National Health and Nutrition Examination Survey. © 2016, The International Biometric Society.
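    The "same transformation applied twice to both sides" is concrete enough to sketch. A minimal illustration of the doubly transformed residual that the DTBS estimating equations drive to zero; it assumes the first transform keeps values in the domain of the second (e.g., positive data with a suitable lambda), and omits the survey weighting:

    ```python
    import numpy as np

    def box_cox(u, lam):
        """Box-Cox transform h(u; lambda)."""
        return np.log(u) if np.isclose(lam, 0.0) else (u**lam - 1.0) / lam

    def dtbs_residual(y, mu, lam):
        """Residual on the doubly transformed scale, h(h(y)) - h(h(mu)),
        where mu is the (median) regression function evaluated at the
        covariates -- a sketch of the DTBS construction, not the full
        design-weighted estimating equations."""
        return box_cox(box_cox(y, lam), lam) - box_cox(box_cox(mu, lam), lam)
    ```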

  3. Histogram equalization with Bayesian estimation for noise robust speech recognition.

    PubMed

    Suh, Youngjoo; Kim, Hoirin

    2018-02-01

The histogram equalization approach is an efficient feature normalization technique for noise-robust automatic speech recognition. However, it suffers from performance degradation when some fundamental conditions are not satisfied in the test environment. To remedy these limitations of the original histogram equalization methods, a class-based histogram equalization approach has been proposed. Although this approach showed substantial performance improvement under noise environments, it still suffers from performance degradation due to the overfitting problem when test data are insufficient. To address this issue, the proposed histogram equalization technique employs a Bayesian estimation method in estimating the test cumulative distribution function. It was reported in a previous study conducted on the Aurora-4 task that the proposed approach provided substantial performance gains in speech recognition systems based on acoustic modeling with the Gaussian mixture model-hidden Markov model. In this work, the proposed approach was examined in speech recognition systems with the deep neural network-hidden Markov model (DNN-HMM), the current mainstream speech recognition approach, where it also showed meaningful performance improvement over the conventional maximum likelihood estimation-based method. The fusion of the proposed features with the mel-frequency cepstral coefficients provided additional performance gains in DNN-HMM systems, which otherwise suffer from performance degradation in the clean test condition.
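    At its core, histogram equalization maps each test feature through its empirical CDF onto the training distribution's quantiles; the paper's contribution is a Bayesian-regularized estimate of that test CDF when data are scarce. A minimal sketch of the basic CDF-matching step:

    ```python
    import numpy as np

    def histogram_equalize(test_feat, ref_feat):
        """Map test feature values onto the reference (training)
        distribution by empirical CDF matching, one feature dimension
        at a time."""
        test_feat = np.asarray(test_feat, dtype=float)
        ref_sorted = np.sort(np.asarray(ref_feat, dtype=float))
        ranks = np.argsort(np.argsort(test_feat))         # 0 .. n-1
        cdf = (ranks + 0.5) / test_feat.size              # empirical test CDF
        idx = np.minimum((cdf * ref_sorted.size).astype(int), ref_sorted.size - 1)
        return ref_sorted[idx]                            # reference quantiles
    ```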

  5. Estimating roadside encroachment rates with the combined strengths of accident- and encroachment-based approaches

    DOT National Transportation Integrated Search

    2001-09-01

    In two recent studies by Miaou, he proposed a method to estimate vehicle roadside encroachment rates using accident-based models. He further illustrated the use of this method to estimate roadside encroachment rates for rural two-lane undivided roads...

  6. Modeling Of In-Vehicle Human Exposure to Ambient Fine Particulate Matter

    PubMed Central

    Liu, Xiaozhen; Frey, H. Christopher

    2012-01-01

    A method for estimating in-vehicle PM2.5 exposure as part of a scenario-based population simulation model is developed and assessed. In existing models, such as the Stochastic Human Exposure and Dose Simulation model for Particulate Matter (SHEDS-PM), in-vehicle exposure is estimated using linear regression based on area-wide ambient PM2.5 concentration. An alternative modeling approach is explored based on estimation of near-road PM2.5 concentration and an in-vehicle mass balance. Near-road PM2.5 concentration is estimated using a dispersion model and fixed-site monitor (FSM) data. In-vehicle concentration is estimated based on air exchange rate and filter efficiency. In-vehicle concentration varies with road type, traffic flow, wind speed, stability class, and ventilation. Average in-vehicle exposure is estimated to contribute 10 to 20 percent of average daily exposure. The contribution of in-vehicle exposure to total daily exposure can be higher for some individuals. Recommendations are made for updating exposure models and implementing the alternative approach. PMID:23101000
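
    At steady state, an in-vehicle mass balance of this kind reduces to a ratio of intake to loss rates. A hedged sketch with hypothetical parameter values (the deposition rate and the two ventilation scenarios below are illustrative, not from the study):

```python
def in_vehicle_pm25(c_nearroad, aer_per_hr, filter_eff, k_dep_per_hr=0.4):
    """Steady-state cabin concentration: filtered intake of near-road air,
    balanced against losses from air exchange and particle deposition."""
    intake = aer_per_hr * (1.0 - filter_eff) * c_nearroad
    return intake / (aer_per_hr + k_dep_per_hr)

# Hypothetical scenarios (ug/m3, 1/hour): windows open vs. closed with a filter.
print(in_vehicle_pm25(35.0, aer_per_hr=40.0, filter_eff=0.0))   # ~34.7
print(in_vehicle_pm25(35.0, aer_per_hr=5.0, filter_eff=0.5))    # ~16.2
```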

  7. Angular velocity estimation based on star vector with improved current statistical model Kalman filter.

    PubMed

    Zhang, Hao; Niu, Yanxiong; Lu, Jiazhen; Zhang, He

    2016-11-20

    Angular velocity information is a requisite for a spacecraft's guidance, navigation, and control system. In this paper, an approach for angular velocity estimation based solely on star vector measurements with an improved current statistical model Kalman filter is proposed. High-precision angular velocity estimation can be achieved under dynamic conditions. The amount of calculation is also reduced compared to a standard Kalman filter. Different trajectories are simulated to test this approach, and experiments with real starry-sky observations are implemented for further confirmation. The estimation accuracy is proved to be better than 10⁻⁴ rad/s under various conditions. Both the simulation and the experiment demonstrate that the described approach is effective and shows excellent performance under both static and dynamic conditions.
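
    The star-vector part of the problem can be illustrated without the filter: for stars fixed in the inertial frame, the body-frame star vectors obey v̇ = v × ω, so stacking this relation for several stars gives a linear least-squares problem for ω. A sketch of that geometric core (the improved current statistical model Kalman filter the paper wraps around such measurements is omitted here):

```python
import numpy as np
from scipy.spatial.transform import Rotation

def skew(v):
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def omega_from_star_vectors(v_prev, v_next, dt):
    """Least-squares body rate from star unit vectors at two epochs,
    using d(v)/dt = v x omega for inertially fixed directions."""
    A = np.vstack([skew(0.5 * (p + n)) for p, n in zip(v_prev, v_next)])
    b = np.hstack([(n - p) / dt for p, n in zip(v_prev, v_next)])
    return np.linalg.lstsq(A, b, rcond=None)[0]

rng = np.random.default_rng(2)
stars = rng.normal(size=(5, 3))
stars /= np.linalg.norm(stars, axis=1, keepdims=True)
omega_true = np.array([1e-3, -2e-3, 5e-4])              # rad/s
dt = 0.1
R = Rotation.from_rotvec(-omega_true * dt).as_matrix()  # body-frame propagation
print(omega_from_star_vectors(stars, stars @ R.T, dt))  # ~ omega_true
```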

  8. Statistical estimation via convex optimization for trending and performance monitoring

    NASA Astrophysics Data System (ADS)

    Samar, Sikandar

    This thesis presents an optimization-based statistical estimation approach to find unknown trends in noisy data. A Bayesian framework is used to explicitly take into account prior information about the trends via trend models and constraints. The main focus is on convex formulation of the Bayesian estimation problem, which allows efficient computation of (globally) optimal estimates. There are two main parts of this thesis. The first part formulates trend estimation in systems described by known detailed models as a convex optimization problem. Statistically optimal estimates are then obtained by maximizing a concave log-likelihood function subject to convex constraints. We consider the problem of increasing problem dimension as more measurements become available, and introduce a moving horizon framework to enable recursive estimation of the unknown trend by solving a fixed-size convex optimization problem at each horizon. We also present a distributed estimation framework, based on the dual decomposition method, for a system formed by a network of complex sensors with local (convex) estimation. Two specific applications of the convex optimization-based Bayesian estimation approach are described in the second part of the thesis. Batch estimation for parametric diagnostics in a flight control simulation of a space launch vehicle is shown to detect incipient fault trends despite the natural masking properties of feedback in the guidance and control loops. The moving horizon approach is used to estimate time-varying fault parameters in a detailed nonlinear simulation model of an unmanned aerial vehicle. Excellent performance is demonstrated in the presence of winds and turbulence.
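
    As a toy instance of this formulation, a Gaussian noise model yields a least-squares log-likelihood term, a piecewise-linear trend prior yields an l1 penalty on second differences, and prior knowledge enters as convex constraints. A minimal sketch using cvxpy (the signal, penalty weight, and nonnegativity constraint are hypothetical choices for illustration):

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 200)
truth = np.minimum(t, 1.0 - t)                    # piecewise-linear trend
y = truth + rng.normal(0.0, 0.05, t.size)

x = cp.Variable(t.size)
# Gaussian noise -> least-squares likelihood term; piecewise-linear trend
# prior -> l1 penalty on second differences; nonnegativity as a convex constraint.
objective = cp.Minimize(cp.sum_squares(y - x) + 10.0 * cp.norm1(cp.diff(x, k=2)))
problem = cp.Problem(objective, [x >= 0])
problem.solve()
trend = x.value                                   # globally optimal estimate
print(float(np.abs(trend - truth).max()))
```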

  9. Safe Maneuvering Envelope Estimation Based on a Physical Approach

    NASA Technical Reports Server (NTRS)

    Lombaerts, Thomas J. J.; Schuet, Stefan R.; Wheeler, Kevin R.; Acosta, Diana; Kaneshige, John T.

    2013-01-01

    This paper discusses a computationally efficient algorithm for estimating the safe maneuvering envelope of damaged aircraft. The algorithm performs a robust reachability analysis through an optimal control formulation while making use of time-scale separation and taking into account uncertainties in the aerodynamic derivatives. This approach differs from others in that it is physically inspired. This more transparent approach allows the data to be interpreted at each step, and it is assumed that these physical models, based upon flight dynamics theory, will therefore facilitate certification for future real-life applications.

  10. Time series sightability modeling of animal populations.

    PubMed

    ArchMiller, Althea A; Dorazio, Robert M; St Clair, Katherine; Fieberg, John R

    2018-01-01

    Logistic regression models-or "sightability models"-fit to detection/non-detection data from marked individuals are often used to adjust for visibility bias in later detection-only surveys, with population abundance estimated using a modified Horvitz-Thompson (mHT) estimator. More recently, a model-based alternative for analyzing combined detection/non-detection and detection-only data was developed. This approach seemed promising, since it resulted in similar estimates as the mHT when applied to data from moose (Alces alces) surveys in Minnesota. More importantly, it provided a framework for developing flexible models for analyzing multiyear detection-only survey data in combination with detection/non-detection data. During initial attempts to extend the model-based approach to multiple years of detection-only data, we found that estimates of detection probabilities and population abundance were sensitive to the amount of detection-only data included in the combined (detection/non-detection and detection-only) analysis. Subsequently, we developed a robust hierarchical modeling approach where sightability model parameters are informed only by the detection/non-detection data, and we used this approach to fit a fixed-effects model (FE model) with year-specific parameters and a temporally-smoothed model (TS model) that shares information across years via random effects and a temporal spline. The abundance estimates from the TS model were more precise, with decreased interannual variability relative to the FE model and mHT abundance estimates, illustrating the potential benefits from model-based approaches that allow information to be shared across years.
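
    For reference, the modified Horvitz-Thompson estimator that the model-based approach is compared against inflates each observed group by the inverse of its detection and sampling probabilities. A sketch with hypothetical sightability-model coefficients:

```python
import numpy as np

def mht_abundance(group_sizes, detect_prob, sample_prob):
    """Modified Horvitz-Thompson estimate: inflate each observed group by the
    inverse of its detection probability and the plot-sampling probability."""
    return np.sum(np.asarray(group_sizes) / (detect_prob * sample_prob))

# Hypothetical survey: observed moose groups, sightability from a logistic model
# in visual-obstruction cover (made-up coefficients), plots sampled w.p. 0.3.
groups = np.array([2, 1, 3, 1, 4])
cover = np.array([0.2, 0.7, 0.4, 0.9, 0.1])
detect = 1.0 / (1.0 + np.exp(-(2.0 - 3.0 * cover)))
print(mht_abundance(groups, detect, 0.3))
```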

  11. Time series sightability modeling of animal populations

    USGS Publications Warehouse

    ArchMiller, Althea A.; Dorazio, Robert; St. Clair, Katherine; Fieberg, John R.

    2018-01-01

    Logistic regression models—or “sightability models”—fit to detection/non-detection data from marked individuals are often used to adjust for visibility bias in later detection-only surveys, with population abundance estimated using a modified Horvitz-Thompson (mHT) estimator. More recently, a model-based alternative for analyzing combined detection/non-detection and detection-only data was developed. This approach seemed promising, since it resulted in similar estimates as the mHT when applied to data from moose (Alces alces) surveys in Minnesota. More importantly, it provided a framework for developing flexible models for analyzing multiyear detection-only survey data in combination with detection/non-detection data. During initial attempts to extend the model-based approach to multiple years of detection-only data, we found that estimates of detection probabilities and population abundance were sensitive to the amount of detection-only data included in the combined (detection/non-detection and detection-only) analysis. Subsequently, we developed a robust hierarchical modeling approach where sightability model parameters are informed only by the detection/non-detection data, and we used this approach to fit a fixed-effects model (FE model) with year-specific parameters and a temporally-smoothed model (TS model) that shares information across years via random effects and a temporal spline. The abundance estimates from the TS model were more precise, with decreased interannual variability relative to the FE model and mHT abundance estimates, illustrating the potential benefits from model-based approaches that allow information to be shared across years.

  12. Performance enhancement of low-cost, high-accuracy, state estimation for vehicle collision prevention system using ANFIS

    NASA Astrophysics Data System (ADS)

    Saadeddin, Kamal; Abdel-Hafez, Mamoun F.; Jaradat, Mohammad A.; Jarrah, Mohammad Amin

    2013-12-01

    In this paper, a low-cost navigation system that fuses the measurements of the inertial navigation system (INS) and the global positioning system (GPS) receiver is developed. First, the system's dynamics are obtained based on a vehicle's kinematic model. Second, the INS and GPS measurements are fused using an extended Kalman filter (EKF) approach. Subsequently, an artificial-intelligence-based approach for the fusion of INS/GPS measurements is developed using an Input-Delayed Adaptive Neuro-Fuzzy Inference System (IDANFIS). Experimental tests are conducted to demonstrate the performance of the two sensor fusion approaches. It is found that the use of the proposed IDANFIS approach achieves a reduction in the integration development time and an improvement in the estimation accuracy of the vehicle's position and velocity compared to the EKF-based approach.
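
    The EKF baseline can be illustrated with a simplified, one-dimensional linear stand-in: INS acceleration drives the prediction and a GPS position fix drives the update. The process and measurement noise values below are placeholders, not taken from the paper:

```python
import numpy as np

def kf_gps_ins_step(x, P, accel_ins, z_gps, dt, q=0.5, r=9.0):
    """One predict/update cycle of a simplified 1-D linear Kalman filter:
    INS acceleration drives the prediction; a GPS position fix updates it."""
    F = np.array([[1.0, dt], [0.0, 1.0]])
    B = np.array([0.5 * dt ** 2, dt])
    H = np.array([[1.0, 0.0]])
    Q = q * np.array([[dt ** 4 / 4, dt ** 3 / 2],
                      [dt ** 3 / 2, dt ** 2]])
    x = F @ x + B * accel_ins                 # predict with the INS input
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + r                       # innovation covariance (1x1)
    K = P @ H.T / S                           # Kalman gain (2x1)
    x = x + (K * (z_gps - H @ x)).ravel()     # update with the GPS fix
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = np.zeros(2), np.eye(2)
x, P = kf_gps_ins_step(x, P, accel_ins=0.3, z_gps=0.1, dt=0.1)
print(x)
```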

  13. Estimator banks: a new tool for direction-of-arrival estimation

    NASA Astrophysics Data System (ADS)

    Gershman, Alex B.; Boehme, Johann F.

    1997-10-01

    A new powerful tool for improving the threshold performance of direction-of-arrival (DOA) estimation is considered. The essence of our approach is to reduce the number of outliers in the threshold domain using a so-called estimator bank containing multiple 'parallel' underlying DOA estimators, which are based on pseudorandom resampling of the MUSIC spatial spectrum for a given data batch or sample covariance matrix. To improve the threshold performance relative to conventional MUSIC, evolutionary principles are used; i.e., only 'successful' underlying estimators (having no failure in the preliminarily estimated source localization sectors) are exploited in the final estimate. An efficient beamspace root implementation of the estimator bank approach is developed, combined with the array interpolation technique, which enables the application to arbitrary arrays. A higher-order extension of our approach is also presented, where the cumulant-based MUSIC estimator is exploited as a basic technique for spatial spectrum resampling. Simulations and experimental data processing show that our algorithm performs well below the MUSIC threshold; namely, it has threshold performance similar to that of the stochastic ML method. At the same time, the computational cost of our algorithm is much lower than that of stochastic ML because no multidimensional optimization is involved.
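
    The quantity being resampled is the MUSIC spatial spectrum, which projects array steering vectors onto the noise subspace of the sample covariance matrix. A sketch of that baseline spectrum for a uniform linear array (the estimator bank's pseudorandom resampling and evolutionary selection are not shown; the scenario below is synthetic):

```python
import numpy as np
from scipy.signal import find_peaks

def music_spectrum(R, n_sources, angles_deg, d_over_lambda=0.5):
    """MUSIC pseudospectrum for a uniform linear array: project steering
    vectors onto the noise subspace of the sample covariance matrix R."""
    m = R.shape[0]
    _, vecs = np.linalg.eigh(R)                       # ascending eigenvalues
    noise_sub = vecs[:, : m - n_sources]
    k = np.arange(m)[:, None]
    steer = np.exp(2j * np.pi * d_over_lambda * k * np.sin(np.deg2rad(angles_deg)))
    return 1.0 / np.sum(np.abs(noise_sub.conj().T @ steer) ** 2, axis=0)

rng = np.random.default_rng(4)
m, snaps = 8, 200
doas = np.array([-10.0, 20.0])
A = np.exp(2j * np.pi * 0.5 * np.arange(m)[:, None] * np.sin(np.deg2rad(doas)))
S = rng.normal(size=(2, snaps)) + 1j * rng.normal(size=(2, snaps))
X = A @ S + 0.1 * (rng.normal(size=(m, snaps)) + 1j * rng.normal(size=(m, snaps)))
R = X @ X.conj().T / snaps
grid = np.linspace(-90.0, 90.0, 721)
spec = music_spectrum(R, n_sources=2, angles_deg=grid)
peaks, _ = find_peaks(spec)
print(grid[peaks[np.argsort(spec[peaks])[-2:]]])      # peaks near -10 and 20 deg
```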

  14. Intercomparisons of Prognostic, Diagnostic, and Inversion Modeling Approaches for Estimation of Net Ecosystem Exchange over the Pacific Northwest Region

    NASA Astrophysics Data System (ADS)

    Turner, D. P.; Jacobson, A. R.; Nemani, R. R.

    2013-12-01

    The recent development of large spatially-explicit datasets for multiple variables relevant to monitoring terrestrial carbon flux offers the opportunity to estimate the terrestrial land flux using several alternative, potentially complementary, approaches. Here we developed and compared regional estimates of net ecosystem exchange (NEE) over the Pacific Northwest region of the U.S. using three approaches. In the prognostic modeling approach, the process-based Biome-BGC model was driven by distributed meteorological station data and was informed by Landsat-based coverages of forest stand age and disturbance regime. In the diagnostic modeling approach, the quasi-mechanistic CFLUX model estimated net ecosystem production (NEP) by upscaling eddy covariance flux tower observations. The model was driven by distributed climate data and MODIS FPAR (the fraction of incident PAR that is absorbed by the vegetation canopy). It was informed by coarse resolution (1 km) data about forest stand age. In both the prognostic and diagnostic modeling approaches, emissions estimates for biomass burning, harvested products, and river/stream evasion were added to model-based NEP to get NEE. The inversion model (CarbonTracker) relied on observations of atmospheric CO2 concentration to optimize prior surface carbon flux estimates. The Pacific Northwest is heterogeneous with respect to land cover and forest management, and repeated surveys of forest inventory plots support the presence of a strong regional carbon sink. The diagnostic model suggested a stronger carbon sink than the prognostic model, and a much larger sink than the inversion model. The introduction of Landsat data on disturbance history served to reduce uncertainty with respect to regional NEE in the diagnostic and prognostic modeling approaches. The FPAR data was particularly helpful in capturing the seasonality of the carbon flux using the diagnostic modeling approach. The inversion approach took advantage of a global network of CO2 observation stations, but had difficulty resolving regional fluxes such as that in the PNW, given the still-sparse nature of the CO2 measurement network.

  15. Combining facial dynamics with appearance for age estimation.

    PubMed

    Dibeklioglu, Hamdi; Alnajar, Fares; Ali Salah, Albert; Gevers, Theo

    2015-06-01

    Estimating the age of a human from the captured images of his/her face is a challenging problem. In general, the existing approaches to this problem use appearance features only. In this paper, we show that in addition to appearance information, facial dynamics can be leveraged in age estimation. We propose a method to extract and use dynamic features for age estimation, using a person's smile. Our approach is tested on a large, gender-balanced database with 400 subjects, with an age range between 8 and 76. In addition, we introduce a new database on posed disgust expressions with 324 subjects in the same age range, and evaluate the reliability of the proposed approach when used with another expression. State-of-the-art appearance-based age estimation methods from the literature are implemented as baseline. We demonstrate that for each of these methods, the addition of the proposed dynamic features results in statistically significant improvement. We further propose a novel hierarchical age estimation architecture based on adaptive age grouping. We test our approach extensively, including an exploration of spontaneous versus posed smile dynamics, and gender-specific age estimation. We show that using spontaneity information reduces the mean absolute error by up to 21%, advancing the state of the art for facial age estimation.

  16. Towards an Optimal Noise Versus Resolution Trade-Off in Wind Scatterometry

    NASA Technical Reports Server (NTRS)

    Williams, Brent A.

    2011-01-01

    A scatterometer is a radar that measures the normalized radar cross section sigma(sup 0) of the Earth's surface. Over the ocean this signal is related to the wind via the geophysical model function (GMF). The objective of wind scatterometry is to estimate the wind vector field from sigma(sup 0) measurements; however, there are many subtleties that complicate this problem, making it difficult to obtain a unique wind field estimate. Conventionally, wind estimation is split into two stages: a wind retrieval stage in which several ambiguous solutions are obtained, and an ambiguity removal stage in which ambiguities are chosen to produce an appropriate wind vector field estimate. The most common approach to wind field estimation is to grid the scatterometer swath into wind vector cells and estimate wind vector ambiguities independently for each cell. Then, fieldwise structure is imposed on the solution by an ambiguity selection routine. Although this approach is simple and practical, it neglects fieldwise structure in the retrieval step and does not account for the spatial correlation imposed by the sampling. This makes it difficult to develop a theoretically appropriate noise versus resolution trade-off using pointwise retrieval. Fieldwise structure may be imposed in the retrieval step using a model-based approach. However, this approach is generally only practical if a low-order wind field model is applied, which may discard more information than is desired. Furthermore, model-based approaches do not account for the structure imposed by the sampling. A more general fieldwise approach is to estimate all the wind vectors for all the WVCs simultaneously from all the measurements. This approach can account for the structure of the wind field as well as the structure imposed by the sampling in the wind retrieval step. Williams and Long in 2010 developed a fieldwise retrieval method based on maximum a posteriori (MAP) estimation. This MAP approach can be extended to perform a noise versus resolution trade-off and to deal with ambiguity selection. This paper extends the fieldwise MAP estimation approach and investigates both the noise versus resolution trade-off and ambiguity removal in the fieldwise wind retrieval step. The method is then applied to the SeaWinds scatterometer and the results are analyzed.

  17. Cancer risk assessment of polycyclic aromatic hydrocarbon contaminated soils determined using bioassay-derived levels of benzo[a]pyrene equivalents.

    PubMed

    Lemieux, Christine L; Long, Alexandra S; Lambert, Iain B; Lundstedt, Staffan; Tysklind, Mats; White, Paul A

    2015-02-03

    Here we evaluate the excess lifetime cancer risk (ELCR) posed by 10 PAH-contaminated soils using (i) the currently advocated, targeted chemical-specific approach that assumes dose additivity for carcinogenic PAHs and (ii) a bioassay-based approach that employs the in vitro mutagenic activity of the soil fractions to determine levels of benzo[a]pyrene equivalents and, by extension, ELCR. Mutagenic activity results are presented in our companion paper [1]. The results show that ELCR values for the PAH-containing fractions, determined using the chemical-specific approach, are generally (i.e., 8 out of 10) greater than those calculated using the bioassay-based approach; most are less than 5-fold greater. Only two chemical-specific ELCR estimates are less than their corresponding bioassay-derived values; differences are less than 10%. The bioassay-based approach, which permits estimation of ELCR without a priori knowledge of mixture composition, proved to be a useful tool to evaluate the chemical-specific approach. The results suggest that ELCR estimates for complex PAH mixtures determined using a targeted, chemical-specific approach are reasonable, albeit conservative. Calculated risk estimates still depend on contentious PEFs and cancer slope factors. Follow-up in vivo mutagenicity assessments will be required to validate the results and their relevance for human health risk assessment of PAH-contaminated soils.
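
    The targeted chemical-specific approach amounts to a weighted sum: concentrations are converted to benzo[a]pyrene equivalents with potency equivalency factors (PEFs), and the resulting dose estimate is scaled by a cancer slope factor. A sketch with illustrative numbers (the concentrations and intake factor are hypothetical; the PEFs and slope factor are commonly cited values, not taken from this study):

```python
# Chemical-specific sketch: soil concentrations weighted by potency equivalency
# factors (PEFs) give benzo[a]pyrene equivalents; a dose estimate scaled by a
# cancer slope factor gives the excess lifetime cancer risk (ELCR).
pef = {"benzo[a]pyrene": 1.0, "benz[a]anthracene": 0.1, "chrysene": 0.01}
conc_mg_kg = {"benzo[a]pyrene": 2.0, "benz[a]anthracene": 5.0, "chrysene": 8.0}

bap_eq = sum(conc_mg_kg[c] * pef[c] for c in pef)   # mg BaP-eq per kg soil
daily_dose = bap_eq * 1e-4                          # hypothetical intake, mg/kg-day
slope_factor = 7.3                                  # illustrative CSF, (mg/kg-day)^-1
print("ELCR:", daily_dose * slope_factor)
```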

  18. Multi-Mode Estimation for Small Fixed Wing Unmanned Aerial Vehicle Localization Based on a Linear Matrix Inequality Approach

    PubMed Central

    Elzoghby, Mostafa; Li, Fu; Arafa, Ibrahim. I.; Arif, Usman

    2017-01-01

    Information fusion from multiple sensors ensures the accuracy and robustness of a navigation system, especially in the absence of global positioning system (GPS) data, which are degraded in many cases. A way to deal with multi-mode estimation for a small fixed wing unmanned aerial vehicle (UAV) localization framework is proposed, which depends on utilizing a Luenberger observer-based linear matrix inequality (LMI) approach. The proposed estimation technique relies on the interaction between multiple measurement modes and a continuous observer. The state estimation is performed in a switching environment between multiple active sensors to exploit the available information as much as possible, especially in GPS-denied environments. Luenberger observer-based projection is implemented as a continuous observer to optimize the estimation performance. The observer gain may be chosen by solving a Lyapunov equation by means of an LMI algorithm. Convergence is achieved by utilizing the LMI, based on Lyapunov stability, which keeps the dynamic estimation error bounded through the selection of the observer gain matrix L. Simulation results are presented for a small fixed wing UAV localization problem; the results obtained using the proposed approach are compared with a single-mode Extended Kalman Filter (EKF) and demonstrate the viability of the proposed strategy. PMID:28420214
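
    The observer-gain step can be sketched directly: find P > 0 and Y with A'P + PA - C'Y' - YC < 0, then recover L = inv(P) Y, which makes A - LC stable. A minimal sketch for a hypothetical two-state system using cvxpy (not the paper's system matrices):

```python
import cvxpy as cp
import numpy as np

# Hypothetical two-state system: choose L so that A - L C is stable by solving
# the Lyapunov LMI  A'P + PA - C'Y' - YC < 0  with  P > 0,  then  L = inv(P) Y.
A = np.array([[0.0, 1.0], [2.0, -0.5]])      # open-loop unstable
C = np.array([[1.0, 0.0]])
n = A.shape[0]

P = cp.Variable((n, n), symmetric=True)
Y = cp.Variable((n, C.shape[0]))
eps = 1e-3
lmi = A.T @ P + P @ A - C.T @ Y.T - Y @ C
prob = cp.Problem(cp.Minimize(0),
                  [P >> eps * np.eye(n), lmi << -eps * np.eye(n)])
prob.solve()
L = np.linalg.solve(P.value, Y.value)
print(np.linalg.eigvals(A - L @ C))          # eigenvalues in the left half-plane
```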

  19. Optimal Tuner Selection for Kalman-Filter-Based Aircraft Engine Performance Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Garg, Sanjay

    2011-01-01

    An emerging approach in the field of aircraft engine controls and system health management is the inclusion of real-time, onboard models for the in-flight estimation of engine performance variations. This technology, typically based on Kalman-filter concepts, enables the estimation of unmeasured engine performance parameters that can be directly utilized by controls, prognostics, and health-management applications. A challenge that complicates this practice is the fact that an aircraft engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. Through Kalman-filter-based estimation techniques, the level of engine performance degradation can be estimated, given that there are at least as many sensors as health parameters to be estimated. However, in an aircraft engine, the number of sensors available is typically less than the number of health parameters, presenting an under-determined estimation problem. A common approach to address this shortcoming is to estimate a subset of the health parameters, referred to as model tuning parameters. The problem/objective is to optimally select the model tuning parameters to minimize Kalman-filter-based estimation error. A tuner selection technique has been developed that specifically addresses the under-determined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine that seeks to minimize the theoretical mean-squared estimation error of the Kalman filter. This approach can significantly reduce the error in onboard aircraft engine parameter estimation applications such as model-based diagnostics, controls, and life usage calculations. The advantage of the innovation is the significant reduction in estimation errors that it can provide relative to the conventional approach of selecting a subset of health parameters to serve as the model tuning parameter vector. Because this technique needs only to be performed during the system design process, it places no additional computation burden on the onboard Kalman filter implementation. The technique has been developed for aircraft engine onboard estimation applications, as this application typically presents an under-determined estimation problem. However, this generic technique could be applied to other industries using gas turbine engine technology.

  20. A revised load estimation procedure for the Susquehanna, Potomac, Patuxent, and Choptank rivers

    USGS Publications Warehouse

    Yochum, Steven E.

    2000-01-01

    The U.S. Geological Survey's Chesapeake Bay River Input Program has updated the nutrient and suspended-sediment load data base for the Susquehanna, Potomac, Patuxent, and Choptank Rivers using a multiple-window, center-estimate regression methodology. The revised method optimizes the seven-parameter regression approach that has been used historically by the program. The revised method estimates load using the fifth or center year of a sliding 9-year window. Each year a new model is run for each site and constituent, the most recent year is added, and the previous 4 years of estimates are updated. The fifth year in the 9-year window is considered the best estimate and is kept in the data base. The last year of estimation shows the most change from the previous year's estimate and this change approaches a minimum at the fifth year. Differences between loads computed using this revised methodology and the loads populating the historical data base have been noted but the load estimates do not typically change drastically. The data base resulting from the application of this revised methodology is populated by annual and monthly load estimates that are known with greater certainty than in the previous load data base.
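
    The seven-parameter model referred to here regresses log load on log discharge, its square, seasonal sine and cosine terms, and linear and quadratic time terms, with centered variables. A hedged sketch of the sliding 9-year window with center-year retention (synthetic data; the retransformation bias correction used in practice is omitted):

```python
import numpy as np

def seven_param_design(ln_q, t):
    """Design matrix: ln L ~ lnQ, lnQ^2, sin(2*pi*T), cos(2*pi*T), T, T^2,
    with the discharge and time variables centered."""
    q = ln_q - ln_q.mean()
    tc = t - t.mean()
    return np.column_stack([np.ones_like(q), q, q ** 2,
                            np.sin(2 * np.pi * tc), np.cos(2 * np.pi * tc),
                            tc, tc ** 2])

def center_window_loads(ln_load, ln_q, t, width=9):
    """Refit on each sliding window; keep fitted values for the center (5th)
    year only, mimicking the multiple-window, center-estimate procedure."""
    kept = np.full(ln_load.shape, np.nan)
    for start in range(int(t.min()), int(t.max()) - width + 2):
        w = (t >= start) & (t < start + width)
        X = seven_param_design(ln_q[w], t[w])
        beta, *_ = np.linalg.lstsq(X, ln_load[w], rcond=None)
        center = (t[w] >= start + width // 2) & (t[w] < start + width // 2 + 1)
        kept[np.flatnonzero(w)[center]] = (X @ beta)[center]
    return kept

rng = np.random.default_rng(5)
t = 1985 + np.arange(0, 15, 1 / 12)                 # 15 years, monthly
ln_q = np.sin(2 * np.pi * t) + rng.normal(0, 0.5, t.size)
ln_load = 0.5 + 1.2 * ln_q + rng.normal(0, 0.2, t.size)
print(np.nanmean(center_window_loads(ln_load, ln_q, t)))
```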

  1. Comparison of bootstrap approaches for estimation of uncertainties of DTI parameters.

    PubMed

    Chung, SungWon; Lu, Ying; Henry, Roland G

    2006-11-01

    Bootstrap is an empirical non-parametric statistical technique based on data resampling that has been used to quantify uncertainties of diffusion tensor MRI (DTI) parameters, useful in tractography and in assessing DTI methods. The current bootstrap method (repetition bootstrap) used for DTI analysis performs resampling within the data sharing common diffusion gradients, requiring multiple acquisitions for each diffusion gradient. Recently, the wild bootstrap was proposed, which can be applied without multiple acquisitions. In this paper, two new approaches are introduced, called residual bootstrap and repetition bootknife. We show that repetition bootknife corrects for the large bias present in the repetition bootstrap method and, therefore, better estimates the standard errors. Like wild bootstrap, residual bootstrap is applicable to a single acquisition scheme, and both are based on regression residuals (called model-based resampling). Residual bootstrap is based on the assumption that the non-constant variance of measured diffusion-attenuated signals can be modeled, which is actually the assumption behind the widely used weighted least squares solution of the diffusion tensor. The performances of these bootstrap approaches were compared in terms of bias, variance, and overall error of the bootstrap-estimated standard error by Monte Carlo simulation. We demonstrate that residual bootstrap has smaller biases and overall errors, which enables estimation of uncertainties with higher accuracy. Understanding the properties of these bootstrap procedures will help us to choose the optimal approach for estimating uncertainties that can benefit hypothesis testing based on DTI parameters, probabilistic fiber tracking, and optimizing DTI methods.
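
    The residual bootstrap itself is compact: fit the regression, resample the residuals with replacement, add them back to the fitted values, and refit. A sketch for an ordinary linear model (the DTI application would replace this with the log-linear tensor fit and account for non-constant variance, as the paper discusses):

```python
import numpy as np

def residual_bootstrap_se(X, y, n_boot=2000, seed=0):
    """Residual bootstrap for linear regression: resample residuals with
    replacement, add them to the fitted values, refit, and report the spread."""
    rng = np.random.default_rng(seed)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    fitted, resid = X @ beta, y - X @ beta
    boot = np.empty((n_boot, X.shape[1]))
    for b in range(n_boot):
        y_star = fitted + rng.choice(resid, size=resid.size, replace=True)
        boot[b], *_ = np.linalg.lstsq(X, y_star, rcond=None)
    return boot.std(axis=0, ddof=1)

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(50), rng.normal(size=50)])
y = X @ np.array([1.0, 2.0]) + rng.normal(0, 0.3, 50)
print(residual_bootstrap_se(X, y))   # bootstrap standard errors of the coefficients
```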

  2. Hybrid Visible Light and Ultrasound-Based Sensor for Distance Estimation

    PubMed Central

    Rabadan, Jose; Guerra, Victor; Rodríguez, Rafael; Rufo, Julio; Luna-Rivera, Martin; Perez-Jimenez, Rafael

    2017-01-01

    Distance estimation plays an important role in location-based services, which has become very popular in recent years. In this paper, a new short range cricket sensor-based approach is proposed for indoor location applications. This solution uses Time Difference of Arrival (TDoA) between an optical and an ultrasound signal which are transmitted simultaneously, to estimate the distance from the base station to the mobile receiver. The measurement of the TDoA at the mobile receiver endpoint is proportional to the distance. The use of optical and ultrasound signals instead of the conventional radio wave signal makes the proposed approach suitable for environments with high levels of electromagnetic interference or where the propagation of radio frequencies is entirely restricted. Furthermore, unlike classical cricket systems, a double-way measurement procedure is introduced, allowing both the base station and mobile node to perform distance estimation simultaneously. PMID:28208584
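
    Because light arrives essentially instantaneously relative to ultrasound, the TDoA is, to first order, the ultrasound travel time, and distance follows directly. A one-function sketch:

```python
def tdoa_distance_m(tdoa_s, v_sound=343.0, v_light=299_792_458.0):
    """Distance from the arrival-time gap between simultaneously emitted light
    and ultrasound pulses: d / v_sound - d / v_light = tdoa."""
    return tdoa_s / (1.0 / v_sound - 1.0 / v_light)

print(tdoa_distance_m(0.01))   # ~3.43 m for a 10 ms gap
```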

  3. Multimethod, multistate Bayesian hierarchical modeling approach for use in regional monitoring of wolves.

    PubMed

    Jiménez, José; García, Emilio J; Llaneza, Luis; Palacios, Vicente; González, Luis Mariano; García-Domínguez, Francisco; Múñoz-Igualada, Jaime; López-Bao, José Vicente

    2016-08-01

    In many cases, the first step in large-carnivore management is to obtain objective, reliable, and cost-effective estimates of population parameters through procedures that are reproducible over time. However, monitoring predators over large areas is difficult, and the data have a high level of uncertainty. We devised a practical multimethod and multistate modeling approach based on Bayesian hierarchical site-occupancy models that combined multiple survey methods to estimate different population states for use in monitoring large predators at a regional scale. We used wolves (Canis lupus) as our model species and generated reliable estimates of the number of sites with wolf reproduction (presence of pups). We used 2 wolf data sets from Spain (Western Galicia in 2013 and Asturias in 2004) to test the approach. Based on howling surveys, the naïve estimation (i.e., estimate based only on observations) of the number of sites with reproduction was 9 and 25 sites in Western Galicia and Asturias, respectively. Our model showed 33.4 (SD 9.6) and 34.4 (3.9) sites with wolf reproduction, respectively. The estimated proportion of occupied sites with wolf reproduction was 0.67 (SD 0.19) and 0.76 (0.11), respectively. This approach can be used to design more cost-effective monitoring programs (i.e., to define the sampling effort needed per site). Our approach should inspire well-coordinated surveys across multiple administrative borders and populations and lead to improved decision making for management of large carnivores on a landscape level. The use of this Bayesian framework provides a simple way to visualize the degree of uncertainty around population-parameter estimates and thus provides managers and stakeholders an intuitive approach to interpreting monitoring results. Our approach can be widely applied to large spatial scales in wildlife monitoring where detection probabilities differ between population states and where several methods are being used to estimate different population parameters. © 2016 Society for Conservation Biology.

  4. Comparison between deterministic and statistical wavelet estimation methods through predictive deconvolution: Seismic to well tie example from the North Sea

    NASA Astrophysics Data System (ADS)

    de Macedo, Isadora A. S.; da Silva, Carolina B.; de Figueiredo, J. J. S.; Omoboya, Bode

    2017-01-01

    Wavelet estimation as well as seismic-to-well tie procedures are at the core of every seismic interpretation workflow. In this paper we perform a comparative study of wavelet estimation methods for seismic-to-well tie. Two approaches to wavelet estimation are discussed: a deterministic estimation, based on both seismic and well log data, and a statistical estimation, based on predictive deconvolution and the classical assumptions of the convolutional model, which provides a minimum-phase wavelet. Our algorithms for both wavelet estimation methods introduce a semi-automatic approach that determines the optimum parameters of the deterministic and statistical estimations and, further, estimates the optimum seismic wavelets by searching for the highest correlation coefficient between the recorded trace and the synthetic trace when the time-depth relationship is accurate. Tests with numerical data yield qualitative conclusions, obtained by comparing deterministic and statistical wavelet estimation in detail, that are likely to be useful for seismic inversion and interpretation of field data. The feasibility of this approach is verified on real seismic and well data from the Viking Graben field, North Sea, Norway. Our results also show the influence of washout zones in the well log data on the quality of the well-to-seismic tie.

  5. An "intelligent" approach based on side-by-side cascade-correlation neural networks for estimating thermophysical properties from photothermal responses

    NASA Astrophysics Data System (ADS)

    Grieu, Stéphane; Faugeroux, Olivier; Traoré, Adama; Claudet, Bernard; Bodnar, Jean-Luc

    2015-01-01

    In the present paper, an artificial-intelligence-based approach dealing with the estimation of thermophysical properties is designed and evaluated. This new and "intelligent" approach makes use of photothermal responses obtained when subjecting materials to a light flux. The main objective of the present work was to simultaneously estimate both the thermal diffusivity and the thermal conductivity of materials from front-face or rear-face photothermal responses to pseudorandom binary signals. To this end, we used side-by-side feedforward neural networks trained with the cascade-correlation algorithm. In addition, computation time was a key consideration, so the developed algorithms were designed to be computationally tractable.

  6. A General and Flexible Approach to Estimating the Social Relations Model Using Bayesian Methods

    ERIC Educational Resources Information Center

    Ludtke, Oliver; Robitzsch, Alexander; Kenny, David A.; Trautwein, Ulrich

    2013-01-01

    The social relations model (SRM) is a conceptual, methodological, and analytical approach that is widely used to examine dyadic behaviors and interpersonal perception within groups. This article introduces a general and flexible approach to estimating the parameters of the SRM that is based on Bayesian methods using Markov chain Monte Carlo…

  7. Shape-based approach for the estimation of individual facial mimics in craniofacial surgery planning

    NASA Astrophysics Data System (ADS)

    Gladilin, Evgeny; Zachow, Stefan; Deuflhard, Peter; Hege, Hans-Christian

    2002-05-01

    Besides the static soft tissue prediction, the estimation of basic facial emotion expressions is another important criterion for the evaluation of craniofacial surgery planning. For a realistic simulation of facial mimics, an adequate biomechanical model of soft tissue including the mimic musculature is needed. In this work, we present an approach for the modeling of arbitrarily shaped muscles and the estimation of basic individual facial mimics, which is based on the geometrical model derived from the individual tomographic data and the general finite element modeling of soft tissue biomechanics.

  8. Estimation of the measurement uncertainty in magnetic resonance velocimetry based on statistical models

    NASA Astrophysics Data System (ADS)

    Bruschewski, Martin; Freudenhammer, Daniel; Buchenberg, Waltraud B.; Schiffer, Heinz-Peter; Grundmann, Sven

    2016-05-01

    Velocity measurements with magnetic resonance velocimetry offer outstanding possibilities for experimental fluid mechanics. The purpose of this study was to provide practical guidelines for the estimation of the measurement uncertainty in such experiments. Based on various test cases, it is shown that the uncertainty estimate can vary substantially depending on how the uncertainty is obtained. The conventional approach to estimate the uncertainty from the noise in the artifact-free background can lead to wrong results. A deviation of up to -75 % is observed with the presented experiments. In addition, a similarly high deviation is demonstrated with the data from other studies. As a more accurate approach, the uncertainty is estimated directly from the image region with the flow sample. Two possible estimation methods are presented.

  9. Big Numbers about Small Children: Estimating the Economic Benefits of Addressing Undernutrition.

    PubMed

    Alderman, Harold; Behrman, Jere R; Puett, Chloe

    2017-02-01

    Different approaches have been used to estimate the economic benefits of reducing undernutrition and to estimate the costs of investing in such programs on a global scale. While many of these studies are ultimately based on evidence from well-designed efficacy trials, all require a number of assumptions to project the impact of such trials to larger populations and to translate the value of the expected improvement in nutritional status into economic terms. This paper provides a short critique of some approaches to estimating the benefits of investments in child nutrition and then presents an alternative set of estimates based on different core data. These new estimates reinforce the basic conclusions of the existing literature: the economic value from reducing undernutrition in undernourished populations is likely to be substantial.

  10. Assessing an ensemble Kalman filter inference of Manning's n coefficient of an idealized tidal inlet against a polynomial chaos-based MCMC

    NASA Astrophysics Data System (ADS)

    Siripatana, Adil; Mayo, Talea; Sraj, Ihab; Knio, Omar; Dawson, Clint; Le Maitre, Olivier; Hoteit, Ibrahim

    2017-08-01

    Bayesian estimation/inversion is commonly used to quantify and reduce modeling uncertainties in coastal ocean models, especially in the framework of parameter estimation. Based on Bayes' rule, the posterior probability distribution function (pdf) of the estimated quantities is obtained conditioned on available data. It can be computed either directly, using a Markov chain Monte Carlo (MCMC) approach, or by sequentially processing the data following a data assimilation approach, which is heavily exploited in large-dimensional state estimation problems. The advantage of data assimilation schemes over MCMC-type methods arises from the ability to algorithmically accommodate a large number of uncertain quantities without a significant increase in computational requirements. However, only approximate estimates are generally obtained by this approach, due to the restricted Gaussian prior and noise assumptions that are generally imposed in these methods. This contribution aims at evaluating the effectiveness of utilizing an ensemble Kalman-based data assimilation method for parameter estimation of a coastal ocean model against an MCMC polynomial chaos (PC)-based scheme. We focus on quantifying the uncertainties of a coastal ocean ADvanced CIRCulation (ADCIRC) model with respect to the Manning's n coefficients. Based on a realistic framework of observation system simulation experiments (OSSEs), we apply an ensemble Kalman filter and the MCMC method employing a surrogate of ADCIRC constructed by a non-intrusive PC expansion for evaluating the likelihood, and test both approaches under identical scenarios. We study the sensitivity of the estimated posteriors with respect to the parameters of the inference methods, including ensemble size, inflation factor, and PC order. A full analysis of both methods, in the context of coastal ocean models, suggests that an ensemble Kalman filter with appropriate ensemble size and well-tuned inflation provides reliable mean estimates and uncertainties of Manning's n coefficients compared to the full posterior distributions inferred by MCMC.
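
    The data assimilation side of such a comparison rests on the stochastic EnKF analysis step: inflate the forecast ensemble, form the sample Kalman gain, and update each member against perturbed observations. A generic sketch (in the study, Manning's n would enter through state augmentation in ADCIRC, which is not shown; the toy ensemble below is hypothetical):

```python
import numpy as np

def enkf_analysis(ens, H, y, R, rng, inflation=1.05):
    """Stochastic EnKF analysis step: inflate the forecast ensemble, form the
    sample Kalman gain, and update each member with perturbed observations."""
    mean = ens.mean(axis=1, keepdims=True)
    ens = mean + inflation * (ens - mean)          # multiplicative inflation
    A = ens - ens.mean(axis=1, keepdims=True)      # anomalies, n_state x n_ens
    HA = H @ A
    n_ens = ens.shape[1]
    Pyy = HA @ HA.T / (n_ens - 1) + R
    Pxy = A @ HA.T / (n_ens - 1)
    K = Pxy @ np.linalg.inv(Pyy)
    y_pert = y[:, None] + rng.multivariate_normal(np.zeros(y.size), R, n_ens).T
    return ens + K @ (y_pert - H @ ens)

rng = np.random.default_rng(6)
ens = rng.normal(1.0, 0.5, size=(3, 40))           # 3 states, 40 members
H = np.array([[1.0, 0.0, 0.0]])                    # observe the first state only
print(enkf_analysis(ens, H, np.array([1.3]), np.array([[0.04]]), rng).mean(axis=1))
```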

  11. Virtual Sensor for Kinematic Estimation of Flexible Links in Parallel Robots

    PubMed Central

    Cabanes, Itziar; Mancisidor, Aitziber; Pinto, Charles

    2017-01-01

    The control of flexible link parallel manipulators is still an open area of research, endpoint trajectory tracking being one of the main challenges in this type of robot. The flexibility and deformations of the limbs make the estimation of the Tool Centre Point (TCP) position a challenging one. Authors have proposed different approaches to estimate this deformation and deduce the location of the TCP. However, most of these approaches require expensive measurement systems or the use of high computational cost integration methods. This work presents a novel approach based on a virtual sensor which can not only precisely estimate the deformation of the flexible links in control applications (less than 2% error), but also its derivatives (less than 6% error in velocity and 13% error in acceleration) according to simulation results. The validity of the proposed Virtual Sensor is tested in a Delta Robot, where the position of the TCP is estimated based on the Virtual Sensor measurements with less than a 0.03% of error in comparison with the flexible approach developed in ADAMS Multibody Software. PMID:28832510

  12. A Variational Level Set Approach Based on Local Entropy for Image Segmentation and Bias Field Correction.

    PubMed

    Tang, Jian; Jiang, Xiaoliang

    2017-01-01

    Image segmentation has always been a considerable challenge in image analysis and understanding due to intensity inhomogeneity, which is also commonly known as bias field. In this paper, we present a novel region-based approach based on local entropy for segmenting images and estimating the bias field simultaneously. First, a local Gaussian distribution fitting (LGDF) energy function is defined as a weighted energy integral, where the weight is the local entropy derived from the grey-level distribution of the local image. The means in this objective function carry a multiplicative factor that estimates the bias field in the transformed domain. The bias field prior is then fully exploited, so our model can estimate the bias field more accurately. Finally, by minimizing this energy function with a level set regularization term, image segmentation and bias field estimation are achieved. Experiments on images of various modalities demonstrated the superior performance of the proposed method when compared with other state-of-the-art approaches.

  13. Quantifying uncertainty in discharge measurements: A new approach

    USGS Publications Warehouse

    Kiang, J.E.; Cohn, T.A.; Mason, R.R.

    2009-01-01

    The accuracy of discharge measurements using velocity meters and the velocity-area method is typically assessed based on empirical studies that may not correspond to conditions encountered in practice. In this paper, a statistical approach for assessing uncertainty based on interpolated variance estimation (IVE) is introduced. The IVE method quantifies all sources of random uncertainty in the measured data. This paper presents results employing data from sites where substantial over-sampling allowed for the comparison of IVE-estimated uncertainty and observed variability among repeated measurements. These results suggest that the IVE approach can provide approximate estimates of measurement uncertainty. The use of IVE to estimate the uncertainty of a discharge measurement would provide the hydrographer an immediate determination of uncertainty and help determine whether there is a need for additional sampling in problematic river cross sections. © 2009 ASCE.

  14. Ground target recognition using rectangle estimation.

    PubMed

    Grönwall, Christina; Gustafsson, Fredrik; Millnert, Mille

    2006-11-01

    We propose a ground target recognition method based on 3-D laser radar data. The method handles general 3-D scattered data. It is based on the fact that man-made objects of complex shape can be decomposed into a set of rectangles. The ground target recognition method consists of four steps: 3-D size and orientation estimation; target segmentation into parts of approximately rectangular shape; identification of the segments that represent the target's functional/main parts; and target matching with CAD models. The core of this approach is rectangle estimation. The performance of the rectangle estimation method is evaluated statistically using Monte Carlo simulations. A case study on tank recognition is shown, where 3-D data from four fundamentally different types of laser radar systems are used. Although the approach is tested on rather few examples, we believe it is promising.
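
    The rectangle-estimation core can be illustrated in two dimensions: take the principal axes of the point cloud as the orientation and the projection extents as the size. A sketch of this simplification (the paper works with 3-D scattered laser radar data and a more elaborate estimator):

```python
import numpy as np

def fit_rectangle(points):
    """Oriented-rectangle fit via principal axes: eigenvectors of the point
    covariance give the orientation; projection extents give the size."""
    c = points.mean(axis=0)
    _, vecs = np.linalg.eigh(np.cov((points - c).T))
    proj = (points - c) @ vecs
    lo, hi = proj.min(axis=0), proj.max(axis=0)
    center = c + vecs @ ((lo + hi) / 2)
    angle = np.degrees(np.arctan2(vecs[1, -1], vecs[0, -1]))
    return center, hi - lo, angle

rng = np.random.default_rng(7)
pts = rng.uniform([-2.0, -0.5], [2.0, 0.5], size=(400, 2))   # a 4 x 1 rectangle
th = np.deg2rad(30.0)
Rm = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
print(fit_rectangle(pts @ Rm.T))   # size ~ (1, 4), angle ~ 30 degrees
```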

  15. A Global Carbon Assimilation System using a modified EnKF assimilation method

    NASA Astrophysics Data System (ADS)

    Zhang, S.; Zheng, X.; Chen, Z.; Dan, B.; Chen, J. M.; Yi, X.; Wang, L.; Wu, G.

    2014-10-01

    A Global Carbon Assimilation System based on the ensemble Kalman filter (GCAS-EK) is developed for assimilating atmospheric CO2 abundance data into an ecosystem model to simultaneously estimate the surface carbon fluxes and the atmospheric CO2 distribution. This assimilation approach is based on the ensemble Kalman filter (EnKF), but with several new developments, including the use of analysis states to iteratively estimate ensemble forecast errors, and a maximum likelihood estimation of the inflation factors of the forecast and observation errors. The proposed assimilation approach is tested in observing system simulation experiments and then used to estimate the terrestrial ecosystem carbon fluxes and atmospheric CO2 distributions from 2002 to 2008. The results showed that this assimilation approach can effectively reduce the biases and uncertainties of the carbon fluxes simulated by the ecosystem model.

  16. Regional economic activity and absenteeism: a new approach to estimating the indirect costs of employee productivity loss.

    PubMed

    Bankert, Brian; Coberley, Carter; Pope, James E; Wells, Aaron

    2015-02-01

    This paper presents a new approach to estimating the indirect costs of health-related absenteeism. Productivity losses related to employee absenteeism have negative business implications for employers and these losses effectively deprive the business of an expected level of employee labor. The approach herein quantifies absenteeism cost using an output per labor hour-based method and extends employer-level results to the region. This new approach was applied to the employed population of 3 health insurance carriers. The economic cost of absenteeism was estimated to be $6.8 million, $0.8 million, and $0.7 million on average for the 3 employers; regional losses were roughly twice the magnitude of employer-specific losses. The new approach suggests that costs related to absenteeism for high output per labor hour industries exceed similar estimates derived from application of the human capital approach. The materially higher costs under the new approach emphasize the importance of accurately estimating productivity losses.
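
    The output per labor hour method values each lost hour at the employer's output per hour worked rather than at the wage rate. A sketch with entirely hypothetical firm-level figures:

```python
# Output-per-labor-hour costing of absenteeism; all figures are hypothetical.
annual_output_usd = 250_000_000      # value of the firm's annual output
labor_hours = 2_000_000              # total hours worked to produce it
n_employees = 1_000
absence_hours_per_employee = 28      # health-related absence hours per year

output_per_hour = annual_output_usd / labor_hours
loss = output_per_hour * absence_hours_per_employee * n_employees
print(f"Estimated annual productivity loss: ${loss:,.0f}")   # $3,500,000
```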

  17. Estimating basin scale evapotranspiration (ET) by water balance and remote sensing methods

    USGS Publications Warehouse

    Senay, G.B.; Leake, S.; Nagler, P.L.; Artan, G.; Dickinson, J.; Cordova, J.T.; Glenn, E.P.

    2011-01-01

    Evapotranspiration (ET) is an important hydrological process that can be studied and estimated at multiple spatial scales ranging from a leaf to a river basin. We present a review of methods in estimating basin scale ET and its applications in understanding basin water balance dynamics. The review focuses on two aspects of ET: (i) how the basin scale water balance approach is used to estimate ET; and (ii) how ‘direct’ measurement and modelling approaches are used to estimate basin scale ET. Obviously, the basin water balance-based ET requires the availability of good precipitation and discharge data to calculate ET as a residual on longer time scales (annual) where net storage changes are assumed to be negligible. ET estimated from such a basin water balance principle is generally used for validating the performance of ET models. On the other hand, many of the direct estimation methods involve the use of remotely sensed data to estimate spatially explicit ET and use basin-wide averaging to estimate basin scale ET. The direct methods can be grouped into soil moisture balance modelling, satellite-based vegetation index methods, and methods based on satellite land surface temperature measurements that convert potential ET into actual ET using a proportionality relationship. The review also includes the use of complementary ET estimation principles for large area applications. The review identifies the need to compare and evaluate the different ET approaches using standard data sets in basins covering different hydro-climatic regions of the world.
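
    At annual scale with negligible storage change, the water-balance ET is simply precipitation minus discharge expressed as a basin-average depth. A sketch with hypothetical basin numbers:

```python
# Annual basin water balance with negligible net storage change: ET = P - Q,
# with discharge converted to a basin-average depth. Figures are hypothetical.
basin_area_km2 = 12_000
precip_mm = 950                      # basin-average annual precipitation
discharge_m3 = 4.2e9                 # annual discharge volume at the outlet

runoff_mm = discharge_m3 / (basin_area_km2 * 1e6) * 1000.0
print(f"ET = {precip_mm - runoff_mm:.0f} mm/yr (runoff depth {runoff_mm:.0f} mm)")
```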

  18. Variable selection for confounder control, flexible modeling and Collaborative Targeted Minimum Loss-based Estimation in causal inference

    PubMed Central

    Schnitzer, Mireille E.; Lok, Judith J.; Gruber, Susan

    2015-01-01

    This paper investigates the appropriateness of the integration of flexible propensity score modeling (nonparametric or machine learning approaches) in semiparametric models for the estimation of a causal quantity, such as the mean outcome under treatment. We begin with an overview of some of the issues involved in knowledge-based and statistical variable selection in causal inference and the potential pitfalls of automated selection based on the fit of the propensity score. Using a simple example, we directly show the consequences of adjusting for pure causes of the exposure when using inverse probability of treatment weighting (IPTW). Such variables are likely to be selected when using a naive approach to model selection for the propensity score. We describe how the method of Collaborative Targeted minimum loss-based estimation (C-TMLE; van der Laan and Gruber, 2010) capitalizes on the collaborative double robustness property of semiparametric efficient estimators to select covariates for the propensity score based on the error in the conditional outcome model. Finally, we compare several approaches to automated variable selection in low- and high-dimensional settings through a simulation study. From this simulation study, we conclude that using IPTW with flexible prediction for the propensity score can result in inferior estimation, while Targeted minimum loss-based estimation and C-TMLE may benefit from flexible prediction and remain robust to the presence of variables that are highly correlated with treatment. However, in our study, standard influence function-based methods for the variance underestimated the standard errors, resulting in poor coverage under certain data-generating scenarios. PMID:26226129
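
    The IPTW estimator discussed here weights treated subjects by the inverse of their estimated propensity score; instability arises when near-instruments push the score toward 0 or 1. A minimal sketch of the normalized (Hajek) estimator of the mean outcome under treatment, on a synthetic data set:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def iptw_mean_under_treatment(X, treated, y):
    """Normalized IPTW estimate of the mean outcome under treatment: treated
    subjects are weighted by the inverse of the estimated propensity score."""
    ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
    w = treated / ps
    return np.sum(w * y) / np.sum(w)

rng = np.random.default_rng(8)
n = 2000
X = rng.normal(size=(n, 2))                       # confounders
treated = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))
y = 1.0 + treated + X[:, 0] + rng.normal(0, 1, n)
print(iptw_mean_under_treatment(X, treated, y))   # ~ E[Y(1)] = 2.0
```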

  19. Variable Selection for Confounder Control, Flexible Modeling and Collaborative Targeted Minimum Loss-Based Estimation in Causal Inference.

    PubMed

    Schnitzer, Mireille E; Lok, Judith J; Gruber, Susan

    2016-05-01

    This paper investigates the appropriateness of the integration of flexible propensity score modeling (nonparametric or machine learning approaches) in semiparametric models for the estimation of a causal quantity, such as the mean outcome under treatment. We begin with an overview of some of the issues involved in knowledge-based and statistical variable selection in causal inference and the potential pitfalls of automated selection based on the fit of the propensity score. Using a simple example, we directly show the consequences of adjusting for pure causes of the exposure when using inverse probability of treatment weighting (IPTW). Such variables are likely to be selected when using a naive approach to model selection for the propensity score. We describe how the method of Collaborative Targeted minimum loss-based estimation (C-TMLE; van der Laan and Gruber, 2010 [27]) capitalizes on the collaborative double robustness property of semiparametric efficient estimators to select covariates for the propensity score based on the error in the conditional outcome model. Finally, we compare several approaches to automated variable selection in low- and high-dimensional settings through a simulation study. From this simulation study, we conclude that using IPTW with flexible prediction for the propensity score can result in inferior estimation, while Targeted minimum loss-based estimation and C-TMLE may benefit from flexible prediction and remain robust to the presence of variables that are highly correlated with treatment. However, in our study, standard influence function-based methods for the variance underestimated the standard errors, resulting in poor coverage under certain data-generating scenarios.

  20. Robust small area estimation of poverty indicators using M-quantile approach (Case study: Sub-district level in Bogor district)

    NASA Astrophysics Data System (ADS)

    Girinoto, Sadik, Kusman; Indahwati

    2017-03-01

    The National Socio-Economic Survey samples are designed to produce estimates of parameters for planned domains (provinces and districts). The estimation for unplanned domains (sub-districts and villages) has its limitations, making it difficult to obtain reliable direct estimates. One possible solution to overcome this problem is to employ small area estimation techniques. The popular choice for small area estimation is based on linear mixed models. However, such models need strong distributional assumptions and do not easily allow for outlier-robust estimation. As an alternative for this purpose, the M-quantile regression approach to small area estimation is based on modeling specific M-quantile coefficients of the conditional distribution of the study variable given auxiliary covariates. It provides outlier-robust estimation through an influence function of the M-estimator type and does not require strong distributional assumptions. In this paper, the aim is to estimate poverty indicators at the sub-district level in Bogor District, West Java, using M-quantile models for small area estimation. Using data taken from the National Socioeconomic Survey and Village Potential Statistics, the results provide a detailed description of the pattern of incidence and intensity of poverty within Bogor district. We also compare the results with direct estimates. The results show that the framework may be preferable when the direct estimates indicate no incidence of poverty at all in a small area.

  1. Using Neural Networks for Sensor Validation

    NASA Technical Reports Server (NTRS)

    Mattern, Duane L.; Jaw, Link C.; Guo, Ten-Huei; Graham, Ronald; McCoy, William

    1998-01-01

    This paper presents the results of applying two different types of neural networks in two different approaches to the sensor validation problem. The first approach uses a functional approximation neural network as part of a nonlinear observer in a model-based approach to analytical redundancy. The second approach uses an auto-associative neural network to perform nonlinear principal component analysis on a set of redundant sensors to provide an estimate for a single failed sensor. The approaches are demonstrated using a nonlinear simulation of a turbofan engine. The fault detection and sensor estimation results are presented and the training of the auto-associative neural network to provide sensor estimates is discussed.

  2. Numerical discretization-based estimation methods for ordinary differential equation models via penalized spline smoothing with applications in biomedical research.

    PubMed

    Wu, Hulin; Xue, Hongqi; Kumar, Arun

    2012-06-01

    Differential equations are extensively used for modeling dynamics of physical processes in many scientific fields such as engineering, physics, and biomedical sciences. Parameter estimation of differential equation models is a challenging problem because of high computational cost and high-dimensional parameter space. In this article, we propose a novel class of methods for estimating parameters in ordinary differential equation (ODE) models, which is motivated by HIV dynamics modeling. The new methods exploit the form of numerical discretization algorithms for an ODE solver to formulate estimating equations. First, a penalized-spline approach is employed to estimate the state variables, and the estimated state variables are then plugged into a discretization formula of an ODE solver to obtain the ODE parameter estimates via a regression approach. We consider three discretization methods of different orders: Euler's method, the trapezoidal rule, and the Runge-Kutta method. A higher-order numerical algorithm reduces numerical error in the approximation of the derivative, which produces a more accurate estimate, but its computational cost is higher. To balance the computational cost and estimation accuracy, we demonstrate, via simulation studies, that the trapezoidal discretization-based estimate is the best and is recommended for practical use. The asymptotic properties for the proposed numerical discretization-based estimators are established. Comparisons between the proposed methods and existing methods show a clear benefit of the proposed methods with regard to the trade-off between computational cost and estimation accuracy. We apply the proposed methods to an HIV study to further illustrate the usefulness of the proposed approaches. © 2012, The International Biometric Society.
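
    The two-stage idea can be sketched with a scalar exponential-decay model standing in for the HIV dynamics: smooth the noisy state, then plug the smoothed values into the trapezoidal-rule discretization, which here reduces parameter estimation to a one-dimensional regression. The model, noise level, and smoothing parameter below are hypothetical:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(9)
theta_true = 0.7
t = np.linspace(0.0, 5.0, 60)
x_obs = np.exp(-theta_true * t) + rng.normal(0, 0.02, t.size)

# Stage 1: spline smoothing of the noisy state observations.
x_hat = UnivariateSpline(t, x_obs, s=t.size * 0.02 ** 2)(t)

# Stage 2: the trapezoidal-rule discretization of dx/dt = -theta * x turns
# parameter estimation into a simple linear regression for theta.
dt = np.diff(t)
lhs = np.diff(x_hat)                               # x_{i+1} - x_i
rhs = -0.5 * dt * (x_hat[1:] + x_hat[:-1])         # regressor multiplying theta
print(np.sum(rhs * lhs) / np.sum(rhs * rhs))       # ~ 0.7
```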

  3. Deep learning ensemble with asymptotic techniques for oscillometric blood pressure estimation.

    PubMed

    Lee, Soojeong; Chang, Joon-Hyuk

    2017-11-01

    This paper proposes a deep learning based ensemble regression estimator with asymptotic techniques, and offers a method that can decrease uncertainty for oscillometric blood pressure (BP) measurements using the bootstrap and Monte-Carlo approach. While the former is used to estimate SBP and DBP, the latter attempts to determine confidence intervals (CIs) for SBP and DBP based on oscillometric BP measurements. This work originally employs deep belief networks (DBN)-deep neural networks (DNN) to effectively estimate BPs based on oscillometric measurements. However, there are some inherent problems with these methods. First, it is not easy to determine the best DBN-DNN estimator, and worthy information might be omitted when selecting one DBN-DNN estimator and discarding the others. Additionally, our input feature vectors, obtained from only five measurements per subject, represent a very small sample size; this is a critical weakness when using the DBN-DNN technique and can cause overfitting or underfitting, depending on the structure of the algorithm. To address these problems, an ensemble with an asymptotic approach (based on combining the bootstrap with the DBN-DNN technique) is utilized to generate the pseudo features needed to estimate the SBP and DBP. In the first stage, the bootstrap-aggregation technique is used to create ensemble parameters. Afterward, the AdaBoost approach is employed for the second-stage SBP and DBP estimation. We then use the bootstrap and Monte-Carlo techniques in order to determine the CIs based on the target BP estimated using the DBN-DNN ensemble regression estimator with the asymptotic technique in the third stage. The proposed method mitigates estimation uncertainty such as a large standard deviation of error (SDE): comparing the proposed DBN-DNN ensemble regression estimator with the single DBN-DNN regression estimator, we find that the SDEs of the SBP and DBP are reduced by 0.58 and 0.57 mmHg, respectively. These results indicate that the proposed method enhances performance by 9.18% and 10.88% compared with the single DBN-DNN estimator. The proposed methodology improves the accuracy of BP estimation and reduces the uncertainty of BP estimation. Copyright © 2017 Elsevier B.V. All rights reserved.
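
    The bagging-plus-Monte-Carlo logic can be sketched as follows; a plain scikit-learn MLP stands in for the pretrained DBN-DNN and the AdaBoost second stage is omitted, so this is only a schematic of the ensemble-and-CI idea, with all names illustrative.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      def bootstrap_ensemble(X, y, n_boot=25, seed=0):
          # Stage 1: each network sees a bootstrap resample, which also
          # generates pseudo-samples to offset the tiny per-subject sample.
          rng = np.random.default_rng(seed)
          nets = []
          for b in range(n_boot):
              idx = rng.integers(0, len(y), len(y))
              nets.append(MLPRegressor(hidden_layer_sizes=(16, 8),
                                       max_iter=3000, random_state=b)
                          .fit(X[idx], y[idx]))
          return nets

      def predict_with_ci(nets, x, alpha=0.05):
          # Monte-Carlo over ensemble members approximates the sampling
          # distribution of the BP estimate; percentiles give the CI.
          preds = np.array([n.predict(x.reshape(1, -1))[0] for n in nets])
          lo, hi = np.quantile(preds, [alpha / 2, 1 - alpha / 2])
          return preds.mean(), (lo, hi)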

  4. A comparison of operational remote sensing-based models for estimating crop evapotranspiration

    USDA-ARS?s Scientific Manuscript database

    The integration of remotely sensed data into models of actual evapotranspiration has allowed for the estimation of water consumption across agricultural regions. Two modeling approaches have been successfully applied. The first approach computes a surface energy balance using the radiometric surface...

  5. Estimation versus falsification approaches in sport and exercise science.

    PubMed

    Wilkinson, Michael; Winter, Edward M

    2018-05-22

    There has been a recent resurgence in debate about methods for statistical inference in science. The debate addresses statistical concepts and their impact on the value and meaning of analyses' outcomes. In contrast, philosophical underpinnings of approaches and the extent to which analytical tools match philosophical goals of the scientific method have received less attention. This short piece considers application of the scientific method to "what-is-the-influence-of x-on-y" type questions characteristic of sport and exercise science. We consider applications and interpretations of estimation- versus falsification-based statistical approaches and their value in addressing how much x influences y, and in measurement error and method agreement settings. We compare estimation using magnitude-based inference (MBI) with falsification using null hypothesis significance testing (NHST), and highlight the limited value of both falsification and NHST for addressing problems in sport and exercise science. We recommend adopting an estimation approach, expressing the uncertainty of effects of x on y, and evaluating their practical/clinical value against pre-determined effect magnitudes using MBI.

  6. LC-MS/MS-based approach for obtaining exposure estimates of metabolites in early clinical trials using radioactive metabolites as reference standards.

    PubMed

    Zhang, Donglu; Raghavan, Nirmala; Chando, Theodore; Gambardella, Janice; Fu, Yunlin; Zhang, Duxi; Unger, Steve E; Humphreys, W Griffith

    2007-12-01

    An LC-MS/MS-based approach that employs authentic radioactive metabolites as reference standards was developed to estimate metabolite exposures in early drug development studies. This method is useful for estimating metabolite levels in studies done with non-radiolabeled compounds, where metabolite standards are not available to allow standard LC-MS/MS assay development. A metabolite mixture obtained from an in vivo source treated with a radiolabeled compound was partially purified, quantified, and spiked into human plasma to provide metabolite standard curves. Metabolites were analyzed by LC-MS/MS using the specific mass transitions and an internal standard. The metabolite concentrations determined by this approach were found to be comparable to those determined by validated LC-MS/MS assays. This approach does not require synthesis of authentic metabolites or knowledge of the exact structures of the metabolites, and therefore should provide a useful method for obtaining early estimates of circulating metabolites in early clinical or toxicological studies.

  7. Operational Retrievals of Evapotranspiration: Are we there yet?

    NASA Astrophysics Data System (ADS)

    Neale, C. M. U.; Anderson, M. C.; Hain, C.; Schull, M.; Isidro, C., Sr.; Goncalves, I. Z.

    2017-12-01

    Remote sensing based retrievals of evapotranspiration (ET) have progressed significantly over the last two decades with the improvement of methods and algorithms and the availability of multiple satellite sensors with shortwave and thermal infrared bands on polar orbiting platforms. The modeling approaches include simpler vegetation index (VI) based methods, such as the reflectance-based crop coefficient approach coupled with surface reference evapotranspiration estimates to derive actual evapotranspiration of crops, or direct inputs to the Penman-Monteith equation through VI relationships with certain input variables. Methods that are more complex include one-layer or two-layer energy balance approaches that make use of both shortwave and longwave spectral band information to estimate different inputs to the energy balance equation. These models mostly differ in the estimation of sensible heat fluxes. For continental and global scale applications, other satellite-based products such as solar radiation, vegetation leaf area and cover are used as inputs, along with gridded re-analysis weather information. This presentation will review the state-of-the-art in satellite-based evapotranspiration estimation, giving examples of existing efforts to obtain operational ET retrievals over continental and global scales and discussing difficulties and challenges.

  8. Estimation of Faults in DC Electrical Power System

    NASA Technical Reports Server (NTRS)

    Gorinevsky, Dimitry; Boyd, Stephen; Poll, Scott

    2009-01-01

    This paper demonstrates a novel optimization-based approach to estimating fault states in a DC power system. Potential faults changing the circuit topology are included along with faulty measurements. Our approach can be considered as a relaxation of the mixed estimation problem. We develop a linear model of the circuit and pose a convex problem for estimating the faults and other hidden states. A sparse fault vector solution is computed by using ℓ1 regularization. The solution is computed reliably and efficiently, and gives accurate diagnostics on the faults. We demonstrate a real-time implementation of the approach for an instrumented electrical power system testbed, the ADAPT testbed at NASA ARC. The estimates are computed in milliseconds on a PC. The approach performs well despite unmodeled transients and other modeling uncertainties present in the system.
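
    A minimal solver for this kind of ℓ1-regularized least-squares relaxation is iterative soft-thresholding (ISTA); the linearized measurement matrix and fault vector below are synthetic stand-ins for the circuit model, not the paper's data.

      import numpy as np

      def ista_l1(A, y, lam, n_iter=500):
          # min_x 0.5 * ||A x - y||^2 + lam * ||x||_1 by iterative
          # soft-thresholding (proximal gradient descent).
          L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant
          x = np.zeros(A.shape[1])
          for _ in range(n_iter):
              z = x - A.T @ (A @ x - y) / L
              x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
          return x

      rng = np.random.default_rng(2)
      A = rng.standard_normal((30, 60))        # linearized circuit model
      x_true = np.zeros(60)
      x_true[[4, 17]] = [1.5, -2.0]            # two simultaneous faults
      y = A @ x_true + 0.01 * rng.standard_normal(30)
      x_hat = ista_l1(A, y, lam=0.1)           # sparse fault estimate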

  9. An improved thermo-time domain reflectometry method for determination of ice contents in partially frozen soils

    NASA Astrophysics Data System (ADS)

    Tian, Zhengchao; Ren, Tusheng; Kojima, Yuki; Lu, Yili; Horton, Robert; Heitman, Joshua L.

    2017-12-01

    Measuring ice contents (θi) in partially frozen soils is important for both engineering and environmental applications. Thermo-time domain reflectometry (thermo-TDR) probes can be used to determine θi based on the relationship between θi and soil heat capacity (C). This approach, however, is accurate in partially frozen soils only at temperatures below -5 °C, and it performs poorly on clayey soils. In this study, we present and evaluate a soil thermal conductivity (λ)-based approach to determine θi with thermo-TDR probes. Bulk soil λ is described with a simplified de Vries model that relates λ to θi. From this model, θi is estimated using inverse modeling of thermo-TDR measured λ. Soil bulk density (ρb) and thermo-TDR measured liquid water content (θl) are also needed for both C-based and λ-based approaches. A theoretical analysis is performed to quantify the sensitivity of C-based and λ-based θi estimates to errors in these input parameters. The analysis indicates that the λ-based approach is less sensitive to errors in the inputs (C, λ, θl, and ρb) than is the C-based approach when the same or the same percentage errors occur. Further evaluations of the C-based and λ-based approaches are made using experimentally determined θi at different temperatures on eight soils with various textures, total water contents, and ρb. The results show that the λ-based thermo-TDR approach significantly improves the accuracy of θi measurements at temperatures ≤-5 °C. The root mean square errors of λ-based θi estimates are only half those of C-based θi. At temperatures of -1 and -2 °C, the λ-based thermo-TDR approach also provides reasonable θi, while the C-based approach fails. We conclude that the λ-based thermo-TDR method can reliably determine θi even at temperatures near the freezing point of water (0 °C).
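
    The inversion step of the λ-based approach reduces to one-dimensional root finding; the forward model below is a hypothetical linear stand-in for the simplified de Vries model, so only the mechanics are illustrated.

      import numpy as np
      from scipy.optimize import brentq

      def lambda_model(theta_i, theta_l, rho_b):
          # Hypothetical simplified forward model: bulk conductivity rising
          # with ice content (a stand-in for the simplified de Vries model).
          lam_dry = 0.2 + 0.15 * rho_b               # W m-1 K-1, illustrative
          return lam_dry + 0.5 * theta_l + 2.2 * theta_i

      def invert_ice_content(lam_measured, theta_l, rho_b):
          # lambda-based approach: find theta_i whose modeled conductivity
          # matches the thermo-TDR measurement.
          f = lambda th: lambda_model(th, theta_l, rho_b) - lam_measured
          return brentq(f, 0.0, 0.6)

      theta_i = invert_ice_content(lam_measured=1.1, theta_l=0.10, rho_b=1.3)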

  10. Estimating Oxygen Needs for Childhood Pneumonia in Developing Country Health Systems: A New Model for Expecting the Unexpected

    PubMed Central

    Bradley, Beverly D.; Howie, Stephen R. C.; Chan, Timothy C. Y.; Cheng, Yu-Ling

    2014-01-01

    Background Planning for the reliable and cost-effective supply of a health service commodity such as medical oxygen requires an understanding of the dynamic need or ‘demand’ for the commodity over time. In developing country health systems, however, collecting longitudinal clinical data for forecasting purposes is very difficult. Furthermore, approaches to estimating demand for supplies based on annual averages can underestimate demand some of the time by missing temporal variability. Methods A discrete event simulation model was developed to estimate variable demand for a health service commodity using the important example of medical oxygen for childhood pneumonia. The model is based on five key factors affecting oxygen demand: annual pneumonia admission rate, hypoxaemia prevalence, degree of seasonality, treatment duration, and oxygen flow rate. These parameters were varied over a wide range of values to generate simulation results for different settings. Total oxygen volume, peak patient load, and hours spent above average-based demand estimates were computed for both low and high seasons. Findings Oxygen demand estimates based on annual average values of demand factors can often severely underestimate actual demand. For scenarios with high hypoxaemia prevalence and degree of seasonality, demand can exceed average levels up to 68% of the time. Even for typical scenarios, demand may exceed three times the average level for several hours per day. Peak patient load is sensitive to hypoxaemia prevalence, whereas time spent at such peak loads is strongly influenced by degree of seasonality. Conclusion A theoretical study is presented whereby a simulation approach to estimating oxygen demand is used to better capture temporal variability compared to standard average-based approaches. This approach provides better grounds for health service planning, including decision-making around technologies for oxygen delivery. Beyond oxygen, this approach is widely applicable to other areas of resource and technology planning in developing country health systems. PMID:24587089
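
    The core simulation logic can be sketched in a few lines; the parameter values and distributions below (Poisson admissions, exponential stays) are illustrative assumptions standing in for the paper's discrete event model.

      import numpy as np

      def simulate_oxygen_demand(days=365, admissions_per_year=500,
                                 hypox_prev=0.15, seasonality=0.5,
                                 mean_stay_days=3.0, flow_lpm=1.0, seed=0):
          # Seasonal Poisson admissions; each hypoxaemic child occupies a
          # concurrent oxygen "slot" for a random treatment duration.
          rng = np.random.default_rng(seed)
          load = np.zeros(days)                      # concurrent patients on O2
          for d in range(days):
              rate = (admissions_per_year / 365.0
                      * (1.0 + seasonality * np.sin(2.0 * np.pi * d / 365.0)))
              n_o2 = rng.binomial(rng.poisson(rate), hypox_prev)
              for _ in range(n_o2):
                  stay = max(1, int(rng.exponential(mean_stay_days)))
                  load[d:d + stay] += 1
          peak = load.max()
          frac_above_avg = (load > load.mean()).mean()  # time above average
          litres = load.sum() * flow_lpm * 60 * 24      # 1 slot-day at flow_lpm
          return peak, frac_above_avg, litres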

  11. A Robust Random Forest-Based Approach for Heart Rate Monitoring Using Photoplethysmography Signal Contaminated by Intense Motion Artifacts.

    PubMed

    Ye, Yalan; He, Wenwen; Cheng, Yunfei; Huang, Wenxia; Zhang, Zhilin

    2017-02-16

    The estimation of heart rate (HR) based on wearable devices is of interest in fitness. Photoplethysmography (PPG) is a promising approach to estimating HR due to its low cost; however, it is easily corrupted by motion artifacts (MA). In this work, a robust two-stage approach based on random forests is proposed for accurately estimating HR from photoplethysmography signals contaminated by intense motion artifacts. Stage 1 proposes a hybrid method to effectively remove MA with a low computational complexity, where two MA removal algorithms are combined by an accurate binary decision algorithm whose aim is to decide whether or not to adopt the second MA removal algorithm. Stage 2 proposes a random forest-based spectral peak-tracking algorithm whose aim is to locate the spectral peak corresponding to HR, formulating the problem of spectral peak tracking as a pattern classification problem. Experiments on the 22-subject PPG datasets used in the 2015 IEEE Signal Processing Cup showed that the proposed approach achieved an average absolute error of 1.65 beats per minute (BPM). Compared to state-of-the-art approaches, the proposed approach has better accuracy and robustness to intense motion artifacts, indicating its potential use in wearable sensors for health monitoring and fitness tracking.

  12. Robust Statistical Approaches for RSS-Based Floor Detection in Indoor Localization.

    PubMed

    Razavi, Alireza; Valkama, Mikko; Lohan, Elena Simona

    2016-05-31

    Floor detection for indoor 3D localization of mobile devices is currently an important challenge in the wireless world. Many approaches currently exist, but usually the robustness of such approaches is not addressed or investigated. The goal of this paper is to show how to robustify the floor estimation when probabilistic approaches with a low number of parameters are employed. Indeed, such an approach would allow a building-independent estimation and lower computing power on the mobile side. Four robustified algorithms are presented: a robust weighted centroid localization method, a robust linear trilateration method, a robust nonlinear trilateration method, and a robust deconvolution method. The proposed approaches use the received signal strengths (RSS) measured by the Mobile Station (MS) from various heard WiFi access points (APs) and provide an estimate of the vertical position of the MS, which can be used for floor detection. We show that robustification can indeed increase the performance of the RSS-based floor detection algorithms.
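
    The weighted-centroid idea is compact enough to sketch; trimming the RSS tails is one simple robustification shown here for illustration (the paper's four variants robustify the estimators themselves), and the AP layout, weighting rule, and floor height are assumptions.

      import numpy as np

      def robust_weighted_centroid_z(ap_z, rss_dbm, trim=0.2):
          # Weighted-centroid altitude: APs heard with stronger RSS pull
          # the estimate harder; trimming the tails discards outliers.
          z = np.asarray(ap_z, float)
          rss = np.asarray(rss_dbm, float)
          w = 10.0 ** (rss / 20.0)                # linear-scale weights
          order = np.argsort(rss)
          k = int(len(order) * trim)
          keep = order[k:len(order) - k] if k > 0 else order
          return np.sum(w[keep] * z[keep]) / np.sum(w[keep])

      # APs at known floor heights (m) and the RSS heard by the MS
      z_hat = robust_weighted_centroid_z(
          ap_z=[3, 3, 6, 6, 9, 12],
          rss_dbm=[-45, -50, -60, -88, -70, -92])
      floor = int(round(z_hat / 3.0))             # assuming 3 m per floor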

  13. Geological Carbon Sequestration Storage Resource Estimates for the Ordovician St. Peter Sandstone, Illinois and Michigan Basins, USA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barnes, David; Ellett, Kevin; Leetaru, Hannes

    The Cambro-Ordovician strata of the Midwest of the United States are a primary target for potential geological storage of CO2 in deep saline formations. The objective of this project is to develop a comprehensive evaluation of the Cambro-Ordovician strata in the Illinois and Michigan Basins above the basal Mount Simon Sandstone, since the Mount Simon is the subject of other investigations including a demonstration-scale injection at the Illinois Basin Decatur Project. The primary reservoir targets investigated in this study are the middle Ordovician St Peter Sandstone and the late Cambrian to early Ordovician Knox Group carbonates. The topic of this report is a regional-scale evaluation of the geologic storage resource potential of the St Peter Sandstone in both the Illinois and Michigan Basins. Multiple deterministic-based approaches were used in conjunction with the probabilistic-based storage efficiency factors published in the DOE methodology to estimate the carbon storage resource of the formation. Extensive data sets of core analyses and wireline logs were compiled to develop the necessary inputs for volumetric calculations. Results demonstrate how the range in uncertainty of storage resource estimates varies as a function of data availability and quality, and the underlying assumptions used in the different approaches. In the simplest approach, storage resource estimates were calculated from mapping the gross thickness of the formation and applying a single estimate of the effective mean porosity of the formation. Results from this approach led to storage resource estimates ranging from 3.3 to 35.1 Gt in the Michigan Basin, and 1.0 to 11.0 Gt in the Illinois Basin, at the P10 and P90 probability levels, respectively. The second approach involved consideration of the diagenetic history of the formation throughout the two basins and used depth-dependent functions of porosity to derive a more realistic, spatially variable model of porosity, rather than applying a single estimate of porosity throughout the entire potential reservoir domains. The second approach resulted in storage resource estimates of 3.0 to 31.6 Gt in the Michigan Basin, and 0.6 to 6.1 Gt in the Illinois Basin. The third approach attempted to account for the local-scale variability in reservoir quality, as a function of both porosity and permeability, by using core and log analyses to calculate explicitly the net effective porosity at multiple well locations and interpolate those results throughout the two basins. This approach resulted in storage resource estimates of 10.7 to 34.7 Gt in the Michigan Basin, and 11.2 to 36.4 Gt in the Illinois Basin. A final approach used advanced reservoir characterization as the most sophisticated means of estimating the storage resource, by defining reservoir properties for multiple facies within the St Peter formation. This approach was limited to the Michigan Basin, since the Illinois Basin data set did not have the requisite level of data quality and sampling density to support such an analysis. Results from this approach led to storage resource estimates of 15.4 Gt to 50.1 Gt for the Michigan Basin. The observed variability in results from the four different approaches is evaluated in the context of data and methodological constraints, leading to the conclusion that the storage resource estimates from the first two approaches may be conservative, whereas the net-porosity-based approaches may over-estimate the resource.
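
    Volumetric estimates of this kind follow the DOE formula G = A * h * phi * rho * E; a Monte-Carlo sketch with purely illustrative input distributions (not the report's data) shows how P10/P90 ranges of this sort arise.

      import numpy as np

      rng = np.random.default_rng(3)
      n = 100_000
      area_m2   = 1.9e11                               # fixed mapped area, m2
      thickness = rng.triangular(20, 60, 120, n)       # gross thickness, m
      porosity  = rng.normal(0.10, 0.02, n).clip(0.02) # effective porosity
      rho_co2   = rng.uniform(600, 750, n)             # CO2 density, kg/m3
      eff       = rng.lognormal(np.log(0.02), 0.5, n)  # DOE efficiency E

      G_kg = area_m2 * thickness * porosity * rho_co2 * eff
      p10, p90 = np.percentile(G_kg / 1e12, [10, 90])  # gigatonnes CO2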

  14. State Estimation for Tensegrity Robots

    NASA Technical Reports Server (NTRS)

    Caluwaerts, Ken; Bruce, Jonathan; Friesen, Jeffrey M.; Sunspiral, Vytas

    2016-01-01

    Tensegrity robots are a class of compliant robots that have many desirable traits when designing mass efficient systems that must interact with uncertain environments. Various promising control approaches have been proposed for tensegrity systems in simulation. Unfortunately, state estimation methods for tensegrity robots have not yet been thoroughly studied. In this paper, we present the design and evaluation of a state estimator for tensegrity robots. This state estimator will enable existing and future control algorithms to transfer from simulation to hardware. Our approach is based on the unscented Kalman filter (UKF) and combines inertial measurements, ultra-wideband time-of-flight ranging measurements, and actuator state information. We evaluate the effectiveness of our method on the SUPERball, a tensegrity-based planetary exploration robotic prototype. In particular, we conduct tests evaluating both the robot's success in estimating global position in relation to fixed ranging base stations during rolling maneuvers and its local behavior due to small-amplitude deformations induced by cable actuation.
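
    A toy 1-D analogue of this UKF sensor fusion, assuming the filterpy package is available; the constant-velocity process model, single ranging beacon, and noise levels are illustrative, not SUPERball's actual models.

      import numpy as np
      from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

      dt = 0.05
      base = 10.0                               # fixed ranging base station

      def fx(x, dt):                            # constant-velocity model
          return np.array([x[0] + dt * x[1], x[1]])

      def hx(x):                                # time-of-flight range
          return np.array([abs(base - x[0])])

      pts = MerweScaledSigmaPoints(n=2, alpha=1e-3, beta=2.0, kappa=0.0)
      ukf = UnscentedKalmanFilter(dim_x=2, dim_z=1, dt=dt,
                                  fx=fx, hx=hx, points=pts)
      ukf.x = np.array([0.0, 1.0])              # position, velocity
      ukf.R *= 0.05 ** 2                        # UWB ranging noise
      ukf.Q *= 1e-3

      for z in [9.95, 9.91, 9.84, 9.80, 9.77]:  # made-up range readings
          ukf.predict()
          ukf.update(np.array([z]))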

  15. Ground-based remote sensing of HDO/H2O ratio profiles: introduction and validation of an innovative retrieval approach

    NASA Astrophysics Data System (ADS)

    Schneider, M.; Hase, F.; Blumenstock, T.

    2006-10-01

    We propose an innovative approach for analysing ground-based FTIR spectra which allows us to detect variabilities of lower and middle/upper tropospheric HDO/H2O ratios. We show that the proposed method is superior to common approaches. We estimate that lower tropospheric HDO/H2O ratios can be detected with a noise-to-signal ratio of 15% and middle/upper tropospheric ratios with a noise-to-signal ratio of 50%. The method requires the inversion to be performed on a logarithmic scale and an inter-species constraint to be introduced. While common methods calculate the isotope ratio posterior to an independent, optimal estimation of the HDO and H2O profiles, the proposed approach is an optimal estimator for the ratio itself. We apply the innovative approach to spectra measured continuously during 15 months and present, for the first time, an annual cycle of tropospheric HDO/H2O ratio profiles as detected by ground-based measurements. Outliers in the detected middle/upper tropospheric ratios are interpreted by backward trajectories.

  16. Estimation of cylinder orientation in three-dimensional point cloud using angular distance-based optimization

    NASA Astrophysics Data System (ADS)

    Su, Yun-Ting; Hu, Shuowen; Bethel, James S.

    2017-05-01

    Light detection and ranging (LIDAR) has become a widely used tool in remote sensing for mapping, surveying, modeling, and a host of other applications. The motivation behind this work is the modeling of piping systems in industrial sites, where cylinders are the most common primitive or shape. We focus on cylinder parameter estimation in three-dimensional point clouds, proposing a mathematical formulation based on angular distance to determine the cylinder orientation. We demonstrate the accuracy and robustness of the technique on synthetically generated cylinder point clouds (where the true axis orientation is known) as well as on real LIDAR data of piping systems. The proposed algorithm is compared with a discrete space Hough transform-based approach as well as a continuous space inlier approach, which iteratively discards outlier points to refine the cylinder parameter estimates. Results show that the proposed method is more computationally efficient than the Hough transform approach and is more accurate than both the Hough transform approach and the inlier method.
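
    For comparison, the classical eigen-decomposition baseline is easy to sketch: for an ideal cylinder every surface normal is perpendicular to the axis, so the axis is the smallest-eigenvalue eigenvector of the normals' scatter matrix. This is a common baseline, not the paper's angular-distance objective, and the synthetic normals below are illustrative.

      import numpy as np

      def cylinder_axis_from_normals(normals):
          # Axis = null direction of the scatter matrix of surface normals
          N = np.asarray(normals, float)
          w, v = np.linalg.eigh(N.T @ N)        # ascending eigenvalues
          return v[:, 0]

      rng = np.random.default_rng(6)
      ang = rng.uniform(0.0, 2.0 * np.pi, 500)
      normals = np.column_stack([np.cos(ang), np.sin(ang),
                                 0.02 * rng.standard_normal(500)])
      axis = cylinder_axis_from_normals(normals)   # ~ [0, 0, +/-1]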

  17. Post-classification approaches to estimating change in forest area using remotely sensed auxiliary data.

    Treesearch

    Ronald E. McRoberts

    2014-01-01

    Multiple remote sensing-based approaches to estimating gross afforestation, gross deforestation, and net deforestation are possible. However, many of these approaches have severe data requirements in the form of long time series of remotely sensed data and/or large numbers of observations of land cover change to train classifiers and assess the accuracy of...

  18. Semiparametric Estimation of the Impacts of Longitudinal Interventions on Adolescent Obesity using Targeted Maximum-Likelihood: Accessible Estimation with the ltmle Package

    PubMed Central

    Decker, Anna L.; Hubbard, Alan; Crespi, Catherine M.; Seto, Edmund Y.W.; Wang, May C.

    2015-01-01

    While child and adolescent obesity is a serious public health concern, few studies have utilized parameters based on the causal inference literature to examine the potential impacts of early intervention. The purpose of this analysis was to estimate the causal effects of early interventions to improve physical activity and diet during adolescence on body mass index (BMI), a measure of adiposity, using improved techniques. The most widespread statistical method in studies of child and adolescent obesity is multi-variable regression, with the parameter of interest being the coefficient on the variable of interest. This approach does not appropriately adjust for time-dependent confounding, and the modeling assumptions may not always be met. An alternative parameter to estimate is one motivated by the causal inference literature, which can be interpreted as the mean change in the outcome under interventions to set the exposure of interest. The underlying data-generating distribution, upon which the estimator is based, can be estimated via a parametric or semi-parametric approach. Using data from the National Heart, Lung, and Blood Institute Growth and Health Study, a 10-year prospective cohort study of adolescent girls, we estimated the longitudinal impact of physical activity and diet interventions on 10-year BMI z-scores via a parameter motivated by the causal inference literature, using both parametric and semi-parametric estimation approaches. The parameters of interest were estimated with a recently released R package, ltmle, for estimating means based upon general longitudinal treatment regimes. We found that early, sustained intervention on total calories had a greater impact than a physical activity intervention or non-sustained interventions. Multivariable linear regression yielded inflated effect estimates compared to estimates based on targeted maximum-likelihood estimation and data-adaptive super learning. Our analysis demonstrates that sophisticated, optimal semiparametric estimation of longitudinal treatment-specific means via ltmle provides an incredibly powerful, yet easy-to-use tool, removing impediments for putting theory into practice. PMID:26046009

  19. A simulation-based approach for estimating premining water quality: Red Mountain Creek, Colorado

    USGS Publications Warehouse

    Runkel, Robert L.; Kimball, Briant A; Walton-Day, Katherine; Verplanck, Philip L.

    2007-01-01

    Regulatory agencies are often charged with the task of setting site-specific numeric water quality standards for impaired streams. This task is particularly difficult for streams draining highly mineralized watersheds with past mining activity. Baseline water quality data obtained prior to mining are often non-existent and application of generic water quality standards developed for unmineralized watersheds is suspect given the geology of most watersheds affected by mining. Various approaches have been used to estimate premining conditions, but none of the existing approaches rigorously consider the physical and geochemical processes that ultimately determine instream water quality. An approach based on simulation modeling is therefore proposed herein. The approach utilizes synoptic data that provide spatially-detailed profiles of concentration, streamflow, and constituent load along the study reach. This field data set is used to calibrate a reactive stream transport model that considers the suite of physical and geochemical processes that affect constituent concentrations during instream transport. A key input to the model is the quality and quantity of waters entering the study reach. This input is based on chemical analyses available from synoptic sampling and observed increases in streamflow along the study reach. Given the calibrated model, additional simulations are conducted to estimate premining conditions. In these simulations, the chemistry of mining-affected sources is replaced with the chemistry of waters that are thought to be unaffected by mining (proximal, premining analogues). The resultant simulations provide estimates of premining water quality that reflect both the reduced loads that were present prior to mining and the processes that affect these loads as they are transported downstream. This simulation-based approach is demonstrated using data from Red Mountain Creek, Colorado, a small stream draining a heavily-mined watershed. Model application to the premining problem for Red Mountain Creek is based on limited field reconnaissance and chemical analyses; additional field work and analyses may be needed to develop definitive, quantitative estimates of premining water quality.

  1. Rank-preserving regression: a more robust rank regression model against outliers.

    PubMed

    Chen, Tian; Kowalski, Jeanne; Chen, Rui; Wu, Pan; Zhang, Hui; Feng, Changyong; Tu, Xin M

    2016-08-30

    Mean-based semi-parametric regression models such as the popular generalized estimating equations are widely used to improve robustness of inference over parametric models. Unfortunately, such models are quite sensitive to outlying observations. The Wilcoxon-score-based rank regression (RR) provides estimates that are more robust to outliers than generalized estimating equations. However, the RR and its extensions do not sufficiently address missing data arising in longitudinal studies. In this paper, we propose a new approach to address outliers under a different framework based on the functional response models. This functional-response-model-based alternative not only addresses limitations of the RR and its extensions for longitudinal data, but, with its rank-preserving property, even provides more robust estimates than these alternatives. The proposed approach is illustrated with both real and simulated data. Copyright © 2016 John Wiley & Sons, Ltd.

  2. Combined Yamamoto approach for simultaneous estimation of adsorption isotherm and kinetic parameters in ion-exchange chromatography.

    PubMed

    Rüdt, Matthias; Gillet, Florian; Heege, Stefanie; Hitzler, Julian; Kalbfuss, Bernd; Guélat, Bertrand

    2015-09-25

    Application of model-based design is appealing to support the development of protein chromatography in the biopharmaceutical industry. However, the required efforts for parameter estimation are frequently perceived as time-consuming and expensive. In order to speed up this work, a new parameter estimation approach for modelling ion-exchange chromatography in linear conditions was developed. It aims at reducing the time and protein demand for the model calibration. The method combines the estimation of kinetic and thermodynamic parameters based on the simultaneous variation of the gradient slope and the residence time in a set of five linear gradient elutions. The parameters are estimated from a Yamamoto plot and a gradient-adjusted Van Deemter plot. The combined approach increases the information extracted per experiment compared to the individual methods. As a proof of concept, the combined approach was successfully applied for a monoclonal antibody on a cation-exchanger and for a Fc-fusion protein on an anion-exchange resin. The individual parameter estimations for the mAb confirmed that the new approach maintained the accuracy of the usual Yamamoto and Van Deemter plots. In the second case, offline size-exclusion chromatography was performed in order to estimate the thermodynamic parameters of an impurity (high molecular weight species) simultaneously with the main product. Finally, the parameters obtained from the combined approach were used in a lumped kinetic model to simulate the chromatography runs. The simulated chromatograms obtained for a wide range of gradient lengths and residence times showed only small deviations compared to the experimental data. Copyright © 2015 Elsevier B.V. All rights reserved.

  3. Software risk estimation and management techniques at JPL

    NASA Technical Reports Server (NTRS)

    Hihn, J.; Lum, K.

    2002-01-01

    In this talk we discuss how uncertainty has been incorporated into the JPL software model to produce probabilistic-based estimates, how risk is addressed, and how cost risk is currently being explored via a variety of approaches, from traditional risk lists to detailed WBS-based risk estimates to the Defect Detection and Prevention (DDP) tool.

  4. Methodology for the Model-based Small Area Estimates of Cancer Risk Factors and Screening Behaviors - Small Area Estimates

    Cancer.gov

    This model-based approach uses data from both the Behavioral Risk Factor Surveillance System (BRFSS) and the National Health Interview Survey (NHIS) to produce estimates of the prevalence rates of cancer risk factors and screening behaviors at the state, health service area, and county levels.

  5. Body Weight Estimation for Dose-Finding and Health Monitoring of Lying, Standing and Walking Patients Based on RGB-D Data

    PubMed Central

    Pfitzner, Christian; May, Stefan; Nüchter, Andreas

    2018-01-01

    This paper describes the estimation of the body weight of a person in front of an RGB-D camera. A survey of different methods for body weight estimation based on depth sensors is given. First, an estimation of people standing in front of a camera is presented. Second, an approach based on a stream of depth images is used to obtain the body weight of a person walking towards a sensor. The algorithm first extracts features from a point cloud and forwards them to an artificial neural network (ANN) to obtain an estimation of body weight. Besides the algorithm for the estimation, this paper further presents an open-access dataset based on measurements from a trauma room in a hospital as well as data from visitors of a public event. In total, the dataset contains 439 measurements. The article illustrates the efficiency of the approach with experiments with persons lying down in a hospital, standing persons, and walking persons. Applicable scenarios for the presented algorithm are body weight-related dosing of emergency patients. PMID:29695098

  6. Bayesian estimation of seasonal course of canopy leaf area index from hyperspectral satellite data

    NASA Astrophysics Data System (ADS)

    Varvia, Petri; Rautiainen, Miina; Seppänen, Aku

    2018-03-01

    In this paper, Bayesian inversion of a physically-based forest reflectance model is investigated to estimate boreal forest canopy leaf area index (LAI) from EO-1 Hyperion hyperspectral data. The data consist of multiple forest stands with different species compositions and structures, imaged in three phases of the growing season. The Bayesian estimates of canopy LAI are compared to reference estimates based on a spectral vegetation index. The forest reflectance model also contains other unknown variables in addition to LAI, for example leaf single scattering albedo and understory reflectance. In the Bayesian approach, these variables are estimated simultaneously with LAI. The feasibility and seasonal variation of these estimates are also examined. Credible intervals for the estimates are calculated and evaluated. The results show that the Bayesian inversion approach is significantly better than using a comparable spectral vegetation index regression.

  7. Robust Methods for Moderation Analysis with a Two-Level Regression Model.

    PubMed

    Yang, Miao; Yuan, Ke-Hai

    2016-01-01

    Moderation analysis has many applications in social sciences. Most widely used estimation methods for moderation analysis assume that errors are normally distributed and homoscedastic. When these assumptions are not met, the results from a classical moderation analysis can be misleading. For more reliable moderation analysis, this article proposes two robust methods with a two-level regression model when the predictors do not contain measurement error. One method is based on maximum likelihood with Student's t distribution and the other is based on M-estimators with Huber-type weights. An algorithm for obtaining the robust estimators is developed. Consistent estimates of standard errors of the robust estimators are provided. The robust approaches are compared against normal-distribution-based maximum likelihood (NML) with respect to power and accuracy of parameter estimates through a simulation study. Results show that the robust approaches outperform NML under various distributional conditions. Application of the robust methods is illustrated through a real data example. An R program is developed and documented to facilitate the application of the robust methods.
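
    The Huber-type branch of the idea is easy to illustrate in a one-level setting with scikit-learn (the paper embeds it in a two-level moderation model, which this sketch does not reproduce); the data are synthetic.

      import numpy as np
      from sklearn.linear_model import HuberRegressor, LinearRegression

      rng = np.random.default_rng(7)
      X = rng.standard_normal((200, 2))
      y = 1.0 + X @ np.array([2.0, -1.0]) + 0.3 * rng.standard_normal(200)
      y[:10] += 15.0                            # gross outliers

      ols = LinearRegression().fit(X, y)        # normality-based analogue
      hub = HuberRegressor().fit(X, y)          # Huber-type M-estimation
      print(ols.coef_, hub.coef_)               # hub stays near (2, -1)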

  8. Efficient estimation of the attributable fraction when there are monotonicity constraints and interactions.

    PubMed

    Traskin, Mikhail; Wang, Wei; Ten Have, Thomas R; Small, Dylan S

    2013-01-01

    The population attributable fraction (PAF) for an exposure is the fraction of disease cases in a population that can be attributed to that exposure. One method of estimating the PAF involves estimating the probability of having the disease given the exposure and confounding variables. In many settings, the exposure will interact with the confounders, and the confounders will interact with each other. Also, in many settings, the probability of having the disease is thought, based on subject matter knowledge, to be a monotone increasing function of the exposure and possibly of some of the confounders. We develop an efficient approach for estimating logistic regression models with interactions and monotonicity constraints, and apply this approach to estimating the PAF. Our approach produces substantially more accurate estimates of the PAF in some settings than the usual approach which uses logistic regression without monotonicity constraints.
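
    A minimal unconstrained sketch of the PAF computation from a fitted risk model (the paper's contribution, enforcing monotonicity constraints and interactions in the fit, is not reproduced here); the data and column layout are hypothetical.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      def estimate_paf(model, X, exposure_col):
          # PAF = (P(D) - P0) / P(D): P0 is the mean predicted risk after
          # setting everyone to the unexposed reference level.
          p_obs = model.predict_proba(X)[:, 1].mean()
          X0 = X.copy()
          X0[:, exposure_col] = 0.0
          p0 = model.predict_proba(X0)[:, 1].mean()
          return (p_obs - p0) / p_obs

      rng = np.random.default_rng(4)
      X = rng.standard_normal((5000, 4))
      X[:, 0] = rng.integers(0, 2, 5000)            # binary exposure
      logit = -1.0 + 1.2 * X[:, 0] + 0.4 * X[:, 1]  # confounded risk model
      y = (rng.uniform(size=5000) < 1.0 / (1.0 + np.exp(-logit))).astype(int)
      model = LogisticRegression(max_iter=1000).fit(X, y)
      paf = estimate_paf(model, X, exposure_col=0)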

  9. Parameter Estimation for Geoscience Applications Using a Measure-Theoretic Approach

    NASA Astrophysics Data System (ADS)

    Dawson, C.; Butler, T.; Mattis, S. A.; Graham, L.; Westerink, J. J.; Vesselinov, V. V.; Estep, D.

    2016-12-01

    Effective modeling of complex physical systems arising in the geosciences is dependent on knowing parameters which are often difficult or impossible to measure in situ. In this talk we focus on two such problems, estimating parameters for groundwater flow and contaminant transport, and estimating parameters within a coastal ocean model. The approach we will describe, proposed by collaborators D. Estep, T. Butler and others, is based on a novel stochastic inversion technique based on measure theory. In this approach, given a probability space on certain observable quantities of interest, one searches for the sets of highest probability in parameter space which give rise to these observables. When viewed as mappings between sets, the stochastic inversion problem is well-posed in certain settings, but there are computational challenges related to the set construction. We will focus the talk on estimating scalar parameters and fields in a contaminant transport setting, and in estimating bottom friction in a complicated near-shore coastal application.

  10. Combining computer adaptive testing technology with cognitively diagnostic assessment.

    PubMed

    McGlohen, Meghan; Chang, Hua-Hua

    2008-08-01

    A major advantage of computerized adaptive testing (CAT) is that it allows the test to home in on an examinee's ability level in an interactive manner. The aim of the new area of cognitive diagnosis is to provide information about specific content areas in which an examinee needs help. The goal of this study was to combine the benefit of specific feedback from cognitively diagnostic assessment with the advantages of CAT. In this study, three approaches to combining these were investigated: (1) item selection based on the traditional ability level estimate (theta), (2) item selection based on the attribute mastery feedback provided by cognitively diagnostic assessment (alpha), and (3) item selection based on both the traditional ability level estimate (theta) and the attribute mastery feedback provided by cognitively diagnostic assessment (alpha). The results from these three approaches were compared for theta estimation accuracy, attribute mastery estimation accuracy, and item exposure control. The theta- and alpha-based condition outperformed the alpha-based condition regarding theta estimation, attribute mastery pattern estimation, and item exposure control. Both the theta-based condition and the theta- and alpha-based condition performed similarly with regard to theta estimation, attribute mastery estimation, and item exposure control, but the theta- and alpha-based condition has an additional advantage in that it uses the shadow test method, which allows the administrator to incorporate additional constraints in the item selection process, such as content balancing, item type constraints, and so forth, and also to select items on the basis of both the current theta and alpha estimates, which can be built on top of existing 3PL testing programs.
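
    The theta-based selection rule can be sketched for a 2PL item bank: pick the unused item with maximum Fisher information at the current ability estimate, then re-estimate theta. The alpha-based and shadow-test variants add attribute-mastery feedback and constraints on top of this loop; the item parameters below are synthetic.

      import numpy as np

      def p_2pl(theta, a, b):
          # 2PL response probability
          return 1.0 / (1.0 + np.exp(-a * (theta - b)))

      def fisher_info(theta, a, b):
          p = p_2pl(theta, a, b)
          return a ** 2 * p * (1.0 - p)

      def select_item(theta_hat, a, b, administered):
          info = fisher_info(theta_hat, a, b)
          if administered:
              info[np.array(administered)] = -np.inf
          return int(np.argmax(info))

      def update_theta(responses, items, a, b):
          # Grid-search ML estimate of theta from the responses so far
          grid = np.linspace(-4.0, 4.0, 161)
          ll = np.zeros_like(grid)
          for r, j in zip(responses, items):
              p = p_2pl(grid, a[j], b[j])
              ll += np.log(p) if r else np.log(1.0 - p)
          return grid[np.argmax(ll)]

      rng = np.random.default_rng(8)
      a = rng.uniform(0.8, 2.0, 50)                 # synthetic discriminations
      b = np.linspace(-2.5, 2.5, 50)                # synthetic difficulties
      theta_hat, used, resp = 0.0, [], []
      for _ in range(10):                           # a 10-item adaptive test
          j = select_item(theta_hat, a, b, used)
          used.append(j)
          resp.append(rng.uniform() < p_2pl(0.7, a[j], b[j]))  # true theta 0.7
          theta_hat = update_theta(resp, used, a, b)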

  11. Disaster debris estimation using high-resolution polarimetric stereo-SAR

    NASA Astrophysics Data System (ADS)

    Koyama, Christian N.; Gokon, Hideomi; Jimbo, Masaru; Koshimura, Shunichi; Sato, Motoyuki

    2016-10-01

    This paper addresses the problem of debris estimation which is one of the most important initial challenges in the wake of a disaster like the Great East Japan Earthquake and Tsunami. Reasonable estimates of the debris have to be made available to decision makers as quickly as possible. Current approaches to obtain this information are far from being optimal as they usually rely on manual interpretation of optical imagery. We have developed a novel approach for the estimation of tsunami debris pile heights and volumes for improved emergency response. The method is based on a stereo-synthetic aperture radar (stereo-SAR) approach for very high-resolution polarimetric SAR. An advanced gradient-based optical-flow estimation technique is applied for optimal image coregistration of the low-coherence non-interferometric data resulting from the illumination from opposite directions and in different polarizations. By applying model based decomposition of the coherency matrix, only the odd bounce scattering contributions are used to optimize echo time computation. The method exclusively considers the relative height differences from the top of the piles to their base to achieve a very fine resolution in height estimation. To define the base, a reference point on non-debris-covered ground surface is located adjacent to the debris pile targets by exploiting the polarimetric scattering information. The proposed technique is validated using in situ data of real tsunami debris taken on a temporary debris management site in the tsunami affected area near Sendai city, Japan. The estimated height error is smaller than 0.6 m RMSE. The good quality of derived pile heights allows for a voxel-based estimation of debris volumes with a RMSE of 1099 m3. Advantages of the proposed method are fast computation time, and robust height and volume estimation of debris piles without the need for pre-event data or auxiliary information like DEM, topographic maps or GCPs.

  12. Narrow-sense heritability estimation of complex traits using identity-by-descent information.

    PubMed

    Evans, Luke M; Tahmasbi, Rasool; Jones, Matt; Vrieze, Scott I; Abecasis, Gonçalo R; Das, Sayantan; Bjelland, Douglas W; de Candia, Teresa R; Yang, Jian; Goddard, Michael E; Visscher, Peter M; Keller, Matthew C

    2018-03-28

    Heritability is a fundamental parameter in genetics. Traditional estimates based on family or twin studies can be biased due to shared environmental or non-additive genetic variance. Alternatively, those based on genotyped or imputed variants typically underestimate narrow-sense heritability contributed by rare or otherwise poorly tagged causal variants. Identical-by-descent (IBD) segments of the genome share all variants between pairs of chromosomes except new mutations that have arisen since the last common ancestor. Therefore, relating phenotypic similarity to degree of IBD sharing among classically unrelated individuals is an appealing approach to estimating the near full additive genetic variance while possibly avoiding biases that can occur when modeling close relatives. We applied an IBD-based approach (GREML-IBD) to estimate heritability in unrelated individuals using phenotypic simulation with thousands of whole-genome sequences across a range of stratification, polygenicity levels, and the minor allele frequencies of causal variants (CVs). In simulations, the IBD-based approach produced unbiased heritability estimates, even when CVs were extremely rare, although precision was low. However, population stratification and non-genetic familial environmental effects shared across generations led to strong biases in IBD-based heritability. We used data on two traits in ~120,000 people from the UK Biobank to demonstrate that, depending on the trait and possible confounding environmental effects, GREML-IBD can be applied to very large genetic datasets to infer the contribution of very rare variants lost using other methods. However, we observed apparent biases in these real data, suggesting that more work may be required to understand and mitigate factors that influence IBD-based heritability estimates.

  13. Estimation of winter wheat canopy nitrogen density at different growth stages based on Multi-LUT approach

    NASA Astrophysics Data System (ADS)

    Li, Zhenhai; Li, Na; Li, Zhenhong; Wang, Jianwen; Liu, Chang

    2017-10-01

    Rapid real-time monitoring of wheat nitrogen (N) status is crucial for precision N management during wheat growth. In this study, a Multi Lookup Table (Multi-LUT) approach, based on N-PROSAIL model parameter settings at different growth stages, was constructed to estimate canopy N density (CND) in winter wheat. The results showed that the estimated CND was in line with measured CND, with a determination coefficient (R2) of 0.80 and a corresponding root mean square error (RMSE) of 1.16 g m-2. Estimation took only 6 ms per sample on a test machine with an Intel(R) Core(TM) i5-2430 quad-core CPU at 2.40 GHz. These results confirm the potential of the Multi-LUT approach for CND retrieval in winter wheat at different growth stages and under variable climatic conditions.
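
    The Multi-LUT logic reduces to nearest-spectrum search over a per-stage lookup table; the forward model below is a hypothetical linear stand-in for N-PROSAIL, so only the retrieval mechanics are illustrated.

      import numpy as np

      def lut_invert(obs, lut, grid):
          # Pick the grid entry whose simulated spectrum best matches (RMSE)
          cost = np.sqrt(((lut - obs) ** 2).mean(axis=1))
          return grid[np.argmin(cost)]

      # Hypothetical stand-in for the N-PROSAIL forward model: canopy
      # reflectance as a function of canopy N density (CND), per stage.
      wl = np.linspace(400.0, 900.0, 50)
      def toy_model(cnd):
          chl_dip = np.exp(-((wl - 670.0) / 60.0) ** 2)
          return 0.10 + 0.0002 * (wl - 400.0) - 0.012 * cnd * chl_dip

      grid = np.linspace(0.0, 6.0, 241)                # CND candidates, g m-2
      lut = np.array([toy_model(c) for c in grid])     # one LUT per stage

      rng = np.random.default_rng(5)
      obs = toy_model(2.3) + 0.001 * rng.standard_normal(wl.size)
      cnd_hat = lut_invert(obs, lut, grid)             # close to 2.3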

  14. Sparse EEG/MEG source estimation via a group lasso

    PubMed Central

    Lim, Michael; Ales, Justin M.; Cottereau, Benoit R.; Hastie, Trevor

    2017-01-01

    Non-invasive recordings of human brain activity through electroencephalography (EEG) or magnetoencelphalography (MEG) are of value for both basic science and clinical applications in sensory, cognitive, and affective neuroscience. Here we introduce a new approach to estimating the intra-cranial sources of EEG/MEG activity measured from extra-cranial sensors. The approach is based on the group lasso, a sparse-prior inverse that has been adapted to take advantage of functionally-defined regions of interest for the definition of physiologically meaningful groups within a functionally-based common space. Detailed simulations using realistic source-geometries and data from a human Visual Evoked Potential experiment demonstrate that the group-lasso method has improved performance over traditional ℓ2 minimum-norm methods. In addition, we show that pooling source estimates across subjects over functionally defined regions of interest results in improvements in the accuracy of source estimates for both the group-lasso and minimum-norm approaches. PMID:28604790
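
    A proximal-gradient sketch of a group-lasso inverse, with one group per functionally defined ROI; the leadfield and groups are synthetic, and the common per-group weight sqrt(|g|) is omitted for brevity.

      import numpy as np

      def group_soft_threshold(v, t):
          # Proximal operator of the group penalty: shrinks the whole
          # group's norm, zeroing out sources in ROIs that are not needed.
          nrm = np.linalg.norm(v)
          return np.zeros_like(v) if nrm <= t else (1.0 - t / nrm) * v

      def group_lasso_inverse(L, y, groups, lam, n_iter=300):
          # min_x 0.5 ||L x - y||^2 + lam * sum_g ||x_g||_2
          step = 1.0 / np.linalg.norm(L, 2) ** 2
          x = np.zeros(L.shape[1])
          for _ in range(n_iter):
              z = x - step * (L.T @ (L @ x - y))
              for g in groups:
                  x[g] = group_soft_threshold(z[g], lam * step)
          return x

      # Toy leadfield: 3 ROIs of 4 sources each; only the middle ROI active
      rng = np.random.default_rng(9)
      L = rng.standard_normal((32, 12))
      x_true = np.zeros(12)
      x_true[4:8] = rng.standard_normal(4)
      y = L @ x_true + 0.05 * rng.standard_normal(32)
      groups = [np.arange(0, 4), np.arange(4, 8), np.arange(8, 12)]
      x_hat = group_lasso_inverse(L, y, groups, lam=1.0)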

  15. Parameter Estimation and Model Selection for Indoor Environments Based on Sparse Observations

    NASA Astrophysics Data System (ADS)

    Dehbi, Y.; Loch-Dehbi, S.; Plümer, L.

    2017-09-01

    This paper presents a novel method for the parameter estimation and model selection for the reconstruction of indoor environments based on sparse observations. While most approaches for the reconstruction of indoor models rely on dense observations, we predict scenes of the interior with high accuracy in the absence of indoor measurements. We use a model-based top-down approach and incorporate strong but profound prior knowledge. The latter includes probability density functions for model parameters and sparse observations such as room areas and the building footprint. The floorplan model is characterized by linear and bi-linear relations with discrete and continuous parameters. We focus on the stochastic estimation of model parameters based on a topological model derived by combinatorial reasoning in a first step. A Gauss-Markov model is applied for estimation and simulation of the model parameters. Symmetries are represented and exploited during the estimation process. Background knowledge as well as observations are incorporated in a maximum likelihood estimation and model selection is performed with AIC/BIC. The likelihood is also used for the detection and correction of potential errors in the topological model. Estimation results are presented and discussed.

  16. Force estimation from OCT volumes using 3D CNNs.

    PubMed

    Gessert, Nils; Beringhoff, Jens; Otte, Christoph; Schlaefer, Alexander

    2018-07-01

    Estimating the interaction forces of instruments and tissue is of interest, particularly to provide haptic feedback during robot-assisted minimally invasive interventions. Different approaches based on external and integrated force sensors have been proposed. These are hampered by friction, sensor size, and sterilizability. We investigate a novel approach to estimate the force vector directly from optical coherence tomography image volumes. We introduce a novel Siamese 3D CNN architecture. The network takes an undeformed reference volume and a deformed sample volume as an input and outputs the three components of the force vector. We employ a deep residual architecture with bottlenecks for increased efficiency. We compare the Siamese approach to methods using difference volumes and two-dimensional projections. Data were generated using a robotic setup to obtain ground-truth force vectors for silicon tissue phantoms as well as porcine tissue. Our method achieves a mean average error of [Formula: see text] when estimating the force vector. Our novel Siamese 3D CNN architecture outperforms single-path methods that achieve a mean average error of [Formula: see text]. Moreover, the use of volume data leads to significantly higher performance compared to processing only surface information, which achieves a mean average error of [Formula: see text]. Based on the tissue dataset, our method shows good generalization between different subjects. We propose a novel image-based force estimation method using optical coherence tomography. We illustrate that capturing the deformation of subsurface structures substantially improves force estimation. Our approach can provide accurate force estimates in surgical setups when using intraoperative optical coherence tomography.

  17. A Complementary Note to 'A Lag-1 Smoother Approach to System-Error Estimation': The Intrinsic Limitations of Residual Diagnostics

    NASA Technical Reports Server (NTRS)

    Todling, Ricardo

    2015-01-01

    Recently, this author studied an approach to the estimation of system error based on combining observation residuals derived from a sequential filter and fixed lag-1 smoother. While extending the methodology to a variational formulation, experimenting with simple models and making sure consistency was found between the sequential and variational formulations, the limitations of the residual-based approach came clearly to the surface. This note uses the sequential assimilation application to simple nonlinear dynamics to highlight the issue. Only when some of the underlying error statistics are assumed known is it possible to estimate the unknown component. In general, when considerable uncertainties exist in the underlying statistics as a whole, attempts to obtain separate estimates of the various error covariances are bound to lead to misrepresentation of errors. The conclusions are particularly relevant to present-day attempts to estimate observation-error correlations from observation residual statistics. A brief illustration of the issue is also provided by comparing estimates of error correlations derived from a quasi-operational assimilation system and a corresponding Observing System Simulation Experiments framework.

  18. A learning framework for age rank estimation based on face images with scattering transform.

    PubMed

    Chang, Kuang-Yu; Chen, Chu-Song

    2015-03-01

    This paper presents a cost-sensitive ordinal hyperplanes ranking algorithm for human age estimation based on face images. The proposed approach exploits relative-order information among the age labels for rank prediction. In our approach, the age rank is obtained by aggregating a series of binary classification results, where cost sensitivities among the labels are introduced to improve the aggregating performance. In addition, we give a theoretical analysis on designing the cost of individual binary classifier so that the misranking cost can be bounded by the total misclassification costs. An efficient descriptor, scattering transform, which scatters the Gabor coefficients and pooled with Gaussian smoothing in multiple layers, is evaluated for facial feature extraction. We show that this descriptor is a generalization of conventional bioinspired features and is more effective for face-based age inference. Experimental results demonstrate that our method outperforms the state-of-the-art age estimation approaches.
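
    The rank-by-aggregation idea can be sketched with plain logistic classifiers as the binary hyperplanes; the paper's cost sensitivities and scattering-transform features are omitted here, and the features and rank range below are random stand-ins.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      class OrdinalRanker:
          # K-1 binary subproblems "is the rank greater than k?"; the
          # predicted rank aggregates the binary answers. (The paper
          # additionally weights each subproblem by misranking costs.)
          def fit(self, X, ranks):
              self.base = int(ranks.min())
              self.ks = np.arange(self.base, int(ranks.max()))
              self.clfs = [LogisticRegression(max_iter=1000)
                           .fit(X, (ranks > k).astype(int)) for k in self.ks]
              return self

          def predict(self, X):
              votes = np.sum([c.predict(X) for c in self.clfs], axis=0)
              return self.base + votes

      rng = np.random.default_rng(10)
      X = rng.standard_normal((600, 16))             # stand-in face features
      ranks = np.clip(np.round(4 + 2 * X[:, 0] + rng.normal(0, 1, 600)),
                      1, 7).astype(int)              # synthetic ranks 1..7
      pred = OrdinalRanker().fit(X, ranks).predict(X[:5])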

  19. Principal axes estimation using the vibration modes of physics-based deformable models.

    PubMed

    Krinidis, Stelios; Chatzis, Vassilios

    2008-06-01

    This paper addresses the issue of accurate, effective, computationally efficient, fast, and fully automated 2-D object orientation and scaling factor estimation. The object orientation is calculated using object principal axes estimation. The approach relies on the object's frequency-based features. The frequency-based features used by the proposed technique are extracted by a 2-D physics-based deformable model that parameterizes the object's shape. The method was evaluated on synthetic and real images. The experimental results demonstrate the accuracy of the method in both the orientation and scaling estimations.

  1. Microwave implementation of two-source energy balance approach for estimating evapotranspiration

    USDA-ARS?s Scientific Manuscript database

    A newly developed microwave (MW) land surface temperature (LST) product is used to effectively substitute thermal infrared (TIR) based LST in the two-source energy balance approach (TSEB) for estimating ET from space. This TSEB land surface scheme, used in the Atmosphere Land Exchange Inverse (ALEXI...

  2. Invited commentary: Lost in estimation--searching for alternatives to markov chains to fit complex Bayesian models.

    PubMed

    Molitor, John

    2012-03-01

    Bayesian methods have seen an increase in popularity in a wide variety of scientific fields, including epidemiology. One of the main reasons for their widespread application is the power of the Markov chain Monte Carlo (MCMC) techniques generally used to fit these models. As a result, researchers often implicitly associate Bayesian models with MCMC estimation procedures. However, Bayesian models do not always require Markov-chain-based methods for parameter estimation. This is important, as MCMC estimation methods, while generally quite powerful, are complex and computationally expensive and suffer from convergence problems related to the manner in which they generate correlated samples used to estimate probability distributions for parameters of interest. In this issue of the Journal, Cole et al. (Am J Epidemiol. 2012;175(5):368-375) present an interesting paper that discusses non-Markov-chain-based approaches to fitting Bayesian models. These methods, though limited, can overcome some of the problems associated with MCMC techniques and promise to provide simpler approaches to fitting Bayesian models. Applied researchers will find these estimation approaches intuitively appealing and will gain a deeper understanding of Bayesian models through their use. However, readers should be aware that other non-Markov-chain-based methods are currently in active development and have been widely published in other fields.

  3. Variance Estimation for NAEP Data Using a Resampling-Based Approach: An Application of Cognitive Diagnostic Models. Research Report. ETS RR-10-26

    ERIC Educational Resources Information Center

    Hsieh, Chueh-an; Xu, Xueli; von Davier, Matthias

    2010-01-01

    This paper presents an application of a jackknifing approach to variance estimation of ability inferences for groups of students, using a multidimensional discrete model for item response data. The data utilized to demonstrate the approach come from the National Assessment of Educational Progress (NAEP). In contrast to the operational approach…

  4. A combined telemetry - tag return approach to estimate fishing and natural mortality rates of an estuarine fish

    USGS Publications Warehouse

    Bacheler, N.M.; Buckel, J.A.; Hightower, J.E.; Paramore, L.M.; Pollock, K.H.

    2009-01-01

    A joint analysis of tag return and telemetry data should improve estimates of mortality rates for exploited fishes; however, the combined approach has thus far only been tested in terrestrial systems. We tagged subadult red drum (Sciaenops ocellatus) with conventional tags and ultrasonic transmitters over 3 years in coastal North Carolina, USA, to test the efficacy of the combined telemetry - tag return approach. There was a strong seasonal pattern to monthly fishing mortality rate (F) estimates from both conventional and telemetry tags; highest F values occurred in fall months and lowest levels occurred during winter. Although monthly F values were similar in pattern and magnitude between conventional tagging and telemetry, information on F in the combined model came primarily from conventional tags. The estimated natural mortality rate (M) in the combined model was low (estimated annual rate ± standard error: 0.04 ± 0.04) and was based primarily upon the telemetry approach. Using high-reward tagging, we estimated different tag reporting rates for state agency and university tagging programs. The combined telemetry - tag return approach can be an effective approach for estimating F and M as long as several key assumptions of the model are met.

  5. Cuff-less PPG based continuous blood pressure monitoring: a smartphone based approach.

    PubMed

    Gaurav, Aman; Maheedhar, Maram; Tiwari, Vijay N; Narayanan, Rangavittal

    2016-08-01

    Cuff-less estimation of systolic (SBP) and diastolic (DBP) blood pressure is an efficient approach for non-invasive and continuous monitoring of an individual's vitals. Although pulse transit time (PTT) based approaches have been successful in estimating systolic and diastolic blood pressures to a reasonable degree of accuracy, there is still scope for improvement. Moreover, the PTT approach requires data from sensors placed at two different locations, along with individual calibration of physiological parameters, for deriving correct estimates of systolic and diastolic blood pressure (BP), and hence is not suitable for smartphone deployment. Heart Rate Variability (HRV) is one of the most extensively used non-invasive parameters for assessing the cardiovascular autonomic nervous system and is known to be indirectly associated with SBP and DBP. In this work, we propose a novel method to extract a comprehensive set of features by combining PPG-signal-based and HRV-related features using a single PPG sensor. Further, these features are fed into a DBP-feedback-based combinatorial neural network model to arrive at a common weighted average output of DBP and subsequently SBP. Our results show that using this approach, an accuracy of ±6.8 mmHg for SBP and ±4.7 mmHg for DBP is achievable on 1,750,000 pulses extracted from a public database (comprising 3000 people). Since most smartphones are now equipped with a PPG sensor, mobile-based cuff-less BP estimation will enable users to monitor their BP as a vital parameter on demand. This will open new avenues towards the development of pervasive and continuous BP monitoring systems, leading to early detection and prevention of cardiovascular diseases.

  6. Accurate Attitude Estimation Using ARS under Conditions of Vehicle Movement Based on Disturbance Acceleration Adaptive Estimation and Correction

    PubMed Central

    Xing, Li; Hang, Yijun; Xiong, Zhi; Liu, Jianye; Wan, Zhong

    2016-01-01

    This paper describes a disturbance acceleration adaptive estimate and correction approach for an attitude reference system (ARS) so as to improve the attitude estimate precision under vehicle movement conditions. The proposed approach depends on a Kalman filter, where the attitude error, the gyroscope zero offset error and the disturbance acceleration error are estimated. By switching the filter decay coefficient of the disturbance acceleration model in different acceleration modes, the disturbance acceleration is adaptively estimated and corrected, and then the attitude estimate precision is improved. The filter was tested in three different disturbance acceleration modes (non-acceleration, vibration-acceleration and sustained-acceleration mode, respectively) by digital simulation. Moreover, the proposed approach was tested in a kinematic vehicle experiment as well. Using the designed simulations and kinematic vehicle experiments, it has been shown that the disturbance acceleration of each mode can be accurately estimated and corrected. Moreover, compared with the complementary filter, the experimental results have explicitly demonstrated the proposed approach further improves the attitude estimate precision under vehicle movement conditions. PMID:27754469

  7. Accurate Attitude Estimation Using ARS under Conditions of Vehicle Movement Based on Disturbance Acceleration Adaptive Estimation and Correction.

    PubMed

    Xing, Li; Hang, Yijun; Xiong, Zhi; Liu, Jianye; Wan, Zhong

    2016-10-16

    This paper describes a disturbance acceleration adaptive estimate and correction approach for an attitude reference system (ARS) so as to improve the attitude estimate precision under vehicle movement conditions. The proposed approach depends on a Kalman filter, where the attitude error, the gyroscope zero offset error and the disturbance acceleration error are estimated. By switching the filter decay coefficient of the disturbance acceleration model in different acceleration modes, the disturbance acceleration is adaptively estimated and corrected, and then the attitude estimate precision is improved. The filter was tested in three different disturbance acceleration modes (non-acceleration, vibration-acceleration and sustained-acceleration mode, respectively) by digital simulation. Moreover, the proposed approach was tested in a kinematic vehicle experiment as well. Using the designed simulations and kinematic vehicle experiments, it has been shown that the disturbance acceleration of each mode can be accurately estimated and corrected. Moreover, compared with the complementary filter, the experimental results have explicitly demonstrated the proposed approach further improves the attitude estimate precision under vehicle movement conditions.
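
    The core of the method in the two records above is a Kalman filter whose disturbance-acceleration state decays at a rate switched by the detected acceleration mode. A minimal scalar sketch of one predict-update cycle; the mode names, decay coefficients, and noise values are illustrative assumptions, not the paper's tuning:

```python
import numpy as np

def update_disturbance_estimate(a_meas, a_dist, P, mode, dt=0.01):
    """One Kalman step for a scalar disturbance acceleration modeled
    as a first-order Markov process whose decay coefficient beta is
    switched by acceleration mode."""
    beta = {"non-acceleration": 50.0, "vibration": 10.0, "sustained": 0.5}[mode]
    phi = np.exp(-beta * dt)      # state transition of the Markov process
    q = 1.0 * (1.0 - phi ** 2)    # process noise keeps steady-state variance
    r = 0.05                      # accelerometer noise variance

    # Predict, then correct with the measured specific-force residual.
    a_pred = phi * a_dist
    P = phi ** 2 * P + q
    K = P / (P + r)
    a_dist = a_pred + K * (a_meas - a_pred)
    P = (1.0 - K) * P
    return a_dist, P

# Toy usage: a burst of vibration around a zero-mean disturbance.
a, P = 0.0, 1.0
for a_meas in [0.4, -0.3, 0.5, -0.2]:
    a, P = update_disturbance_estimate(a_meas, a, P, "vibration")
```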

  8. Remaining lifetime modeling using State-of-Health estimation

    NASA Astrophysics Data System (ADS)

    Beganovic, Nejra; Söffker, Dirk

    2017-08-01

    Technical systems and their components undergo gradual degradation over time. Continuous degradation is reflected in decreased system reliability and unavoidably leads to system failure. Continuous evaluation of State-of-Health (SoH) is therefore essential, both to guarantee at least the predefined lifetime specified by the manufacturer and, ideally, to extend it. A precondition for lifetime extension is accurate estimation of SoH as well as estimation and prediction of Remaining Useful Lifetime (RUL). For this purpose, lifetime models describing the relation between system/component degradation and consumed lifetime have to be established. This contribution discusses the modeling and selection of suitable lifetime models from a database based on current SoH conditions. The main contribution of this paper is the development of new modeling strategies capable of describing complex relations between measurable system variables, related system degradation, and RUL. Two approaches, with their accompanying advantages and disadvantages, are introduced and compared. Both approaches can model stochastic aging processes of a system by simultaneously adapting RUL models to the current SoH. The first approach requires a priori knowledge about aging processes in the system and an accurate estimation of SoH; the SoH estimate is conditioned on tracking the damage actually accumulated in the system, so that particular model parameters are defined according to a priori assumptions about the system's aging. Prediction accuracy in this case is highly dependent on accurate SoH estimation, and the model has a high number of degrees of freedom. The second approach does not require a priori knowledge about the system's aging, as model parameters are defined by a multi-objective optimization procedure; its prediction accuracy depends less on the estimated SoH, and the model has fewer degrees of freedom. Both approaches rely on previously developed lifetime models, each corresponding to a predefined SoH. In the first approach, model selection is aided by a state-machine-based algorithm; in the second, model selection is conditioned on exceedance of predefined thresholds. The approach is applied to data generated from tribological systems. The accuracy of the proposed models/approaches is assessed by calculating Root Squared Error (RSE), Mean Squared Error (MSE), and Absolute Error (ABE), and is discussed along with related advantages and disadvantages. Verification is done using cross-fold validation, exchanging training and test data. The newly introduced approach based on data-driven parametric models can be easily established and provides detailed information about remaining useful/consumed lifetime, valid for systems under constant load with stochastically occurring damage.

  9. A Two-Stage Estimation Method for Random Coefficient Differential Equation Models with Application to Longitudinal HIV Dynamic Data.

    PubMed

    Fang, Yun; Wu, Hulin; Zhu, Li-Xing

    2011-07-01

    We propose a two-stage estimation method for random coefficient ordinary differential equation (ODE) models. A maximum pseudo-likelihood estimator (MPLE) is derived based on a mixed-effects modeling approach and its asymptotic properties for population parameters are established. The proposed method does not require repeatedly solving ODEs, and is computationally efficient although it does pay a price with the loss of some estimation efficiency. However, the method does offer an alternative approach when the exact likelihood approach fails due to model complexity and high-dimensional parameter space, and it can also serve as a method to obtain the starting estimates for more accurate estimation methods. In addition, the proposed method does not need to specify the initial values of state variables and preserves all the advantages of the mixed-effects modeling approach. The finite sample properties of the proposed estimator are studied via Monte Carlo simulations and the methodology is also illustrated with application to an AIDS clinical data set.

  10. Assessment of Person Fit Using Resampling-Based Approaches

    ERIC Educational Resources Information Center

    Sinharay, Sandip

    2016-01-01

    De la Torre and Deng suggested a resampling-based approach for person-fit assessment (PFA). The approach involves the use of the [math equation unavailable] statistic, a corrected expected a posteriori estimate of the examinee ability, and the Monte Carlo (MC) resampling method. The Type I error rate of the approach was closer to the nominal level…

  11. Optical Method for Estimating the Chlorophyll Contents in Plant Leaves.

    PubMed

    Pérez-Patricio, Madaín; Camas-Anzueto, Jorge Luis; Sanchez-Alegría, Avisaí; Aguilar-González, Abiel; Gutiérrez-Miceli, Federico; Escobar-Gómez, Elías; Voisin, Yvon; Rios-Rojas, Carlos; Grajales-Coutiño, Ruben

    2018-02-22

    This work introduces a new vision-based approach for estimating chlorophyll contents in a plant leaf using reflectance and transmittance as base parameters. Images of the top and underside of the leaf are captured. To estimate the base parameters (reflectance/transmittance), a novel optical arrangement is proposed. The chlorophyll content is then estimated by using linear regression where the inputs are the reflectance and transmittance of the leaf. Performance of the proposed method for chlorophyll content estimation was compared with a spectrophotometer and a Soil Plant Analysis Development (SPAD) meter. Chlorophyll content estimation was realized for Lactuca sativa L., Azadirachta indica, Canavalia ensiforme, and Lycopersicon esculentum. Experimental results showed that, in terms of accuracy and processing speed, the proposed algorithm outperformed many of the previous vision-based methods that have used SPAD as a reference device. On the other hand, the accuracy reached 91% for crops such as Azadirachta indica, where the chlorophyll value was obtained using the spectrophotometer. Additionally, it was possible to achieve an estimation of the chlorophyll content in the leaf every 200 ms with a low-cost camera and a simple optical arrangement. This non-destructive method increased accuracy in chlorophyll content estimation by using an optical arrangement that yielded both reflectance and transmittance information, while the required hardware is cheap.
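
    Since the method above reduces to a linear regression from reflectance and transmittance to chlorophyll content, the fitting step is short. A minimal sketch with hypothetical calibration values (scikit-learn assumed; real calibration would come from spectrophotometer references):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical per-leaf calibration data: mean reflectance (R) and
# transmittance (T) from the optical arrangement vs. reference
# chlorophyll content from a spectrophotometer (units illustrative).
R = np.array([0.12, 0.10, 0.08, 0.06, 0.05])
T = np.array([0.30, 0.25, 0.18, 0.12, 0.09])
chl = np.array([18.0, 24.0, 33.0, 42.0, 50.0])

model = LinearRegression().fit(np.column_stack([R, T]), chl)
print(model.predict([[0.07, 0.15]]))  # estimate for a new leaf
```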

  12. Entropy-Based TOA Estimation and SVM-Based Ranging Error Mitigation in UWB Ranging Systems

    PubMed Central

    Yin, Zhendong; Cui, Kai; Wu, Zhilu; Yin, Liang

    2015-01-01

    The major challenges for Ultra-wide Band (UWB) indoor ranging systems are the dense multipath and non-line-of-sight (NLOS) problems of the indoor environment. To precisely estimate the time of arrival (TOA) of the first path (FP) in such a poor environment, a novel approach of entropy-based TOA estimation and support vector machine (SVM) regression-based ranging error mitigation is proposed in this paper. The proposed method can estimate the TOA precisely by measuring the randomness of the received signals and can mitigate the ranging error without recognition of the channel conditions. The entropy is used to measure the randomness of the received signals, and the FP can be identified as the sample that is followed by a sharp decrease in entropy. The SVM regression is employed to perform the ranging-error mitigation by modeling the regression between the characteristics of the received signals and the ranging error. The presented numerical simulation results show that the proposed approach achieves significant performance improvements in the CM1 to CM4 channels of the IEEE 802.15.4a standard, as compared to conventional approaches. PMID:26007726
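
    A minimal sketch of the entropy-based first-path detection described above; the window length, histogram binning, and entropy-drop threshold are illustrative assumptions:

```python
import numpy as np

def shannon_entropy(window, bins=16):
    """Histogram-based Shannon entropy (bits) of a sample window."""
    hist, _ = np.histogram(window, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def detect_first_path(signal, win=64, drop=1.0):
    """Slide a window over the received waveform and return the first
    index where entropy falls sharply: noise-only windows look random
    (high entropy), while windows containing the first path are more
    structured."""
    H = np.array([shannon_entropy(signal[i:i + win])
                  for i in range(len(signal) - win)])
    for i in range(1, len(H)):
        if H[i - 1] - H[i] > drop:
            return i  # candidate TOA sample index
    return None
```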

  13. The use of resighting data to estimate the rate of population growth of the snail kite in Florida

    USGS Publications Warehouse

    Dreitz, V.J.; Nichols, J.D.; Hines, J.E.; Bennetts, R.E.; Kitchens, W.M.; DeAngelis, D.L.

    2002-01-01

    The rate of population growth (lambda) is an important demographic parameter used to assess the viability of a population and to develop management and conservation agendas. We examined the use of resighting data to estimate lambda for the snail kite population in Florida from 1997-2000. The analyses consisted of (1) a robust design approach that derives an estimate of lambda from estimates of population size and (2) the Pradel (1996) temporal symmetry (TSM) approach that directly estimates lambda using an open-population capture-recapture model. Besides resighting data, both approaches required information on the number of unmarked individuals that were sighted during the sampling periods. The point estimates of lambda differed between the robust design and TSM approaches, but the 95% confidence intervals overlapped substantially. We believe the differences may be the result of sparse data and do not indicate the inappropriateness of either modelling technique. We focused on the results of the robust design because this approach provided estimates for all study years. Variation among these estimates was smaller than levels of variation among ad hoc estimates based on previously reported index statistics. We recommend that lambda of snail kites be estimated using capture-resighting methods rather than ad hoc counts.

  14. Comparison of regression coefficient and GIS-based methodologies for regional estimates of forest soil carbon stocks.

    PubMed

    Campbell, J Elliott; Moen, Jeremie C; Ney, Richard A; Schnoor, Jerald L

    2008-03-01

    Estimates of forest soil organic carbon (SOC) have applications in carbon science, soil quality studies, carbon sequestration technologies, and carbon trading. Forest SOC has been modeled using a regression coefficient methodology that applies mean SOC densities (mass/area) to broad forest regions. A higher-resolution model is based on an approach that employs a geographic information system (GIS) with soil databases and satellite-derived landcover images. Despite this advancement, the regression approach remains the basis of current state and federal level greenhouse gas inventories. Both approaches are analyzed in detail for Wisconsin forest soils from 1983 to 2001, applying rigorous error-fixing algorithms to soil databases. Resulting SOC stock estimates are 20% larger when determined using the GIS method rather than the regression approach. Average annual rates of increase in SOC stocks are 3.6 and 1.0 million metric tons of carbon per year for the GIS and regression approaches, respectively.

  15. Estimation of regional surface CO2 fluxes with GOSAT observations using two inverse modeling approaches

    NASA Astrophysics Data System (ADS)

    Maksyutov, Shamil; Takagi, Hiroshi; Belikov, Dmitry A.; Saeki, Tazu; Zhuravlev, Ruslan; Ganshin, Alexander; Lukyanov, Alexander; Yoshida, Yukio; Oshchepkov, Sergey; Bril, Andrey; Saito, Makoto; Oda, Tomohiro; Valsala, Vinu K.; Saito, Ryu; Andres, Robert J.; Conway, Thomas; Tans, Pieter; Yokota, Tatsuya

    2012-11-01

    Inverse estimation of surface CO2 fluxes is performed with an atmospheric transport model using ground-based and GOSAT observations. The NIES-retrieved CO2 column mixing ratio (XCO2) and column averaging kernel are provided by the GOSAT Level 2 product v. 2.0 and the PPDF-DOAS method. Monthly mean CO2 fluxes for 64 regions are estimated together with a global mean offset between GOSAT data and ground-based data. We used the fixed-lag Kalman filter to infer monthly fluxes for 42 sub-continental terrestrial regions and 22 oceanic basins. We estimate fluxes and compare results obtained by two inverse modeling approaches. In the basic approach, adopted in the GOSAT Level 4 product v. 2.01, we aggregate the GOSAT observations into monthly means over 5x5 degree grids, fluxes are estimated independently for each region, and the NIES atmospheric transport model is used for forward simulation. In the alternative method, the model-observation misfit is estimated for each observation separately and fluxes are spatially correlated using EOF analysis of the simulated flux variability, similar to a geostatistical approach, while the transport simulation is enhanced by coupling with the Lagrangian transport model Flexpart. Both methods use the same set of prior fluxes and region maps. Daily net ecosystem exchange (NEE) is predicted by the Vegetation Integrative Simulator for Trace gases (VISIT) optimized to match the seasonal cycle of atmospheric CO2. Monthly ocean-atmosphere CO2 fluxes are produced with an ocean pCO2 data assimilation system. Biomass burning fluxes were provided by the Global Fire Emissions Database (GFED), and monthly fossil fuel CO2 emissions are estimated with the ODIAC inventory. The results of analyzing one year of the GOSAT data suggest that when both GOSAT and ground-based data are used together, fluxes in tropical and other remote regions are obtained with lower associated uncertainties than in the analysis using only ground-based data. With version 2.0 of the L2 XCO2, the fluxes appear reasonable for many regions and seasons; however, there is a need to improve the L2 bias correction, data filtering, and the inverse modeling method to reduce estimated flux anomalies visible in some areas. We also observe that applying spatial flux correlations with the EOF-based approach reduces flux anomalies.

  16. Network modelling methods for FMRI.

    PubMed

    Smith, Stephen M; Miller, Karla L; Salimi-Khorshidi, Gholamreza; Webster, Matthew; Beckmann, Christian F; Nichols, Thomas E; Ramsey, Joseph D; Woolrich, Mark W

    2011-01-15

    There is great interest in estimating brain "networks" from FMRI data. This is often attempted by identifying a set of functional "nodes" (e.g., spatial ROIs or ICA maps) and then conducting a connectivity analysis between the nodes, based on the FMRI timeseries associated with the nodes. Analysis methods range from very simple measures that consider just two nodes at a time (e.g., correlation between two nodes' timeseries) to sophisticated approaches that consider all nodes simultaneously and estimate one global network model (e.g., Bayes net models). Many different methods are being used in the literature, but almost none has been carefully validated or compared for use on FMRI timeseries data. In this work we generate rich, realistic simulated FMRI data for a wide range of underlying networks, experimental protocols and problematic confounds in the data, in order to compare different connectivity estimation approaches. Our results show that in general correlation-based approaches can be quite successful, methods based on higher-order statistics are less sensitive, and lag-based approaches perform very poorly. More specifically: there are several methods that can give high sensitivity to network connection detection on good quality FMRI data, in particular, partial correlation, regularised inverse covariance estimation and several Bayes net methods; however, accurate estimation of connection directionality is more difficult to achieve, though Patel's τ can be reasonably successful. With respect to the various confounds added to the data, the most striking result was that the use of functionally inaccurate ROIs (when defining the network nodes and extracting their associated timeseries) is extremely damaging to network estimation; hence, results derived from inappropriate ROI definition (such as via structural atlases) should be regarded with great caution. Copyright © 2010 Elsevier Inc. All rights reserved.
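
    Among the well-performing methods named above, partial correlation is the simplest to state: invert the covariance of the node timeseries and rescale. A minimal sketch (with many nodes or short timeseries one would regularise the covariance, e.g. via the graphical lasso, before inverting):

```python
import numpy as np

def partial_correlation(ts):
    """Partial correlation between all node pairs from the precision
    (inverse covariance) matrix. ts: array (timepoints, nodes)."""
    prec = np.linalg.inv(np.cov(ts, rowvar=False))
    d = np.sqrt(np.diag(prec))
    pcorr = -prec / np.outer(d, d)
    np.fill_diagonal(pcorr, 1.0)
    return pcorr

# Toy usage: 200 timepoints, 5 nodes of white noise.
rng = np.random.default_rng(0)
print(partial_correlation(rng.standard_normal((200, 5))).round(2))
```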

  17. A Computationally Efficient Method for Polyphonic Pitch Estimation

    NASA Astrophysics Data System (ADS)

    Zhou, Ruohua; Reiss, Joshua D.; Mattavelli, Marco; Zoia, Giorgio

    2009-12-01

    This paper presents a computationally efficient method for polyphonic pitch estimation. The method employs the Fast Resonator Time-Frequency Image (RTFI) as the basic time-frequency analysis tool. The approach is composed of two main stages. First, a preliminary pitch estimation is obtained by means of a simple peak-picking procedure in the pitch energy spectrum. This spectrum is calculated from the original RTFI energy spectrum according to harmonic grouping principles. Then incorrect estimations are removed according to spectral irregularity and knowledge of the harmonic structures of music notes played on commonly used musical instruments. The new approach is compared with a variety of other frame-based polyphonic pitch estimation methods, and results demonstrate the high performance and computational efficiency of the approach.

  18. Application of zone-folding approach to the first-principles estimation of thermodynamic properties of carbon and ZrS2 -based nanotubes.

    PubMed

    Bandura, Andrei V; Porsev, Vitaly V; Evarestov, Robert A

    2016-03-15

    A zone-folding (ZF) approach is applied for the estimation of the phonon contributions to thermodynamic properties of carbon- and ZrS2-based nanotubes (NTs) of hexagonal morphology with different chiralities. The results obtained are compared with those from the direct calculation of the thermodynamic properties of NTs using the PBE0 hybrid exchange-correlation functional. The phonon contribution to the stability of NTs proved to be negligible for the internal energy and small for the Helmholtz free energy. It is found that the ZF approach allows an accurate estimation of phonon contributions to the internal energy, but slightly overestimates the phonon contributions to entropy. © 2015 Wiley Periodicals, Inc.

  19. A new approach to estimating evaporation from lakes and reservoirs based on energy balance and remote sensing data

    NASA Astrophysics Data System (ADS)

    Majidi, Maysam; Sadeghi, Morteza; Shafiei, Mojtaba; Alizadeh, Amin; Farid, Alireza; Azad, Mohammadreza; Vazifedoust, Majid

    2016-04-01

    Estimating evaporation from water bodies such as lakes and reservoirs is commonly a difficult task, especially due to the lack of reliable and available ground data. Remote sensing (RS) data have shown great potential for filling the gap. Nonetheless, interpretation of the RS data (e.g. optical reflectance, thermal emission, etc.) for estimating water evaporation has remained a challenge. In this paper, we present a novel approach for estimating water evaporation based on satellite RS data and some readily measurable ground data. In the proposed approach, named "Reference and Water surface Energy Balance (RWEB)", we define a reference surface and then solve the energy balance equation simultaneously for the reference surface and the water surface. This approach was tested over the Doosti dam reservoir (northeast of Iran) using weather station and RS data as well as water temperature measured biweekly throughout the study. Accuracy of the RWEB algorithm was examined by comparison to the standard "Bowen Ratio Energy Balance (BREB)" RS algorithm. The RMSD value of 0.047 mm/year indicated good agreement between the RWEB and BREB algorithms, while RWEB provides an easier-to-use approach regarding its required input variables.

  20. Optimal phase estimation with arbitrary a priori knowledge

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Demkowicz-Dobrzanski, Rafal

    2011-06-15

    The optimal-phase estimation strategy is derived when partial a priori knowledge on the estimated phase is available. The solution is found with the help of the most famous result from the entanglement theory: the positive partial transpose criterion. The structure of the optimal measurements, estimators, and the optimal probe states is analyzed. This Rapid Communication provides a unified framework bridging the gap in the literature on the subject, which until now dealt almost exclusively with two extreme cases: almost perfect knowledge (local approach based on Fisher information) and no a priori knowledge (global approach based on covariant measurements). Special attention is paid to a natural a priori probability distribution arising from a diffusion process.

  1. Benchmark dose analysis via nonparametric regression modeling

    PubMed Central

    Piegorsch, Walter W.; Xiong, Hui; Bhattacharya, Rabi N.; Lin, Lizhen

    2013-01-01

    Estimation of benchmark doses (BMDs) in quantitative risk assessment traditionally is based upon parametric dose-response modeling. It is a well-known concern, however, that if the chosen parametric model is uncertain and/or misspecified, inaccurate and possibly unsafe low-dose inferences can result. We describe a nonparametric approach for estimating BMDs with quantal-response data based on an isotonic regression method, and also study use of corresponding, nonparametric, bootstrap-based confidence limits for the BMD. We explore the confidence limits’ small-sample properties via a simulation study, and illustrate the calculations with an example from cancer risk assessment. It is seen that this nonparametric approach can provide a useful alternative for BMD estimation when faced with the problem of parametric model uncertainty. PMID:23683057
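
    A minimal sketch of the isotonic-regression idea for quantal data described above, with hypothetical dose-response counts; the paper's exact estimator is not reproduced, and its bootstrap confidence limits would simply repeat this fit on resampled datasets (scikit-learn assumed):

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

def bmd_isotonic(doses, n_affected, n_total, bmr=0.10):
    """Fit a monotone dose-response curve to quantal data and invert
    it at extra risk = bmr to obtain a nonparametric BMD estimate."""
    p_hat = n_affected / n_total
    iso = IsotonicRegression(increasing=True)
    p_fit = iso.fit_transform(doses, p_hat, sample_weight=n_total)
    p0 = p_fit[0]
    target = p0 + bmr * (1.0 - p0)       # extra-risk definition
    # Linearly interpolate the monotone fit to find the dose at target.
    return np.interp(target, p_fit, doses)

doses = np.array([0.0, 10.0, 50.0, 150.0, 400.0])
affected = np.array([2, 4, 9, 20, 36])
total = np.array([50, 50, 50, 50, 50])
print(bmd_isotonic(doses, affected, total))
```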

  2. Wavelet analysis for wind fields estimation.

    PubMed

    Leite, Gladeston C; Ushizima, Daniela M; Medeiros, Fátima N S; de Lima, Gilson G

    2010-01-01

    Wind field analysis from synthetic aperture radar images allows the estimation of wind direction and speed based on image descriptors. In this paper, we propose a framework to automate wind direction retrieval based on wavelet decomposition associated with spectral processing. We extend existing undecimated wavelet transform approaches by including à trous with the B(3) spline scaling function, in addition to other wavelet bases such as Gabor and Mexican-hat. The purpose is to extract more reliable directional information when wind speed values range from 5 to 10 m/s. Using C-band empirical models associated with the estimated directional information, we calculate local wind speed values and compare our results with QuikSCAT scatterometer data. The proposed approach has potential application in the evaluation of oil spills and wind farms.

  3. Cost calculator methods for estimating casework time in child welfare services: A promising approach for use in implementation of evidence-based practices and other service innovations.

    PubMed

    Holmes, Lisa; Landsverk, John; Ward, Harriet; Rolls-Reutz, Jennifer; Saldana, Lisa; Wulczyn, Fred; Chamberlain, Patricia

    2014-04-01

    Estimating costs in child welfare services is critical as new service models are incorporated into routine practice. This paper describes a unit costing estimation system developed in England (cost calculator) together with a pilot test of its utility in the United States where unit costs are routinely available for health services but not for child welfare services. The cost calculator approach uses a unified conceptual model that focuses on eight core child welfare processes. Comparison of these core processes in England and in four counties in the United States suggests that the underlying child welfare processes generated from England were perceived as very similar by child welfare staff in California county systems with some exceptions in the review and legal processes. Overall, the adaptation of the cost calculator for use in the United States child welfare systems appears promising. The paper also compares the cost calculator approach to the workload approach widely used in the United States and concludes that there are distinct differences between the two approaches with some possible advantages to the use of the cost calculator approach, especially in the use of this method for estimating child welfare costs in relation to the incorporation of evidence-based interventions into routine practice.

  4. Event-based estimation of water budget components using the network of multi-sensor capacitance probes

    USDA-ARS?s Scientific Manuscript database

    A time-scale-free approach was developed for estimation of water fluxes at boundaries of monitoring soil profile using water content time series. The approach uses the soil water budget to compute soil water budget components, i.e. surface-water excess (Sw), infiltration less evapotranspiration (I-E...

  5. An Information Retrieval Approach for Robust Prediction of Road Surface States.

    PubMed

    Park, Jae-Hyung; Kim, Kwanho

    2017-01-28

    Recently, due to the increasing importance of reducing severe vehicle accidents on roads (especially on highways), the automatic identification of road surface conditions, and the provisioning of such information to drivers in advance, have been gaining significant momentum as a proactive solution to decrease the number of vehicle accidents. In this paper, we first propose an information retrieval approach that aims to identify road surface states by combining conventional machine-learning techniques and moving average methods. Specifically, when signal information is received from a radar system, our approach attempts to estimate the current state of the road surface based on similar instances observed previously, using a given similarity function. Next, the estimated state is calibrated by using the recently estimated states to yield both effective and robust prediction results. To validate the performance of the proposed approach, we established a real-world experimental setting on a section of actual highway in South Korea and conducted a comparison with conventional approaches in terms of accuracy. The experimental results show that the proposed approach successfully outperforms the previously developed methods.

  6. An Information Retrieval Approach for Robust Prediction of Road Surface States

    PubMed Central

    Park, Jae-Hyung; Kim, Kwanho

    2017-01-01

    Recently, due to the increasing importance of reducing severe vehicle accidents on roads (especially on highways), the automatic identification of road surface conditions, and the provisioning of such information to drivers in advance, have been gaining significant momentum as a proactive solution to decrease the number of vehicle accidents. In this paper, we first propose an information retrieval approach that aims to identify road surface states by combining conventional machine-learning techniques and moving average methods. Specifically, when signal information is received from a radar system, our approach attempts to estimate the current state of the road surface based on similar instances observed previously, using a given similarity function. Next, the estimated state is calibrated by using the recently estimated states to yield both effective and robust prediction results. To validate the performance of the proposed approach, we established a real-world experimental setting on a section of actual highway in South Korea and conducted a comparison with conventional approaches in terms of accuracy. The experimental results show that the proposed approach successfully outperforms the previously developed methods. PMID:28134859
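
    A minimal sketch of the retrieve-then-calibrate idea from the two records above, assuming nearest-neighbour retrieval and a moving-window vote as the calibration; the feature representation and similarity function are application-specific assumptions:

```python
import numpy as np
from collections import deque

class RoadStateEstimator:
    """Classify the current radar signature from its most similar
    stored instances, then smooth the label over recent estimates."""

    def __init__(self, memory_x, memory_y, k=5, window=7):
        self.memory_x = np.asarray(memory_x)  # stored signatures
        self.memory_y = np.asarray(memory_y)  # known states (int labels)
        self.k = k
        self.recent = deque(maxlen=window)

    def estimate(self, x):
        # Similarity here is negative Euclidean distance.
        dist = np.linalg.norm(self.memory_x - x, axis=1)
        votes = self.memory_y[np.argsort(dist)[:self.k]]
        state = int(np.bincount(votes).argmax())   # k-NN majority vote
        self.recent.append(state)
        # Calibration: mode of the recent estimates.
        return int(np.bincount(np.array(self.recent)).argmax())
```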

  7. Wlan-Based Indoor Localization Using Neural Networks

    NASA Astrophysics Data System (ADS)

    Saleem, Fasiha; Wyne, Shurjeel

    2016-07-01

    Wireless indoor localization has generated recent research interest due to its numerous applications. This work investigates Wi-Fi based indoor localization using two variants of the fingerprinting approach. Specifically, we study the application of an artificial neural network (ANN) for implementing the fingerprinting approach and compare its localization performance with a probabilistic fingerprinting method that is based on maximum likelihood estimation (MLE) of the user location. We incorporate spatial correlation of fading into our investigations, which is often neglected in simulation studies and leads to erroneous location estimates. The localization performance is quantified in terms of accuracy, precision, robustness, and complexity. Multiple methods for handling the case of missing APs in the online stage are investigated. Our results indicate that ANN-based fingerprinting outperforms the probabilistic approach for all performance metrics considered in this work.

  8. The special case of the 2 × 2 table: asymptotic unconditional McNemar test can be used to estimate sample size even for analysis based on GEE.

    PubMed

    Borkhoff, Cornelia M; Johnston, Patrick R; Stephens, Derek; Atenafu, Eshetu

    2015-07-01

    Aligning the method used to estimate sample size with the planned analytic method ensures the sample size needed to achieve the planned power. When using generalized estimating equations (GEE) to analyze a paired binary primary outcome with no covariates, many use an exact McNemar test to calculate sample size. We reviewed the approaches to sample size estimation for paired binary data and compared the sample size estimates on the same numerical examples. We used the hypothesized sample proportions for the 2 × 2 table to calculate the correlation between the marginal proportions to estimate sample size based on GEE. We solved the inside proportions based on the correlation and the marginal proportions to estimate sample size based on the exact McNemar, asymptotic unconditional McNemar, and asymptotic conditional McNemar tests. The asymptotic unconditional McNemar test is a good approximation of the GEE method by Pan. The exact McNemar test is too conservative and yields larger sample size estimates than all other methods. In the special case of a 2 × 2 table, even when a GEE approach to binary logistic regression is the planned analytic method, the asymptotic unconditional McNemar test can be used to estimate sample size. We do not recommend using an exact McNemar test. Copyright © 2015 Elsevier Inc. All rights reserved.
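
    For discordant-cell proportions p10 and p01 of the 2 × 2 table, the asymptotic unconditional McNemar sample size (number of pairs) has a standard closed form. A sketch of one common version of that formula (SciPy assumed for the normal quantiles):

```python
from math import ceil, sqrt
from scipy.stats import norm

def mcnemar_sample_size(p10, p01, alpha=0.05, power=0.80):
    """Pairs needed for a two-sided asymptotic unconditional McNemar
    test, given the hypothesized discordant proportions."""
    za = norm.ppf(1 - alpha / 2)
    zb = norm.ppf(power)
    pd = p10 + p01            # total discordant proportion
    diff = p10 - p01
    n = (za * sqrt(pd) + zb * sqrt(pd - diff ** 2)) ** 2 / diff ** 2
    return ceil(n)

# Example: hypothesized discordance of 25% vs. 10%.
print(mcnemar_sample_size(0.25, 0.10))  # about 120 pairs
```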

  9. Estimating survival of precocial chicks during the prefledging period using a catch-curve analysis and count-based age-class data

    USGS Publications Warehouse

    McGowan, C.P.; Millspaugh, J.J.; Ryan, M.R.; Kruse, C.D.; Pavelka, G.

    2009-01-01

    Estimating reproductive success for birds with precocial young can be difficult because chicks leave nests soon after hatching and individuals or broods can be difficult to track. Researchers often turn to estimating survival during the prefledging period and, though effective, mark-recapture based approaches are not always feasible due to cost, time, and animal welfare concerns. Using a threatened population of Piping Plovers (Charadrius melodus) that breeds along the Missouri River, we present an approach for estimating chick survival during the prefledging period using long-term (1993-2005), count-based, age-class data. We used a modified catch-curve analysis, and data collected during three 5-day sampling periods near the middle of the breeding season. The approach has several ecological and statistical assumptions and our analyses were designed to minimize the probability of violating those assumptions. For example, limiting the sampling periods to only 5 days gave reasonable assurance that population size was stable during the sampling period. Annual daily survival estimates ranged from 0.825 (SD = 0.03) to 0.931 (0.02) depending on year and sampling period, with these estimates assuming constant survival during the prefledging period and no change in the age structure of the population. The average probability of survival to fledging ranged from 0.126 to 0.188. Our results are similar to other published estimates for this species in similar habitats. This method of estimating chick survival may be useful for a variety of precocial bird species when mark-recapture methods are not feasible and only count-based age class data are available. © 2009 Association of Field Ornithologists.

  10. Model-Based Engine Control Architecture with an Extended Kalman Filter

    NASA Technical Reports Server (NTRS)

    Csank, Jeffrey T.; Connolly, Joseph W.

    2016-01-01

    This paper discusses the design and implementation of an extended Kalman filter (EKF) for model-based engine control (MBEC). Previously proposed MBEC architectures feature an optimal tuner Kalman Filter (OTKF) to produce estimates of both unmeasured engine parameters and estimates for the health of the engine. The success of this approach relies on the accuracy of the linear model and the ability of the optimal tuner to update its tuner estimates based on only a few sensors. Advances in computer processing are making it possible to replace the piece-wise linear model, developed off-line, with an on-board nonlinear model running in real-time. This will reduce the estimation errors associated with the linearization process, and is typically referred to as an extended Kalman filter. The non-linear extended Kalman filter approach is applied to the Commercial Modular Aero-Propulsion System Simulation 40,000 (C-MAPSS40k) and compared to the previously proposed MBEC architecture. The results show that the EKF reduces the estimation error, especially during transient operation.

  11. Model-Based Engine Control Architecture with an Extended Kalman Filter

    NASA Technical Reports Server (NTRS)

    Csank, Jeffrey T.; Connolly, Joseph W.

    2016-01-01

    This paper discusses the design and implementation of an extended Kalman filter (EKF) for model-based engine control (MBEC). Previously proposed MBEC architectures feature an optimal tuner Kalman Filter (OTKF) to produce estimates of both unmeasured engine parameters and estimates for the health of the engine. The success of this approach relies on the accuracy of the linear model and the ability of the optimal tuner to update its tuner estimates based on only a few sensors. Advances in computer processing are making it possible to replace the piece-wise linear model, developed off-line, with an on-board nonlinear model running in real-time. This will reduce the estimation errors associated with the linearization process, and is typically referred to as an extended Kalman filter. The nonlinear extended Kalman filter approach is applied to the Commercial Modular Aero-Propulsion System Simulation 40,000 (C-MAPSS40k) and compared to the previously proposed MBEC architecture. The results show that the EKF reduces the estimation error, especially during transient operation.
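
    The EKF cycle itself is generic; only the models are engine-specific. A minimal sketch of one predict-update step, where f and h stand in for the on-board nonlinear engine model and sensor map (all names illustrative):

```python
import numpy as np

def ekf_step(x, P, u, z, f, h, F_jac, H_jac, Q, R):
    """One extended Kalman filter step: nonlinear propagation through
    f(x, u) and measurement h(x), linearized via the Jacobians."""
    # Predict through the nonlinear model.
    x_pred = f(x, u)
    F = F_jac(x, u)
    P_pred = F @ P @ F.T + Q
    # Update with the sensor measurement.
    H = H_jac(x_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy usage with a scalar state and identity dynamics.
f = lambda x, u: x
h = lambda x: x
F = lambda x, u: np.eye(1)
H = lambda x: np.eye(1)
x, P = ekf_step(np.zeros(1), np.eye(1), None, np.array([1.0]),
                f, h, F, H, 0.01 * np.eye(1), 0.1 * np.eye(1))
```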

  12. Modeling Global Biogenic Emission of Isoprene: Exploration of Model Drivers

    NASA Technical Reports Server (NTRS)

    Alexander, Susan E.; Potter, Christopher S.; Coughlan, Joseph C.; Klooster, Steven A.; Lerdau, Manuel T.; Chatfield, Robert B.; Peterson, David L. (Technical Monitor)

    1996-01-01

    Vegetation provides the major source of isoprene emission to the atmosphere. We present a modeling approach to estimate global biogenic isoprene emission. The isoprene flux model is linked to a process-based computer simulation model of biogenic trace-gas fluxes that operates on scales linking regional and global data sets and ecosystem nutrient transformations. Isoprene emission estimates are determined from estimates of ecosystem-specific biomass, emission factors, and algorithms based on light and temperature. Our approach differs from an existing modeling framework by including the process-based global model for terrestrial ecosystem production, satellite-derived ecosystem classification, and isoprene emission measurements from a tropical deciduous forest. We explore the sensitivity of model estimates to input parameters. The resulting emission products from the global 1 degree x 1 degree coverage provided by the satellite datasets and the process model allow flux estimations across large spatial scales and enable direct linkage to atmospheric models of trace-gas transport and transformation.

  13. Estimate of methane emissions from oil and gas operations in the Uintah Basin using airborne measurements and Lidar wind data

    NASA Astrophysics Data System (ADS)

    Karion, A.; Sweeney, C.; Petron, G.; Frost, G. J.; Trainer, M.; Brewer, A.; Hardesty, R.; Conley, S. A.; Wolter, S.; Newberger, T.; Kofler, J.; Tans, P. P.

    2012-12-01

    During a February 2012 campaign in the Uintah oil and gas basin in northeastern Utah, thirteen research flights were conducted in conjunction with a variety of ground-based measurements. Using aircraft-based high-resolution (0.5 Hz) observations of methane (CH4) and carbon dioxide (CO2), along with High-Resolution Doppler Lidar wind observations from a ground site in the basin, we have calculated the basin-wide CH4 flux on several days. Uncertainty estimates are calculated for each day and are generally large for all but one flight day. On one day, February 3, uncertainty on the estimate from a mass balance approach is better than 30% due to ideal meteorological conditions, including a well-mixed boundary layer and low wind variability both in time and altitude, as determined from the Lidar wind observations. This aircraft-based mass balance approach to flux estimates is a critical and valuable tool for estimating CH4 emissions from oil and gas basins.
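
    A minimal sketch of the aircraft mass-balance idea above: emissions equal the mean advecting wind times the integral of the CH4 enhancement across the downwind transect times the mixed-layer depth. All numbers, including the molar density of air, are illustrative:

```python
import numpy as np

def mass_balance_flux(ch4_ppb, background_ppb, seg_len_m, wind_ms,
                      layer_depth_m, air_molar_density=40.0):
    """Basin emission estimate in mol CH4/s from a downwind transect:
    flux = U * sum(enhancement * air density) * segment length * depth."""
    enh = (np.asarray(ch4_ppb) - background_ppb) * 1e-9   # mole fraction
    conc = enh * air_molar_density                        # mol CH4 / m^3
    return wind_ms * conc.sum() * seg_len_m * layer_depth_m

# Toy transect: 20 samples spaced 500 m apart, Gaussian-shaped plume.
ch4 = 1900.0 + 50.0 * np.exp(-np.linspace(-2, 2, 20) ** 2)
print(mass_balance_flux(ch4, 1900.0, 500.0, 5.0, 800.0))
```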

  14. A New Approach for Estimating Entrainment Rate in Cumulus Clouds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu C.; Liu, Y.; Yum, S. S.

    2012-02-16

    A new approach is presented to estimate entrainment rate in cumulus clouds. The new approach is directly derived from the definition of fractional entrainment rate and relates it to mixing fraction and the height above cloud base. The results derived from the new approach compare favorably with those obtained with a commonly used approach, and have smaller uncertainty. This new approach has several advantages: it eliminates the need for in-cloud measurements of temperature and water vapor content, which are often problematic in current aircraft observations; it has the potential for straightforwardly connecting the estimation of entrainment rate and the microphysical effects of entrainment-mixing processes; it also has the potential for developing a remote sensing technique to infer entrainment rate.

  15. Estimating the variance for heterogeneity in arm-based network meta-analysis.

    PubMed

    Piepho, Hans-Peter; Madden, Laurence V; Roger, James; Payne, Roger; Williams, Emlyn R

    2018-04-19

    Network meta-analysis can be implemented by using arm-based or contrast-based models. Here we focus on arm-based models and fit them using generalized linear mixed model procedures. Full maximum likelihood (ML) estimation leads to biased trial-by-treatment interaction variance estimates for heterogeneity. Thus, our objective is to investigate alternative approaches to variance estimation that reduce bias compared with full ML. Specifically, we use penalized quasi-likelihood/pseudo-likelihood and hierarchical (h) likelihood approaches. In addition, we consider a novel model modification that yields estimators akin to the residual maximum likelihood estimator for linear mixed models. The proposed methods are compared by simulation, and 2 real datasets are used for illustration. Simulations show that penalized quasi-likelihood/pseudo-likelihood and h-likelihood reduce bias and yield satisfactory coverage rates. Sum-to-zero restriction and baseline contrasts for random trial-by-treatment interaction effects, as well as a residual ML-like adjustment, also reduce bias compared with an unconstrained model when ML is used, but coverage rates are not quite as good. Penalized quasi-likelihood/pseudo-likelihood and h-likelihood are therefore recommended. Copyright © 2018 John Wiley & Sons, Ltd.

  16. Efficient dense blur map estimation for automatic 2D-to-3D conversion

    NASA Astrophysics Data System (ADS)

    Vosters, L. P. J.; de Haan, G.

    2012-03-01

    Focus is an important depth cue for 2D-to-3D conversion of low depth-of-field images and video. However, focus can only be reliably estimated on edges. Therefore, Bea et al. [1] first proposed an optimization-based approach to propagate focus to non-edge image portions, for single image focus editing. While their approach produces accurate dense blur maps, the computational complexity and memory requirements for solving the resulting sparse linear system with standard multigrid or (multilevel) preconditioning techniques are infeasible within the stringent requirements of the consumer electronics and broadcast industry. In this paper we propose fast, efficient, low latency, line-scanning based focus propagation, which mitigates the need for complex multigrid or (multilevel) preconditioning techniques. In addition we propose facial blur compensation to compensate for false shading edges that cause incorrect blur estimates in people's faces. In general, shading leads to incorrect focus estimates, which may lead to unnatural 3D and visual discomfort. Since visual attention mostly tends to faces, our solution solves the most distracting errors. A subjective assessment by paired comparison on a set of challenging low depth-of-field images shows that the proposed approach achieves 3D image quality equal to that of optimization-based approaches, and that facial blur compensation results in a significant improvement.

  17. Bayesian source term estimation of atmospheric releases in urban areas using LES approach.

    PubMed

    Xue, Fei; Kikumoto, Hideki; Li, Xiaofeng; Ooka, Ryozo

    2018-05-05

    The estimation of source information from limited measurements of a sensor network is a challenging inverse problem, which can be viewed as an assimilation process of the observed concentration data and the predicted concentration data. When dealing with releases in built-up areas, the predicted data are generally obtained by the Reynolds-averaged Navier-Stokes (RANS) equations, which yields building-resolving results; however, RANS-based models are outperformed by large-eddy simulation (LES) in the predictions of both airflow and dispersion. Therefore, it is important to explore the possibility of improving the estimation of the source parameters by using the LES approach. In this paper, a novel source term estimation method is proposed based on LES approach using Bayesian inference. The source-receptor relationship is obtained by solving the adjoint equations constructed using the time-averaged flow field simulated by the LES approach based on the gradient diffusion hypothesis. A wind tunnel experiment with a constant point source downwind of a single building model is used to evaluate the performance of the proposed method, which is compared with that of the existing method using a RANS model. The results show that the proposed method reduces the errors of source location and releasing strength by 77% and 28%, respectively. Copyright © 2018 Elsevier B.V. All rights reserved.

  18. A new methodology for estimating nuclear casualties as a function of time.

    PubMed

    Zirkle, Robert A; Walsh, Terri J; Disraelly, Deena S; Curling, Carl A

    2011-09-01

    The Human Response Injury Profile (HRIP) nuclear methodology provides an estimate of casualties occurring as a consequence of nuclear attacks against military targets for planning purposes. The approach develops user-defined, time-based casualty and fatality estimates based on progressions of underlying symptoms and their severity changes over time. This paper provides a description of the HRIP nuclear methodology and its development, including inputs, human response and the casualty estimation process.

  19. A comparison of two approaches to modelling snow cover dynamics at the Polish Polar Station at Hornsund

    NASA Astrophysics Data System (ADS)

    Luks, B.; Osuch, M.; Romanowicz, R. J.

    2012-04-01

    We compare two approaches to modelling snow cover dynamics at the Polish Polar Station at Hornsund. In the first approach we apply the physically-based Utah Energy Balance Snow Accumulation and Melt Model (UEB) (Tarboton et al., 1995; Tarboton and Luce, 1996). The model uses a lumped representation of the snowpack with two primary state variables: snow water equivalence and energy. Its main driving inputs are air temperature, precipitation, wind speed, humidity and radiation (estimated from the diurnal temperature range). Those variables are used for physically-based calculations of radiative, sensible, latent and advective heat exchanges with a 3-hour time step. The second method is an application of a statistically efficient lumped-parameter time series approach to modelling the dynamics of snow cover, based on daily meteorological measurements from the same area. A dynamic Stochastic Transfer Function model is developed that follows the Data Based Mechanistic approach, where a stochastic data-based identification of model structure and an estimation of its parameters are followed by a physical interpretation. We focus on the analysis of uncertainty of both model outputs. In the time series approach, the applied techniques also provide estimates of the modelling errors and the uncertainty of the model parameters. In the first, physically-based approach, the applied UEB model is deterministic: it assumes that the observations are without errors and that the model structure perfectly describes the processes within the snowpack. To take into account the model and observation errors, we applied a version of the Generalized Likelihood Uncertainty Estimation (GLUE) technique, which also provides estimates of the modelling errors and the uncertainty of the model parameters. The observed snowpack water equivalent values are compared with those simulated with 95% confidence bounds. This work was supported by the National Science Centre of Poland (grant no. 7879/B/P01/2011/40). Tarboton, D. G., T. G. Chowdhury and T. H. Jackson, 1995. A Spatially Distributed Energy Balance Snowmelt Model. In K. A. Tonnessen, M. W. Williams and M. Tranter (Eds.), Proceedings of a Boulder Symposium, July 3-14, IAHS Publ. no. 228, pp. 141-155. Tarboton, D. G. and C. H. Luce, 1996. Utah Energy Balance Snow Accumulation and Melt Model (UEB). Computer model technical description and users guide, Utah Water Research Laboratory and USDA Forest Service Intermountain Research Station (http://www.engineering.usu.edu/dtarb/). 64 pp.

  20. Estimating uncertainty in ambient and saturation nutrient uptake metrics from nutrient pulse releases in stream ecosystems

    DOE PAGES

    Brooks, Scott C.; Brandt, Craig C.; Griffiths, Natalie A.

    2016-10-07

    Nutrient spiraling is an important ecosystem process characterizing nutrient transport and uptake in streams. Various nutrient addition methods are used to estimate uptake metrics; however, uncertainty in the metrics is not often evaluated. A method was developed to quantify uncertainty in ambient and saturation nutrient uptake metrics estimated from saturating pulse nutrient additions (Tracer Additions for Spiraling Curve Characterization; TASCC). Using a Monte Carlo (MC) approach, the 95% confidence interval (CI) was estimated for ambient uptake lengths (Sw-amb) and maximum areal uptake rates (Umax) based on 100,000 datasets generated from each of four nitrogen and five phosphorus TASCC experiments conducted seasonally in a forest stream in eastern Tennessee, U.S.A. Uncertainty estimates from the MC approach were compared to the CIs estimated from ordinary least squares (OLS) and non-linear least squares (NLS) models used to calculate Sw-amb and Umax, respectively, from the TASCC method. The CIs for Sw-amb and Umax were large, but were not consistently larger using the MC method. Despite the large CIs, significant differences (based on nonoverlapping CIs) in nutrient metrics among seasons were found, with more significant differences using the OLS/NLS vs. the MC method. Lastly, we suggest that the MC approach is a robust way to estimate uncertainty, as the calculation of Sw-amb and Umax violates assumptions of OLS/NLS while the MC approach is free of these assumptions. The MC approach can be applied to other ecosystem metrics that are calculated from multiple parameters, providing a more robust estimate of these metrics and their associated uncertainties.

  2. Nonlinear adaptive control system design with asymptotically stable parameter estimation error

    NASA Astrophysics Data System (ADS)

    Mishkov, Rumen; Darmonski, Stanislav

    2018-01-01

    The paper presents a new general method for nonlinear adaptive system design with asymptotic stability of the parameter estimation error. The advantages of the approach include asymptotic unknown-parameter estimation without persistent excitation and the capability to directly control the transient response time of the estimates. The proposed method modifies the basic parameter estimation dynamics designed via a known nonlinear adaptive control approach. The modification is based on the generalised prediction error, a priori constraints with a hierarchical parameter projection algorithm, and the stable data accumulation concept. The data accumulation principle is the main tool for achieving asymptotic unknown-parameter estimation. It relies on the parametric identifiability property introduced for the system. Necessary and sufficient conditions for exponential stability of the data accumulation dynamics are derived. The approach is applied to nonlinear adaptive speed-tracking vector control of a three-phase induction motor.

  3. Graphical Evaluation of the Ridge-Type Robust Regression Estimators in Mixture Experiments

    PubMed Central

    Erkoc, Ali; Emiroglu, Esra; Akay, Kadri Ulas

    2014-01-01

    In mixture experiments, estimation of the parameters is generally based on ordinary least squares (OLS). However, in the presence of multicollinearity and outliers, OLS can result in very poor estimates. In this case, effects due to the combined outlier-multicollinearity problem can be reduced to a certain extent by using alternative approaches. One of these approaches is to use biased-robust regression techniques for the estimation of parameters. In this paper, we evaluate various ridge-type robust estimators in the cases where there are multicollinearity and outliers during the analysis of mixture experiments. Also, for selection of the biasing parameter, we use fraction-of-design-space plots to evaluate the effect of the ridge-type robust estimators with respect to the scaled mean squared error of prediction. The suggested graphical approach is illustrated on the Hald cement data set. PMID:25202738
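
    As a runnable illustration of the biased-robust idea (though not the specific estimators evaluated in the paper), scikit-learn's HuberRegressor combines a robust (Huber) loss with a ridge (L2) penalty through its alpha parameter. The near-collinear simulated data and the gross outlier are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import HuberRegressor, LinearRegression

rng = np.random.default_rng(0)
n = 40
x1 = rng.uniform(0, 1, n)
x2 = 0.95 * x1 + rng.normal(0, 0.02, n)      # near-collinear regressors
X = np.column_stack([x1, x2])
y = 3 * x1 + 2 * x2 + rng.normal(0, 0.1, n)
y[0] += 5.0                                   # single gross outlier

ols = LinearRegression().fit(X, y)
for alpha in (0.0001, 0.01, 1.0):             # biasing (ridge) parameter
    hub = HuberRegressor(alpha=alpha, epsilon=1.35).fit(X, y)
    print(f"alpha={alpha:<7} huber coefs={np.round(hub.coef_, 2)}")
print("OLS coefs (distorted by outlier/collinearity):", np.round(ols.coef_, 2))
```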

  5. Inverse and forward modeling under uncertainty using MRE-based Bayesian approach

    NASA Astrophysics Data System (ADS)

    Hou, Z.; Rubin, Y.

    2004-12-01

    A stochastic inverse approach for subsurface characterization is proposed and applied to the shallow vadose zone at a winery field site in northern California and to a gas reservoir at the Ormen Lange field site in the North Sea. The approach is formulated in a Bayesian-stochastic framework, whereby the unknown parameters are identified in terms of their statistical moments or their probabilities. Instead of the traditional single-valued estimation/prediction provided by deterministic methods, the approach gives a probability distribution for an unknown parameter. This allows calculating the mean, the mode, and the confidence interval, which is useful for a rational treatment of uncertainty and its consequences. The approach also allows incorporating data of various types and different error levels, including measurements of state variables as well as information such as bounds on or statistical moments of the unknown parameters, which may represent prior information. To obtain the minimally subjective prior probabilities required for the Bayesian approach, the principle of Minimum Relative Entropy (MRE) is employed. The approach is tested at field sites for flow parameter identification and soil moisture estimation in the vadose zone and for gas saturation estimation at great depth below the ocean floor. Results indicate the potential of coupling various types of field data within an MRE-based Bayesian formalism for improving the estimation of the parameters of interest.

  6. Agreement and repeatability of vascular reactivity estimates based on a breath-hold task and a resting state scan.

    PubMed

    Lipp, Ilona; Murphy, Kevin; Caseras, Xavier; Wise, Richard G

    2015-06-01

    FMRI BOLD responses to changes in neural activity are influenced by the reactivity of the vasculature. By complementing a task-related BOLD acquisition with a vascular reactivity measure obtained through breath-holding or hypercapnia, this unwanted variance can be statistically reduced in the BOLD responses of interest. Recently, it has been suggested that vascular reactivity can also be estimated using a resting state scan. This study aimed to compare three breath-hold-based analysis approaches (block design, sine-cosine regressor and CO2 regressor) and a resting state approach (CO2 regressor) to measure vascular reactivity. We tested BOLD variance explained by the model and repeatability of the measures. Fifteen healthy participants underwent a breath-hold task and a resting state scan with end-tidal CO2 being recorded during both. Vascular reactivity was defined as the CO2-related BOLD percent signal change per mmHg change in CO2. Maps and regional vascular reactivity estimates showed high repeatability when the breath-hold task was used. Repeatability and variance explained by the CO2 trace regressor were lower for the resting-state-based approach, which resulted in highly variable measures of vascular reactivity. We conclude that breath-hold-based vascular reactivity estimations are more repeatable than resting-based estimates, and that there are limitations to replacing breath-hold scans with resting state scans for vascular reactivity assessment. Copyright © 2015. Published by Elsevier Inc.
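
    The CO2-regressor approach reduces to a simple regression of the BOLD time series on the end-tidal CO2 trace, with reactivity expressed as % signal change per mmHg. A minimal sketch with synthetic signals as illustrative assumptions (the temporal shifting/convolution of the regressor used in practice is not shown):

```python
import numpy as np

rng = np.random.default_rng(7)
tr, n = 2.0, 300                              # repetition time (s), volumes
t = np.arange(n) * tr
petco2 = 40 + 8 * (np.sin(2 * np.pi * t / 120) > 0.7)   # breath-hold-like CO2 rises
bold = 1000 * (1 + 0.003 * (petco2 - petco2.mean())) + rng.normal(0, 2, n)

# Design matrix: demeaned CO2 regressor plus an intercept.
X = np.column_stack([petco2 - petco2.mean(), np.ones(n)])
beta, *_ = np.linalg.lstsq(X, bold, rcond=None)
cvr = 100 * beta[0] / beta[1]                 # % BOLD signal change per mmHg
print(f"estimated reactivity = {cvr:.3f} %BOLD/mmHg (simulated truth 0.3)")
```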

  8. Optimization-Based Sensor Fusion of GNSS and IMU Using a Moving Horizon Approach

    PubMed Central

    Girrbach, Fabian; Hol, Jeroen D.; Bellusci, Giovanni; Diehl, Moritz

    2017-01-01

    The rise of autonomous systems operating close to humans imposes new challenges in terms of robustness and precision on the estimation and control algorithms. Approaches based on nonlinear optimization, such as moving horizon estimation, have been shown to improve the accuracy of the estimated solution compared to traditional filter techniques. This paper introduces an optimization-based framework for multi-sensor fusion following a moving horizon scheme. The framework is applied to the often occurring estimation problem of motion tracking by fusing measurements of a global navigation satellite system receiver and an inertial measurement unit. The resulting algorithm is used to estimate position, velocity, and orientation of a maneuvering airplane and is evaluated against an accurate reference trajectory. A detailed study of the influence of the horizon length on the quality of the solution is presented and evaluated against filter-like and batch solutions of the problem. The versatile configuration possibilities of the framework are finally used to analyze the estimated solutions at different evaluation times exposing a nearly linear behavior of the sensor fusion problem. PMID:28534857
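
    A toy moving-horizon estimator conveys the scheme: as each measurement arrives, re-solve a windowed least-squares problem over process and measurement residuals and keep the current-state estimate. The 1-D constant-velocity model, noise levels, and horizon length are illustrative assumptions, far simpler than the GNSS/IMU fusion above.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(10)
dt, n, horizon = 0.1, 80, 10
F = np.array([[1, dt], [0, 1]])                  # constant-velocity dynamics

# Simulate truth and noisy position measurements.
x = np.zeros((n, 2)); x[0] = [0.0, 1.0]
for k in range(1, n):
    x[k] = F @ x[k - 1] + rng.normal(0, [0.005, 0.02])
z = x[:, 0] + rng.normal(0, 0.05, n)

def residuals(flat, z_win):
    s = flat.reshape(-1, 2)
    r_meas = (s[:, 0] - z_win) / 0.05            # measurement residuals
    r_proc = ((s[1:] - s[:-1] @ F.T) / [0.005, 0.02]).ravel()  # process residuals
    return np.concatenate([r_meas, r_proc])

est = []
for k in range(horizon, n + 1):
    z_win = z[k - horizon:k]
    x0 = np.column_stack([z_win, np.gradient(z_win, dt)]).ravel()
    sol = least_squares(residuals, x0, args=(z_win,))
    est.append(sol.x.reshape(-1, 2)[-1])         # current-state estimate
est = np.array(est)
rmse = np.sqrt(np.mean((est[:, 0] - x[horizon - 1:, 0]) ** 2))
print(f"MHE position RMSE over the run: {rmse:.3f}")
```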

  9. Digital Biomass Accumulation Using High-Throughput Plant Phenotype Data Analysis.

    PubMed

    Rahaman, Md Matiur; Ahsan, Md Asif; Gillani, Zeeshan; Chen, Ming

    2017-09-01

    Biomass is an important phenotypic trait in functional ecology and growth analysis. The typical methods for measuring biomass are destructive, and they require numerous individuals to be cultivated for repeated measurements. With the advent of image-based high-throughput plant phenotyping facilities, non-destructive biomass measuring methods have attempted to overcome this problem. Thus, the estimation of the biomass of individual plants from their digital images is becoming more important. In this paper, we propose an approach to biomass estimation based on image-derived phenotypic traits. Several image-based biomass studies treat plant biomass as a linear function of the projected plant area in images alone. We instead modeled plant volume as a function of plant area, plant compactness, and plant age to generalize the linear biomass model. The obtained results confirm the proposed model and can explain most of the observed variance during image-derived biomass estimation. Moreover, a small difference was observed between actual and estimated digital biomass, which indicates that our proposed approach can be used to estimate digital biomass accurately.
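
    A sketch of the generalized linear biomass model described above, fit by ordinary least squares; the synthetic area, compactness, and age values and their coefficients are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
area = rng.uniform(50, 500, n)           # projected plant area from images
compact = rng.uniform(0.2, 0.9, n)       # compactness (area / convex hull area)
age = rng.uniform(5, 40, n)              # days after sowing
biomass = 0.8 * area + 120 * compact + 2.5 * age + rng.normal(0, 20, n)

# Biomass as a function of area, compactness, and age (plus intercept).
X = np.column_stack([area, compact, age, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, biomass, rcond=None)
pred = X @ coef
r2 = 1 - ((biomass - pred) ** 2).sum() / ((biomass - biomass.mean()) ** 2).sum()
print("coefficients [area, compactness, age, intercept]:", np.round(coef, 2))
print(f"R^2 = {r2:.3f}")
```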

  11. Probabilistic estimation of residential air exchange rates for population-based human exposure modeling

    EPA Science Inventory

    Residential air exchange rates (AERs) are a key determinant in the infiltration of ambient air pollution indoors. Population-based human exposure models using probabilistic approaches to estimate personal exposure to air pollutants have relied on input distributions from AER meas...

  12. Measuring reproductive health: review of community-based approaches to assessing morbidity.

    PubMed Central

    Sadana, R.

    2000-01-01

    This article begins by reviewing selected past approaches to estimating the prevalence of a range of morbidities through the use of household or community-based interview surveys in developed and developing countries. Subsequently, it reviews epidemiological studies that have used a range of methods to estimate the prevalence of reproductive morbidities. A detailed review of recent community or hospital based health interview validation studies that compare self-reported, clinical and laboratory measures is presented. Studies from Bangladesh, Bolivia, China, Egypt, India, Indonesia, Nigeria, Philippines and Turkey provide empirical evidence that self-reported morbidity and observed morbidity measure different phenomena and therefore different aspects of reproductive health and illness. Rather than estimating the prevalence of morbidity, interview-based surveys may provide useful information about the disability or burden associated with reproductive health and illness. PMID:10859858

  13. How the 2SLS/IV estimator can handle equality constraints in structural equation models: a system-of-equations approach.

    PubMed

    Nestler, Steffen

    2014-05-01

    Parameters in structural equation models are typically estimated using the maximum likelihood (ML) approach. Bollen (1996) proposed an alternative non-iterative, equation-by-equation estimator that uses instrumental variables. Although this two-stage least squares/instrumental variables (2SLS/IV) estimator has good statistical properties, one problem with its application is that parameter equality constraints cannot be imposed. This paper presents a mathematical solution to this problem that is based on an extension of the 2SLS/IV approach to a system of equations. We present an example in which our approach was used to examine strong longitudinal measurement invariance. We also investigated the new approach in a simulation study that compared it with ML in the examination of the equality of two latent regression coefficients and strong measurement invariance. Overall, the results show that the suggested approach is a useful extension of the original 2SLS/IV estimator and allows for the effective handling of equality constraints in structural equation models. © 2013 The British Psychological Society.
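
    The underlying 2SLS/IV estimator (without the paper's system-of-equations constraint handling) can be sketched in a few lines: project the endogenous regressor on the instruments, then regress the outcome on the projection. The simulated regressor, instruments, and coefficients are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(13)
n = 2000
z1, z2 = rng.normal(size=n), rng.normal(size=n)     # instruments
u = rng.normal(size=n)                              # shared disturbance
x = 0.8 * z1 + 0.5 * z2 + u + rng.normal(size=n)    # endogenous regressor
y = 1.5 * x + 2.0 * u + rng.normal(size=n)          # structural equation, beta = 1.5

ones = np.ones(n)
# Stage 1: project x on the instruments (plus intercept).
Z = np.column_stack([z1, z2, ones])
x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
# Stage 2: regress y on the projected regressor.
X2 = np.column_stack([x_hat, ones])
beta_2sls = np.linalg.lstsq(X2, y, rcond=None)[0][0]
beta_ols = np.linalg.lstsq(np.column_stack([x, ones]), y, rcond=None)[0][0]
print(f"OLS (biased) = {beta_ols:.2f}, 2SLS = {beta_2sls:.2f}, true = 1.5")
```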

  14. Comparison of gestational dating methods and implications for exposure-outcome associations: an example with PM2.5 and preterm birth

    EPA Science Inventory

    OBJECTIVES: Estimating gestational age is usually based on date of last menstrual period (LMP) or clinical estimation (CE); both approaches introduce potential bias. Differences in methods of estimation may lead to misclassification and inconsistencies in risk estimates, particu...

  15. A stochastic approach to online vehicle state and parameter estimation, with application to inertia estimation for rollover prevention and battery charge/health estimation.

    DOT National Transportation Integrated Search

    2013-08-01

    This report summarizes research conducted at Penn State, Virginia Tech, and West Virginia University on the development of algorithms based on the generalized polynomial chaos (gpc) expansion for the online estimation of automotive and transportation...

  16. How does spatial and temporal resolution of vegetation index impact crop yield estimation?

    USDA-ARS?s Scientific Manuscript database

    Timely and accurate estimation of crop yield before harvest is critical for the food market and administrative planning. Remote sensing data have been used in crop yield estimation for decades. The process-based approach uses a light use efficiency model to estimate crop yield. Vegetation index (VI) ...

  17. A weight modification sequential method for VSC-MTDC power system state estimation

    NASA Astrophysics Data System (ADS)

    Yang, Xiaonan; Zhang, Hao; Li, Qiang; Guo, Ziming; Zhao, Kun; Li, Xinpeng; Han, Feng

    2017-06-01

    This paper presents an effective sequential approach based on weight modification for VSC-MTDC power system state estimation, called the weight modification sequential method. The proposed approach simplifies the AC/DC system state estimation algorithm by modifying the weight of the state quantity to keep the matrix dimension constant. The weight modification sequential method also makes the VSC-MTDC system state estimation results more accurate and increases the speed of calculation. The effectiveness of the proposed weight modification sequential method is demonstrated and validated on a modified IEEE 14-bus system.

  18. Pros, Cons, and Alternatives to Weight Based Cost Estimating

    NASA Technical Reports Server (NTRS)

    Joyner, Claude R.; Lauriem, Jonathan R.; Levack, Daniel H.; Zapata, Edgar

    2011-01-01

    Many cost estimating tools use weight as a major parameter in projecting the cost. This is often combined with modifying factors such as complexity, technical maturity of design, environment of operation, etc. to increase the fidelity of the estimate. For a set of conceptual designs, all meeting the same requirements, increased weight can be a major driver in increased cost. However, once a design is fixed, increased weight generally decreases cost, while decreased weight generally increases cost - and the relationship is not linear. Alternative approaches to estimating cost without using weight (except perhaps for materials costs) have been attempted to try to produce a tool usable throughout the design process - from concept studies through development. This paper will address the pros and cons of using weight-based models for cost estimating, using liquid rocket engines as the example. It will then examine approaches that minimize the impact of weight-based cost estimating. The Rocket Engine Cost Model (RECM) is an attribute-based model developed internally by Pratt & Whitney Rocketdyne for NASA. RECM will be presented primarily to show a successful method to use design and programmatic parameters instead of weight to estimate both design and development costs and production costs. An operations model developed by KSC, the Launch and Landing Effects Ground Operations model (LLEGO), will also be discussed.

  19. Optimizing Estimated Loss Reduction for Active Sampling in Rank Learning

    DTIC Science & Technology

    2008-01-01

    active learning framework for SVM-based and boosting-based rank learning. Our approach suggests sampling based on maximizing the estimated loss differential over unlabeled data. Experimental results on two benchmark corpora show that the proposed model substantially reduces the labeling effort, and achieves superior performance rapidly with as much as 30% relative improvement over the margin-based sampling

  20. Quantifying uncertainties in streamflow predictions through signature based inference of hydrological model parameters

    NASA Astrophysics Data System (ADS)

    Fenicia, Fabrizio; Reichert, Peter; Kavetski, Dmitri; Albert, Carlo

    2016-04-01

    The calibration of hydrological models based on signatures (e.g. Flow Duration Curves - FDCs) is often advocated as an alternative to model calibration based on the full time series of system responses (e.g. hydrographs). Signature-based calibration is motivated by various arguments. From a conceptual perspective, calibration on signatures is a way to filter out errors that are difficult to represent when calibrating on the full time series. Such errors may for example occur when observed and simulated hydrographs are shifted, either on the "time" axis (i.e. left or right), or on the "streamflow" axis (i.e. above or below). These shifts may be due to errors in the precipitation input (time or amount), and if not properly accounted for in the likelihood function, may cause biased parameter estimates (e.g. estimated model parameters that do not reproduce the recession characteristics of a hydrograph). From a practical perspective, signature-based calibration is seen as a possible solution for making predictions in ungauged basins. Where streamflow data are not available, it may in fact be possible to reliably estimate streamflow signatures. Previous research has for example shown how FDCs can be reliably estimated at ungauged locations based on climatic and physiographic influence factors. Typically, the goal of signature-based calibration is not the prediction of the signatures themselves, but the prediction of the system responses. Ideally, the prediction of system responses should be accompanied by a reliable quantification of the associated uncertainties. Previous approaches for signature-based calibration, however, do not allow reliable estimates of streamflow predictive distributions. Here, we illustrate how the Bayesian approach can be employed to obtain reliable streamflow predictive distributions based on signatures. A case study is presented, where a hydrological model is calibrated on FDCs and additional signatures. We propose an approach where the likelihood function for the signatures is derived from the likelihood for streamflow (rather than using an "ad-hoc" likelihood for the signatures as done in previous approaches). This likelihood is not easily tractable analytically, and we therefore cannot apply "simple" MCMC methods. This numerical problem is solved using Approximate Bayesian Computation (ABC). Our results indicate that the proposed approach is suitable for producing reliable streamflow predictive distributions based on calibration to signature data. Moreover, our results provide indications on which signatures are more appropriate to represent the information content of the hydrograph.
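
    A rejection-ABC sketch of signature-based calibration on an FDC, with a toy one-parameter linear reservoir standing in for the hydrological model. All settings are illustrative assumptions; the study's likelihood-derived ABC is considerably more sophisticated than plain rejection.

```python
import numpy as np

rng = np.random.default_rng(11)
rain = rng.exponential(3.0, 365)

def linear_reservoir(k, rain):
    """Toy rainfall-runoff model: storage S, outflow q = k * S."""
    s, q = 0.0, np.empty(rain.size)
    for i, p in enumerate(rain):
        s += p
        q[i] = k * s
        s -= q[i]
    return q

def fdc(q, probs=np.linspace(0.05, 0.95, 19)):
    """Flow duration curve: flow quantiles used as calibration signatures."""
    return np.quantile(q, 1 - probs)

# "Observed" FDC from a known parameter with multiplicative noise.
obs_fdc = fdc(linear_reservoir(0.3, rain) * rng.lognormal(0, 0.05, 365))

# ABC rejection: sample k from the prior, keep draws with small FDC distance.
k_prior = rng.uniform(0.05, 0.95, 5000)
dist = np.array([np.linalg.norm(fdc(linear_reservoir(k, rain)) - obs_fdc)
                 for k in k_prior])
posterior = k_prior[dist < np.quantile(dist, 0.02)]   # keep the closest 2%
print(f"posterior k: median={np.median(posterior):.3f}, "
      f"95% CI=({np.quantile(posterior, .025):.3f}, {np.quantile(posterior, .975):.3f})")
```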

  1. Deciphering the enigma of undetected species, phylogenetic, and functional diversity based on Good-Turing theory.

    PubMed

    Chao, Anne; Chiu, Chun-Huo; Colwell, Robert K; Magnago, Luiz Fernando S; Chazdon, Robin L; Gotelli, Nicholas J

    2017-11-01

    Estimating the species, phylogenetic, and functional diversity of a community is challenging because rare species are often undetected, even with intensive sampling. The Good-Turing frequency formula, originally developed for cryptography, estimates in an ecological context the true frequencies of rare species in a single assemblage based on an incomplete sample of individuals. Until now, this formula has never been used to estimate undetected species, phylogenetic, and functional diversity. Here, we first generalize the Good-Turing formula to incomplete sampling of two assemblages. The original formula and its two-assemblage generalization provide a novel and unified approach to notation, terminology, and estimation of undetected biological diversity. For species richness, the Good-Turing framework offers an intuitive way to derive the non-parametric estimators of the undetected species richness in a single assemblage, and of the undetected species shared between two assemblages. For phylogenetic diversity, the unified approach leads to an estimator of the undetected Faith's phylogenetic diversity (PD, the total length of undetected branches of a phylogenetic tree connecting all species), as well as a new estimator of undetected PD shared between two phylogenetic trees. For functional diversity based on species traits, the unified approach yields a new estimator of undetected Walker et al.'s functional attribute diversity (FAD, the total species-pairwise functional distance) in a single assemblage, as well as a new estimator of undetected FAD shared between two assemblages. Although some of the resulting estimators have been previously published (but derived with traditional mathematical inequalities), all taxonomic, phylogenetic, and functional diversity estimators are now derived under the same framework. All the derived estimators are theoretically lower bounds of the corresponding undetected diversities; our approach reveals the sufficient conditions under which the estimators are nearly unbiased, thus offering new insights. Simulation results are reported to numerically verify the performance of the derived estimators. We illustrate all estimators and assess their sampling uncertainty with an empirical dataset for Brazilian rain forest trees. These estimators should be widely applicable to many current problems in ecology, such as the effects of climate change on spatial and temporal beta diversity and the contribution of trait diversity to ecosystem multi-functionality. © 2017 by the Ecological Society of America.
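
    For the single-assemblage species-richness case, the Good-Turing view that singletons (f1) and doubletons (f2) carry the information about undetected species leads to the familiar Chao1-type lower bound. A minimal sketch with simulated abundances as an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(5)
# Simulate a sample of individuals from a community with many rare species.
true_s = 300
abund = rng.dirichlet(np.full(true_s, 0.3))
sample = rng.choice(true_s, size=800, p=abund)

species, counts = np.unique(sample, return_counts=True)
s_obs = species.size
f1 = np.sum(counts == 1)          # singletons
f2 = np.sum(counts == 2)          # doubletons

# Chao1 lower bound for undetected richness: f1^2 / (2 f2),
# or the bias-corrected f1(f1-1) / (2(f2+1)) when f2 = 0.
f0_hat = f1 ** 2 / (2 * f2) if f2 > 0 else f1 * (f1 - 1) / 2
print(f"observed S = {s_obs}, estimated undetected = {f0_hat:.1f}, "
      f"Chao1 = {s_obs + f0_hat:.1f}, true S = {true_s}")
```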

  2. Accurate Initial State Estimation in a Monocular Visual–Inertial SLAM System

    PubMed Central

    Chen, Jing; Zhou, Zixiang; Leng, Zhen; Fan, Lei

    2018-01-01

    The fusion of monocular visual and inertial cues has become popular in robotics, unmanned vehicles and augmented reality fields. Recent results have shown that optimization-based fusion strategies outperform filtering strategies. Robust state estimation is the core capability for optimization-based visual–inertial Simultaneous Localization and Mapping (SLAM) systems. As a result of the nonlinearity of visual–inertial systems, the performance heavily relies on the accuracy of initial values (visual scale, gravity, velocity and Inertial Measurement Unit (IMU) biases). Therefore, this paper aims to propose a more accurate initial state estimation method. On the basis of the known gravity magnitude, we propose an approach to refine the estimated gravity vector by optimizing the two-dimensional (2D) error state on its tangent space, then estimate the accelerometer bias separately, which is difficult to be distinguished under small rotation. Additionally, we propose an automatic termination criterion to determine when the initialization is successful. Once the initial state estimation converges, the initial estimated values are used to launch the nonlinear tightly coupled visual–inertial SLAM system. We have tested our approaches with the public EuRoC dataset. Experimental results show that the proposed methods can achieve good initial state estimation, the gravity refinement approach is able to efficiently speed up the convergence process of the estimated gravity vector, and the termination criterion performs well. PMID:29419751

  3. Using river distance and existing hydrography data can improve the geostatistical estimation of fish tissue mercury at unsampled locations.

    PubMed

    Money, Eric S; Sackett, Dana K; Aday, D Derek; Serre, Marc L

    2011-09-15

    Mercury in fish tissue is a major human health concern. Consumption of mercury-contaminated fish poses risks to the general population, including potentially serious developmental defects and neurological damage in young children. Therefore, it is important to accurately identify areas that have the potential for high levels of bioaccumulated mercury. However, due to time and resource constraints, it is difficult to adequately assess fish tissue mercury on a basin wide scale. We hypothesized that, given the nature of fish movement along streams, an analytical approach that takes into account distance traveled along these streams would improve the estimation accuracy for fish tissue mercury in unsampled streams. Therefore, we used a river-based Bayesian Maximum Entropy framework (river-BME) for modern space/time geostatistics to estimate fish tissue mercury at unsampled locations in the Cape Fear and Lumber Basins in eastern North Carolina. We also compared the space/time geostatistical estimation using river-BME to the more traditional Euclidean-based BME approach, with and without the inclusion of a secondary variable. Results showed that this river-based approach reduced the estimation error of fish tissue mercury by more than 13% and that the median estimate of fish tissue mercury exceeded the EPA action level of 0.3 ppm in more than 90% of river miles for the study domain.

  4. An evaluation of risk estimation procedures for mixtures of carcinogens

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hwang, J.S.; Chen, J.J.

    1999-12-01

    The estimation of health risks from exposure to a mixture of chemical carcinogens is generally based on the combination of information from several available single-compound studies. The current practice of directly summing the upper-bound risk estimates of individual carcinogenic components as an upper bound on the total risk of a mixture is known to be generally too conservative. Gaylor and Chen (1996, Risk Analysis) proposed a simple procedure to compute an upper bound on the total risk using only the upper confidence limits and central risk estimates of individual carcinogens. The Gaylor-Chen procedure was derived based on an underlying assumption of normality for the distributions of individual risk estimates. In this paper the authors evaluated the Gaylor-Chen approach in terms of the coverage of the upper confidence limits on the true risks of individual carcinogens. In general, if the coverage probabilities for the individual carcinogens are all approximately equal to the nominal level, then the Gaylor-Chen approach should perform well. However, the Gaylor-Chen approach can be conservative or anti-conservative if some or all of the individual upper confidence limit estimates are conservative or anti-conservative.
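
    Under the normality assumption noted above, each component's upper limit satisfies UL_i = r_i + z*sd_i, so an upper bound for the mixture combining only central estimates and upper limits is sum(r_i) + sqrt(sum((UL_i - r_i)^2)), which is tighter than naive summation. A sketch of this combination rule with illustrative numbers:

```python
import numpy as np

central = np.array([1e-6, 4e-6, 2e-6])      # central risk estimates r_i
upper = np.array([5e-6, 9e-6, 6e-6])        # individual 95% upper limits UL_i

naive = upper.sum()                                          # direct summation
combined = central.sum() + np.sqrt(((upper - central) ** 2).sum())
print(f"naive sum of upper limits = {naive:.2e}")
print(f"Gaylor-Chen-style bound   = {combined:.2e}")
```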

  5. The application of mean field theory to image motion estimation.

    PubMed

    Zhang, J; Hanauer, G G

    1995-01-01

    Previously, Markov random field (MRF) model-based techniques have been proposed for image motion estimation. Since motion estimation is usually an ill-posed problem, various constraints are needed to obtain a unique and stable solution. The main advantage of the MRF approach is its capacity to incorporate such constraints, for instance, motion continuity within an object and motion discontinuity at the boundaries between objects. In the MRF approach, motion estimation is often formulated as an optimization problem, and two frequently used optimization methods are simulated annealing (SA) and iterative-conditional mode (ICM). Although the SA is theoretically optimal in the sense of finding the global optimum, it usually takes many iterations to converge. The ICM, on the other hand, converges quickly, but its results are often unsatisfactory due to its "hard decision" nature. Previously, the authors have applied the mean field theory to image segmentation and image restoration problems. It provides results nearly as good as SA but with much faster convergence. The present paper shows how the mean field theory can be applied to MRF model-based motion estimation. This approach is demonstrated on both synthetic and real-world images, where it produced good motion estimates.

  6. Estimating Agricultural Nitrous Oxide Emissions

    USDA-ARS?s Scientific Manuscript database

    Nitrous oxide emissions are highly variable in space and time and different methodologies have not agreed closely, especially at small scales. However, as scale increases, so does the agreement between estimates based on soil surface measurements (bottom up approach) and estimates derived from chang...

  7. A neural network-based estimator for the mixture ratio of the Space Shuttle Main Engine

    NASA Astrophysics Data System (ADS)

    Guo, T. H.; Musgrave, J.

    1992-11-01

    In order to properly utilize the available fuel and oxidizer of a liquid propellant rocket engine, the mixture ratio is closed-loop controlled during main stage (65 percent - 109 percent power) operation. However, because of the lack of flight-capable instrumentation for measuring mixture ratio, the value of mixture ratio in the control loop is estimated using available sensor measurements such as the combustion chamber pressure and the volumetric flow, and the temperature and pressure at the exit duct on the low pressure fuel pump. This estimation scheme has two limitations. First, the estimation formula is based on an empirical curve fit which is accurate only within a narrow operating range. Second, the mixture ratio estimate relies on a few sensor measurements, and loss of any of these measurements will make the estimate invalid. In this paper, we propose a neural network-based estimator for the mixture ratio of the Space Shuttle Main Engine. The estimator is an extension of a previously developed neural network-based sensor failure detection and recovery algorithm (sensor validation). This neural network uses an auto-associative structure which utilizes the redundant information of dissimilar sensors to detect inconsistent measurements. Two approaches have been identified for synthesizing the mixture ratio from measurement data using a neural network. The first approach uses an auto-associative neural network for sensor validation which is modified to include the mixture ratio as an additional output. The second uses a new network for the mixture ratio estimation in addition to the sensor validation network. Although mixture ratio is not directly measured in flight, it is generally available in simulation and in test bed firing data from facility measurements of fuel and oxidizer volumetric flows. The pros and cons of these two approaches will be discussed in terms of robustness to sensor failures and accuracy of the estimate during typical transients using simulation data.

  9. Angular velocity estimation from measurement vectors of star tracker.

    PubMed

    Liu, Hai-bo; Yang, Jun-cai; Yi, Wen-jun; Wang, Jiong-qi; Yang, Jian-kun; Li, Xiu-jian; Tan, Ji-chun

    2012-06-01

    In most spacecraft, there is a need to know the craft's angular rate. Approaches with least squares and an adaptive Kalman filter are proposed for estimating the angular rate directly from the star tracker measurements. In these approaches, only knowledge of the vector measurements and sampling interval is required. The designed adaptive Kalman filter can filter out noise without information of the dynamic model and inertia dyadic. To verify the proposed estimation approaches, simulations based on the orbit data of the challenging minisatellite payload (CHAMP) satellite and experimental tests with night-sky observation are performed. Both the simulations and experimental testing results have demonstrated that the proposed approach performs well in terms of accuracy, robustness, and performance.
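
    The least-squares core can be sketched directly: for an inertially fixed star direction, the body-frame unit vector b obeys db/dt = b x omega, so finite-differenced vectors from two frames give a linear system for omega. The simulated star directions, rate, and noise level are illustrative assumptions, and the paper's adaptive Kalman filter is not shown.

```python
import numpy as np

rng = np.random.default_rng(9)

def skew(v):
    return np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])

omega_true = np.array([0.01, -0.02, 0.005])       # rad/s
dt = 0.1                                          # sampling interval (s)

# Simulate body-frame star vectors at t and t+dt (first-order rotation).
stars = rng.normal(size=(6, 3))
stars /= np.linalg.norm(stars, axis=1, keepdims=True)
rot = np.eye(3) - skew(omega_true * dt)
stars_next = stars @ rot.T + rng.normal(0, 1e-5, stars.shape)

# Stack db/dt = skew(b) @ omega for all stars and solve by least squares.
A = np.vstack([skew(b) for b in stars])
y = ((stars_next - stars) / dt).ravel()
omega_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
print("true omega:", omega_true)
print("est. omega:", np.round(omega_hat, 5))
```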

  10. Inter-Individual Variability in High-Throughput Risk ...

    EPA Pesticide Factsheets

    We incorporate realistic human variability into an open-source high-throughput (HT) toxicokinetics (TK) modeling framework for use in a next-generation risk prioritization approach. Risk prioritization involves rapid triage of thousands of environmental chemicals, most of which have little or no existing TK data. Chemicals are prioritized based on model estimates of hazard and exposure, to decide which chemicals should be first in line for further study. Hazard may be estimated with in vitro HT screening assays, e.g., U.S. EPA’s ToxCast program. Bioactive ToxCast concentrations can be extrapolated to doses that produce equivalent concentrations in body tissues using a reverse TK approach in which generic TK models are parameterized with 1) chemical-specific parameters derived from in vitro measurements and predicted from chemical structure; and 2) physiological parameters for a virtual population. Here we draw physiological parameters from realistic estimates of distributions of demographic and anthropometric quantities in the modern U.S. population, based on the most recent CDC NHANES data. A Monte Carlo approach, accounting for the correlation structure in physiological parameters, is used to estimate ToxCast equivalent doses for the most sensitive portion of the population. To quantify risk, ToxCast equivalent doses are compared to estimates of exposure rates based on Bayesian inferences drawn from NHANES urinary analyte biomonitoring data. The inclusion

  11. Estimating time-based instantaneous total mortality rate based on the age-structured abundance index

    NASA Astrophysics Data System (ADS)

    Wang, Yingbin; Jiao, Yan

    2015-05-01

    The instantaneous total mortality rate (Z) of a fish population is one of the important parameters in fisheries stock assessment. The estimation of Z is crucial to fish population dynamics analysis, abundance and catch forecast, and fisheries management. A catch curve-based method for estimating time-based Z and its change trend from catch per unit effort (CPUE) data of multiple cohorts is developed. Unlike the traditional catch-curve method, the method developed here does not need the assumption of constant Z throughout the time, but the Z values in n continuous years are assumed constant, and then the Z values in different n continuous years are estimated using the age-based CPUE data within these years. The results of the simulation analyses show that the trends of the estimated time-based Z are consistent with the trends of the true Z, and the estimated rates of change from this approach are close to the true change rates (the relative differences between the change rates of the estimated Z and the true Z are smaller than 10%). Variations of both Z and recruitment can affect the estimates of Z value and the trend of Z. The most appropriate value of n can be different given the effects of different factors. Therefore, the appropriate value of n for different fisheries should be determined through a simulation analysis as we demonstrated in this study. Further analyses suggested that selectivity and age estimation are also two factors that can affect the estimated Z values if there is error in either of them, but the estimated change rates of Z are still close to the true change rates. We also applied this approach to the Atlantic cod (Gadus morhua) fishery of eastern Newfoundland and Labrador from 1983 to 1997, and obtained reasonable estimates of time-based Z.
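
    The single-window core of a catch curve is compact: under constant Z, CPUE-at-age declines as exp(-Z * age), so -Z is the slope of ln(CPUE) against age over fully recruited ages. A sketch with illustrative data (the paper's method repeats this over sliding n-year windows):

```python
import numpy as np

rng = np.random.default_rng(2)
ages = np.arange(2, 11)                      # fully recruited ages
z_true, r0 = 0.45, 5000.0                    # total mortality, recruitment scale
cpue = r0 * np.exp(-z_true * ages) * rng.lognormal(0, 0.1, ages.size)

# Slope of the log-linear catch curve gives -Z.
slope, intercept = np.polyfit(ages, np.log(cpue), 1)
print(f"estimated Z = {-slope:.3f} (true {z_true})")
```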

  12. Three Methods of Estimating a Model of Group Effects: A Comparison with Reference to School Effect Studies.

    ERIC Educational Resources Information Center

    Igra, Amnon

    1980-01-01

    Three methods of estimating a model of school effects are compared: ordinary least squares; an approach based on the analysis of covariance; and, a residualized input-output approach. Results are presented using a matrix algebra formulation, and advantages of the first two methods are considered. (Author/GK)

  13. A Robust Approach to Risk Assessment Based on Species Sensitivity Distributions.

    PubMed

    Monti, Gianna S; Filzmoser, Peter; Deutsch, Roland C

    2018-05-03

    The guidelines for setting environmental quality standards are increasingly based on probabilistic risk assessment due to a growing general awareness of the need for probabilistic procedures. One of the commonly used tools in probabilistic risk assessment is the species sensitivity distribution (SSD), which represents the proportion of species affected belonging to a biological assemblage as a function of exposure to a specific toxicant. Our focus is on the inverse use of the SSD curve with the aim of estimating the concentration, HCp, of a toxic compound that is hazardous to p% of the biological community under study. Toward this end, we propose the use of robust statistical methods in order to take into account the presence of outliers or apparent skew in the data, which may occur without any ecological basis. A robust approach exploits the full neighborhood of a parametric model, enabling the analyst to account for the typical real-world deviations from ideal models. We examine two classic HCp estimation approaches and consider robust versions of these estimators. In addition, we also use data transformations in conjunction with robust estimation methods in case of heteroscedasticity. Different scenarios using real data sets as well as simulated data are presented in order to illustrate and compare the proposed approaches. These scenarios illustrate that the use of robust estimation methods enhances HCp estimation. © 2018 Society for Risk Analysis.
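
    A minimal sketch contrasting a classic log-normal SSD fit with a simple robust (median/MAD) variant when inverting for HC5; the toxicity values and the particular robust estimator are illustrative assumptions, not the paper's estimators.

```python
import numpy as np
from scipy import stats

# Species toxicity endpoints (mg/L) with one outlying, insensitive species.
tox = np.array([1.2, 2.5, 3.1, 4.8, 6.0, 7.4, 9.9, 12.0, 15.5, 250.0])
x = np.log(tox)

# Classic fit: Gaussian on the log scale via sample mean and SD.
hc5_classic = np.exp(stats.norm.ppf(0.05, loc=x.mean(), scale=x.std(ddof=1)))

# Robust fit: median and MAD-based scale resist the outlier.
scale_rob = stats.median_abs_deviation(x, scale="normal")
hc5_robust = np.exp(stats.norm.ppf(0.05, loc=np.median(x), scale=scale_rob))

print(f"HC5 classic = {hc5_classic:.2f} mg/L, HC5 robust = {hc5_robust:.2f} mg/L")
```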

  14. Bayesian approach to estimate AUC, partition coefficient and drug targeting index for studies with serial sacrifice design.

    PubMed

    Wang, Tianli; Baron, Kyle; Zhong, Wei; Brundage, Richard; Elmquist, William

    2014-03-01

    The current study presents a Bayesian approach to non-compartmental analysis (NCA), which provides accurate and precise estimates of AUC(0-∞) and any AUC(0-∞)-based NCA parameter or derivation. In order to assess the performance of the proposed method, 1,000 simulated datasets were generated in different scenarios. A Bayesian method was used to estimate the tissue and plasma AUC(0-∞) values and the tissue-to-plasma AUC(0-∞) ratio. The posterior medians and the coverage of 95% credible intervals for the true parameter values were examined. The method was applied to laboratory data from a mouse brain distribution study with a serial sacrifice design for illustration. The Bayesian NCA approach is accurate and precise in point estimation of the AUC(0-∞) and the partition coefficient under a serial sacrifice design. It also provides a consistently good variance estimate, even considering the variability of the data and the physiological structure of the pharmacokinetic model. The application in the case study obtained a physiologically reasonable posterior distribution of AUC, with a posterior median close to the value estimated by classic Bailer-type methods. This Bayesian NCA approach for sparse data analysis provides statistical inference on the variability of AUC(0-∞)-based parameters such as the partition coefficient and drug targeting index, so that the comparison of these parameters following destructive sampling becomes statistically feasible.

  15. Proxies for soil organic carbon derived from remote sensing

    NASA Astrophysics Data System (ADS)

    Rasel, S. M. M.; Groen, T. A.; Hussin, Y. A.; Diti, I. J.

    2017-07-01

    Carbon storage in soils is of interest because soils contain more carbon than vegetation. Estimation of soil carbon through remote-sensing-based techniques can be a cost-effective approach, but is limited by available methods. This study aims to develop a model based on remotely sensed variables (elevation, forest type and above-ground biomass) to estimate soil carbon stocks. Field observations on soil organic carbon, species composition, and above-ground biomass were recorded in the subtropical forest of Chitwan, Nepal. These variables were also estimated using LiDAR data and a WorldView-2 image. Above-ground biomass was estimated from the LiDAR data using a novel approach in which the image was segmented to identify individual trees, and estimates of DBH and height were made for these trees. Based on the AIC (Akaike Information Criterion), a regression model with above-ground biomass derived from LiDAR data and forest type derived from WorldView-2 imagery was selected to estimate soil organic carbon (SOC) stocks. The selected model had a coefficient of determination (R2) of 0.69. This shows the scope for estimating SOC with remote sensing derived variables in sub-tropical forests.

  16. Similar negative impacts of temperature on global wheat yield estimated by three independent methods

    USDA-ARS?s Scientific Manuscript database

    The potential impact of global temperature change on global wheat production has recently been assessed with different methods, scaling and aggregation approaches. Here we show that grid-based simulations, point-based simulations, and statistical regressions produce similar estimates of temperature ...

  17. Reference interval estimation: Methodological comparison using extensive simulations and empirical data.

    PubMed

    Daly, Caitlin H; Higgins, Victoria; Adeli, Khosrow; Grey, Vijay L; Hamid, Jemila S

    2017-12-01

    To statistically compare and evaluate commonly used methods of estimating reference intervals and to determine which method is best based on characteristics of the distribution of various data sets. Three approaches for estimating reference intervals, i.e. parametric, non-parametric, and robust, were compared with simulated Gaussian and non-Gaussian data. The hierarchy of the performances of each method was examined based on bias and measures of precision. The findings of the simulation study were illustrated through real data sets. In all Gaussian scenarios, the parametric approach provided the least biased and most precise estimates. In non-Gaussian scenarios, no single method provided the least biased and most precise estimates for both limits of a reference interval across all sample sizes, although the non-parametric approach performed the best for most scenarios. The hierarchy of the performances of the three methods was only impacted by sample size and skewness. Differences between reference interval estimates established by the three methods were inflated by variability. Whenever possible, laboratories should attempt to transform data to a Gaussian distribution and use the parametric approach to obtain the most optimal reference intervals. When this is not possible, laboratories should consider sample size and skewness as factors in their choice of reference interval estimation method. The consequences of false positives or false negatives may also serve as factors in this decision. Copyright © 2017 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
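
    The two classic estimators compared above are easy to state; a minimal sketch with simulated Gaussian analyte data (the robust variant evaluated in the paper is omitted):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
values = rng.normal(5.0, 0.8, 240)           # healthy-reference analyte results

# Parametric: mean +/- 1.96 SD (valid when the data are Gaussian).
lo_p = values.mean() - 1.96 * values.std(ddof=1)
hi_p = values.mean() + 1.96 * values.std(ddof=1)

# Non-parametric: empirical 2.5th and 97.5th percentiles.
lo_np, hi_np = np.percentile(values, [2.5, 97.5])

print(f"parametric RI     = ({lo_p:.2f}, {hi_p:.2f})")
print(f"non-parametric RI = ({lo_np:.2f}, {hi_np:.2f})")
# A normality check helps decide between the two (or whether to transform):
print("Shapiro-Wilk p =", round(stats.shapiro(values).pvalue, 3))
```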

  18. Derived Optimal Linear Combination Evapotranspiration (DOLCE): a global gridded synthesis ET estimate

    NASA Astrophysics Data System (ADS)

    Hobeichi, Sanaa; Abramowitz, Gab; Evans, Jason; Ukkola, Anna

    2018-02-01

    Accurate global gridded estimates of evapotranspiration (ET) are key to understanding water and energy budgets, in addition to being required for model evaluation. Several gridded ET products have already been developed which differ in their data requirements, the approaches used to derive them and their estimates, yet it is not clear which provides the most reliable estimates. This paper presents a new global ET dataset and associated uncertainty with monthly temporal resolution for 2000-2009. Six existing gridded ET products are combined using a weighting approach trained by observational datasets from 159 FLUXNET sites. The weighting method is based on a technique that provides an analytically optimal linear combination of ET products compared to site data and accounts for both the performance differences and error covariance between the participating ET products. We examine the performance of the weighting approach in several in-sample and out-of-sample tests that confirm that point-based estimates of flux towers provide information on the grid scale of these products. We also provide evidence that the weighted product performs better than its six constituent ET product members in four common metrics. Uncertainty in the ET estimate is derived by rescaling the spread of participating ET products so that their spread reflects the ability of the weighted mean estimate to match flux tower data. While issues in observational data and any common biases in participating ET datasets are limitations to the success of this approach, future datasets can easily be incorporated and enhance the derived product.
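
    A sketch of the analytically optimal linear combination at the heart of such a weighting: with bias-removed product errors of covariance A, the weights constrained to sum to one are w = A^{-1}1 / (1'A^{-1}1). The three synthetic "products" with correlated errors are illustrative assumptions, and the full bias-correction step is omitted.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 500
truth = 3 + np.sin(np.linspace(0, 8, n))                 # flux-tower "ET"
err = rng.multivariate_normal([0, 0, 0],
                              [[0.30, 0.10, 0.02],
                               [0.10, 0.25, 0.05],
                               [0.02, 0.05, 0.60]], n)
products = truth[:, None] + err                          # bias-free for brevity

A = np.cov((products - truth[:, None]).T)                # error covariance
ones = np.ones(3)
w = np.linalg.solve(A, ones)
w /= ones @ w                                            # enforce sum(w) = 1
combined = products @ w

rmse = lambda x: np.sqrt(np.mean((x - truth) ** 2))
print("weights:", np.round(w, 3))
print("product RMSEs:", [round(rmse(products[:, j]), 3) for j in range(3)])
print("combined RMSE:", round(rmse(combined), 3))
```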

  19. Gray-world-assumption-based illuminant color estimation using color gamuts with high and low chroma

    NASA Astrophysics Data System (ADS)

    Kawamura, Harumi; Yonemura, Shunichi; Ohya, Jun; Kojima, Akira

    2013-02-01

    A new approach is proposed for estimating illuminant colors from color images under an unknown scene illuminant. The approach is based on a combination of a gray-world-assumption-based illuminant color estimation method and a method using color gamuts. The former method, which we had previously proposed, improves on the original gray-world method, which hypothesizes that the average of all the object colors in a scene is achromatic. Since the original method estimates scene illuminant colors by calculating the average of all the image pixel values, its estimations are incorrect when certain image colors are dominant. Our previous method improves on it by choosing several colors on the basis of an opponent-color property, namely that the average of opponent colors is achromatic, instead of using all colors. However, it cannot estimate illuminant colors when there are only a few image colors or when the image colors are unevenly distributed in local areas of the color space. The approach we propose in this paper combines our previous method with one using high-chroma and low-chroma gamuts, which makes it possible to find colors that satisfy the gray world assumption. High-chroma gamuts are used for adding appropriate colors to the original image, and low-chroma gamuts are used for narrowing down illuminant color possibilities. Experimental results obtained using actual images show that even if the image colors are localized in a certain area of the color space, the illuminant colors are accurately estimated, with a smaller average estimation error than the conventional method.
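
    The baseline gray-world estimate that the proposed method refines can be sketched in a few lines; the diagonal image-formation model and synthetic reflectances are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
reflectances = rng.uniform(0, 1, (64, 64, 3))     # scene surface colors
illuminant = np.array([1.0, 0.8, 0.6])            # warm unknown illuminant
img = reflectances * illuminant                   # simple diagonal model

# Gray-world: the channel-wise image mean is proportional to the illuminant.
est = img.mean(axis=(0, 1))
est /= est.max()                                  # normalize for comparison
print("true illuminant (normalized):", illuminant / illuminant.max())
print("gray-world estimate:        ", np.round(est, 3))
```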

  20. New estimation architecture for multisensor data fusion

    NASA Astrophysics Data System (ADS)

    Covino, Joseph M.; Griffiths, Barry E.

    1991-07-01

    This paper describes a novel method of hierarchical asynchronous distributed filtering called the Net Information Approach (NIA). The NIA is a Kalman-filter-based estimation scheme for spatially distributed sensors which must retain their local optimality yet require a nearly optimal global estimate. The key idea of the NIA is that each local sensor-dedicated filter tells the global filter 'what I've learned since the last local-to-global transmission,' whereas in other estimation architectures the local-to-global transmission consists of 'what I think now.' An algorithm based on this idea has been demonstrated on a small-scale target-tracking problem with many encouraging results. Feasibility of this approach was demonstrated by comparing NIA performance to an optimal centralized Kalman filter (lower bound) via Monte Carlo simulations.

  1. A Comparison of the Approaches of Generalizability Theory and Item Response Theory in Estimating the Reliability of Test Scores for Testlet-Composed Tests

    ERIC Educational Resources Information Center

    Lee, Guemin; Park, In-Yong

    2012-01-01

    Previous assessments of the reliability of test scores for testlet-composed tests have indicated that item-based estimation methods overestimate reliability. This study was designed to address issues related to the extent to which item-based estimation methods overestimate the reliability of test scores composed of testlets and to compare several…

  2. Practical state of health estimation of power batteries based on Delphi method and grey relational grade analysis

    NASA Astrophysics Data System (ADS)

    Sun, Bingxiang; Jiang, Jiuchun; Zheng, Fangdan; Zhao, Wei; Liaw, Bor Yann; Ruan, Haijun; Han, Zhiqiang; Zhang, Weige

    2015-05-01

    State of health (SOH) estimation is critical for a battery management system to ensure the safety and reliability of EV battery operation. Here, we used a unique hybrid approach to enable complex SOH estimations. The approach hybridizes the Delphi method, known for its simplicity and effectiveness in applying weighting factors for complicated decision-making, and the grey relational grade analysis (GRGA) for multi-factor optimization. Six critical factors were used in the consideration for SOH estimation: peak power at 30% state-of-charge (SOC); capacity; the voltage drop at 30% SOC with a C/3 pulse; the temperature rises at the end of discharge and at the end of charge at 1C, respectively; and the open-circuit voltage at the end of charge after a 1-h rest. The weighting of these factors for SOH estimation was scored by the 'experts' in the Delphi method, indicating the influencing power of each factor on SOH. The parameters for these factors expressing the battery state variations are optimized by GRGA. Eight battery cells were used to illustrate the principle and methodology for estimating SOH by this hybrid approach, and the results were compared with those based on capacity and power capability. The contrast among different SOH estimations is discussed.
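
    A sketch of the GRGA step using Deng's grey relational coefficient combined with Delphi-style expert weights; the normalized factor matrix, the weights, and the distinguishing coefficient are illustrative assumptions, not the paper's values.

```python
import numpy as np

rng = np.random.default_rng(14)
cells = rng.uniform(0.6, 1.0, size=(8, 6))        # 8 cells x 6 normalized factors
weights = np.array([0.25, 0.25, 0.15, 0.12, 0.12, 0.11])  # Delphi-style weights

ref = cells.max(axis=0)                           # reference (ideal) sequence
delta = np.abs(cells - ref)
rho = 0.5                                         # distinguishing coefficient
# Deng's grey relational coefficient, then the weighted grade per cell.
coef = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
grade = coef @ weights
print("per-cell grey relational grades:", np.round(grade, 3))
```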

  3. Accounting for estimated IQ in neuropsychological test performance with regression-based techniques.

    PubMed

    Testa, S Marc; Winicki, Jessica M; Pearlson, Godfrey D; Gordon, Barry; Schretlen, David J

    2009-11-01

    Regression-based normative techniques account for variability in test performance associated with multiple predictor variables and generate expected scores based on algebraic equations. Using this approach, we show that estimated IQ, based on oral word reading, accounts for 1-9% of the variability beyond that explained by individual differences in age, sex, race, and years of education for most cognitive measures. These results confirm that adding estimated "premorbid" IQ to demographic predictors in multiple regression models can incrementally improve the accuracy with which regression-based norms (RBNs) benchmark expected neuropsychological test performance in healthy adults. It remains to be seen whether the incremental variance in test performance explained by estimated "premorbid" IQ translates to improved diagnostic accuracy in patient samples. We describe these methods, and illustrate the step-by-step application of RBNs with two cases. We also discuss the rationale, assumptions, and caveats of this approach. More broadly, we note that adjusting test scores for age and other characteristics might actually decrease the accuracy with which test performance predicts absolute criteria, such as the ability to drive or live independently.
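
    The mechanics of an RBN are a single algebraic prediction equation plus a standardized residual. The sketch below uses hypothetical coefficients (sex and race terms omitted for brevity) purely to show the computation; it is not a published norm.

        def rbn_z(observed, age, years_edu, est_iq, coefs, rmse):
            """Regression-based norm with estimated IQ as a predictor.

            expected = b0 + b1*age + b2*education + b3*estimated_IQ;
            the z-score benchmarks the observed score against that
            expectation. Coefficients are illustrative placeholders.
            """
            b0, b1, b2, b3 = coefs
            expected = b0 + b1 * age + b2 * years_edu + b3 * est_iq
            return (observed - expected) / rmse

        z = rbn_z(observed=42, age=67, years_edu=12, est_iq=104,
                  coefs=(20.0, -0.15, 0.8, 0.12), rmse=6.5)
        print(f"z = {z:.2f}")   # ~1.5 SD above the expected score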

  4. Wavelet Analysis for Wind Fields Estimation

    PubMed Central

    Leite, Gladeston C.; Ushizima, Daniela M.; Medeiros, Fátima N. S.; de Lima, Gilson G.

    2010-01-01

    Wind field analysis from synthetic aperture radar images allows the estimation of wind direction and speed based on image descriptors. In this paper, we propose a framework to automate wind direction retrieval based on wavelet decomposition associated with spectral processing. We extend existing undecimated wavelet transform approaches by including à trous with a B3 spline scaling function, in addition to other wavelet bases such as Gabor and Mexican-hat. The purpose is to extract more reliable directional information when wind speed values range from 5 to 10 m s−1. Using C-band empirical models associated with the estimated directional information, we calculate local wind speed values and compare our results with QuikSCAT scatterometer data. The proposed approach has potential application in the evaluation of oil spills and wind farms. PMID:22219699

  5. Colorectal Cancer Deaths Attributable to Nonuse of Screening in the United States

    PubMed Central

    Meester, Reinier G.S.; Doubeni, Chyke A.; Lansdorp-Vogelaar, Iris; Goede, S.L.; Levin, Theodore R.; Quinn, Virginia P.; van Ballegooijen, Marjolein; Corley, Douglas A.; Zauber, Ann G.

    2015-01-01

    Purpose Screening is a major contributor to colorectal cancer (CRC) mortality reductions in the U.S., but is underutilized. We estimated the fraction of CRC deaths attributable to nonuse of screening to demonstrate the potential benefits from targeted interventions. Methods The established MISCAN-colon microsimulation model was used to estimate the population attributable fraction (PAF) in people aged ≥50 years. The model incorporates long-term patterns and effects of screening by age and type of screening test. PAF for 2010 was estimated using currently available data on screening uptake; PAF was also projected assuming constant future screening rates to incorporate lagged effects from past increases in screening uptake. We also computed PAF using Levin's formula to gauge how this simpler approach differs from the model-based approach. Results There were an estimated 51,500 CRC deaths in 2010, about 63% (N∼32,200) of which were attributable to non-screening. The PAF decreases slightly to 58% in 2020. Levin's approach yielded a considerably more conservative PAF of 46% (N∼23,600) for 2010. Conclusions The majority of current U.S. CRC deaths are attributable to non-screening. This underscores the potential benefits of increasing screening uptake in the population. Traditional methods of estimating PAF underestimated screening effects compared with model-based approaches. PMID:25721748
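
    Levin's formula, the simpler comparator in this study, is a one-liner. The sketch below uses illustrative inputs, not the study's actual prevalence or relative risk.

        def levin_paf(prevalence, relative_risk):
            """Levin's population attributable fraction.

            PAF = p*(RR - 1) / (p*(RR - 1) + 1), with p the prevalence
            of the exposure (nonuse of screening) and RR the relative
            risk of CRC death among the unscreened.
            """
            excess = prevalence * (relative_risk - 1.0)
            return excess / (excess + 1.0)

        # Illustrative inputs: 40% unscreened, RR = 3 -> PAF ~ 0.44.
        print(f"PAF = {levin_paf(0.40, 3.0):.2f}")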

  6. Can metric-based approaches really improve multi-model climate projections? A perfect model framework applied to summer temperature change in France.

    NASA Astrophysics Data System (ADS)

    Boé, Julien; Terray, Laurent

    2014-05-01

    Ensemble approaches for climate change projections have become ubiquitous. Because of large model-to-model variations and, generally, a lack of rationale for the choice of a particular climate model against others, it is widely accepted that future climate change and its impacts should not be estimated based on a single climate model. Generally, as a default approach, the multi-model ensemble mean (MMEM) is considered to provide the best estimate of climate change signals. The MMEM approach is based on the implicit hypothesis that all the models provide equally credible projections of future climate change. This hypothesis is unlikely to be true, and ideally one would want to give more weight to more realistic models. A major issue with this alternative approach lies in the assessment of the relative credibility of future climate projections from different climate models, as they can only be evaluated against present-day observations: which present-day metric(s) should be used to decide which models are "good" and which models are "bad" in the future climate? Once a supposedly informative metric has been found, other issues arise. What is the best statistical method to combine multiple model results, taking into account their relative credibility as measured by a given metric? How can one be sure in the end that the metric-based estimate of future climate change is not in fact less realistic than the MMEM? It is impossible to provide strict answers to those questions in the climate change context. Yet, in this presentation, we propose a methodological approach based on a perfect model framework that could bring some useful elements of answer to the questions previously mentioned. The basic idea is to take a random climate model in the ensemble and treat it as if it were the truth (results of this model, in both past and future climate, are called "synthetic observations"). Then, all the other members from the multi-model ensemble are used to derive, through a metric-based approach, a posterior estimate of climate change based on the synthetic observation of the metric. Finally, it is possible to compare the posterior estimate to the synthetic observation of future climate change to evaluate the skill of the method. The main objective of this presentation is to describe and apply this perfect model framework to test different methodological issues associated with non-uniform model weighting and similar metric-based approaches. The methodology presented is general, but will be applied to the specific case of summer temperature change in France, for which previous works have suggested potentially useful metrics associated with soil-atmosphere and cloud-temperature interactions. The relative performances of different simple statistical approaches to combine multiple model results based on metrics will be tested. The impact of ensemble size, observational errors, internal variability, and model similarity will be characterized. The potential improvements of metric-based approaches compared to the MMEM in terms of errors and uncertainties will be quantified.
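
    One of the simple statistical approaches that such a perfect model framework can evaluate is Gaussian down-weighting of models by their present-day metric error. This is a schematic sketch with made-up numbers, not the presentation's method.

        import numpy as np

        def metric_weighted_change(metric_errors, changes, sigma_d):
            """Weighted estimate of future change from a present-day metric.

            Each model gets a Gaussian weight from the distance between
            its present-day metric and the (synthetic) observation; the
            weighted mean replaces the plain MMEM. sigma_d sets how
            sharply models are down-weighted and is exactly the kind of
            choice the perfect model framework can test.
            """
            w = np.exp(-0.5 * (np.asarray(metric_errors) / sigma_d) ** 2)
            return float(w @ np.asarray(changes) / w.sum())

        errors = [0.1, 0.8, 0.3, 1.5]    # K, metric distance to observation
        warming = [3.2, 4.5, 3.6, 5.1]   # K, projected summer warming
        print(metric_weighted_change(errors, warming, sigma_d=0.5))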

  7. Recreational Stream Crossing Effects on Sediment Delivery and Macroinvertebrates in Southwestern Virginia, USA

    NASA Astrophysics Data System (ADS)

    Kidd, Kathryn R.; Aust, W. Michael; Copenheaver, Carolyn A.

    2014-09-01

    Trail-based recreation has increased over recent decades, raising the environmental management issue of soil erosion that originates from unsurfaced, recreational trail systems. Trail-based soil erosion that occurs near stream crossings represents a non-point source of pollution to streams. We modeled soil erosion rates along multiple-use (hiking, mountain biking, and horseback riding) recreational trails that approach culvert and ford stream crossings as potential sources of sediment input and evaluated whether recreational stream crossings were impacting water quality based on downstream changes in macroinvertebrate-based indices within the Poverty Creek Trail System of the George Washington and Jefferson National Forest in southwestern Virginia, USA. We found modeled soil erosion rates for non-motorized recreational approaches that were lower than published estimates for an off-road vehicle approach, bare horse trails, and bare forest operational skid trail and road approaches, but were 13 times greater than estimated rates for undisturbed forests and 2.4 times greater than for a 2-year-old clearcut in this region. Estimated soil erosion rates were similar to rates for skid trails and horse trails where best management practices (BMPs) had been implemented. Downstream changes in macroinvertebrate-based indices indicated water quality was lower downstream from crossings than in upstream reference reaches. Our modeled soil erosion rates illustrate that recreational stream crossing approaches have the potential to deliver sediment into adjacent streams, particularly where BMPs are not being implemented or where approaches are not properly managed, and as a result can negatively impact water quality below stream crossings.

  8. An index approach to performance-based payments for water quality.

    PubMed

    Maille, Peter; Collins, Alan R

    2012-05-30

    In this paper we describe elements of a field research project that presented farmers with economic incentives to control nitrate runoff. The approach used is novel in that payments are based on ambient water quality and water quantity produced by a watershed rather than proxies for water quality conservation. Also, payments are made based on water quality relative to a control watershed, and therefore, account for stochastic fluctuations in background nitrate levels. Finally, the program pays farmers as a group to elicit team behavior. We present our approach to modeling that allowed us to estimate prices for water and resulting payment levels. We then compare these preliminary estimates to the actual values recorded over 33 months of fieldwork. We find that our actual payments were 29% less than our preliminary estimates, due in part to the failure of our ecological model to estimate discharge accurately. Despite this shortfall, the program attracted the participation of 53% of the farmers in the watershed, and resulted in substantial nitrate abatement activity. Given this favorable response, we propose that research efforts focus on implementing field trials of group-level performance-based payments. Ideally these programs would be low risk and control for naturally occurring contamination. Copyright © 2012 Elsevier Ltd. All rights reserved.

  9. Using groundwater levels to estimate recharge

    USGS Publications Warehouse

    Healy, R.W.; Cook, P.G.

    2002-01-01

    Accurate estimation of groundwater recharge is extremely important for proper management of groundwater systems. Many different approaches exist for estimating recharge. This paper presents a review of methods that are based on groundwater-level data. The water-table fluctuation method may be the most widely used technique for estimating recharge; it requires knowledge of specific yield and changes in water levels over time. Advantages of this approach include its simplicity and an insensitivity to the mechanism by which water moves through the unsaturated zone. Uncertainty in estimates generated by this method relate to the limited accuracy with which specific yield can be determined and to the extent to which assumptions inherent in the method are valid. Other methods that use water levels (mostly based on the Darcy equation) are also described. The theory underlying the methods is explained. Examples from the literature are used to illustrate applications of the different methods.
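
    The water-table fluctuation method itself reduces to R = Sy·ΔH. The sketch below sums positive level rises as a simple stand-in for the rise above the extrapolated antecedent recession; the water levels are illustrative values.

        import numpy as np

        def wtf_recharge(levels, specific_yield):
            """Water-table fluctuation estimate of recharge, R = Sy * dH.

            Sums positive water-level rises as a stand-in for the rise
            above the extrapolated antecedent recession; real
            applications extrapolate the recession curve first.
            levels: water-table elevations (m) at regular intervals.
            """
            rises = np.diff(levels)
            return specific_yield * rises[rises > 0].sum()

        levels = np.array([10.00, 9.98, 10.12, 10.10, 10.25, 10.22])  # m
        print(f"recharge ~ {1000 * wtf_recharge(levels, 0.15):.0f} mm")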

  10. A Non-Stationary Approach for Estimating Future Hydroclimatic Extremes Using Monte-Carlo Simulation

    NASA Astrophysics Data System (ADS)

    Byun, K.; Hamlet, A. F.

    2017-12-01

    There is substantial evidence that observed hydrologic extremes (e.g. floods, extreme stormwater events, and low flows) are changing and that climate change will continue to alter the probability distributions of hydrologic extremes over time. These non-stationary risks imply that conventional approaches for designing hydrologic infrastructure (or making other climate-sensitive decisions) based on retrospective analysis and stationary statistics will become increasingly problematic through time. To develop a framework for assessing risks in a non-stationary environment, our study develops a new approach using a super ensemble of simulated hydrologic extremes based on Monte Carlo (MC) methods. Specifically, using statistically downscaled future GCM projections from the CMIP5 archive (using the Hybrid Delta (HD) method), we extract daily precipitation (P) and temperature (T) at 1/16 degree resolution based on a group of moving 30-yr windows within a given design lifespan (e.g. 10, 25, 50-yr). Using these T and P scenarios we simulate daily streamflow using the Variable Infiltration Capacity (VIC) model for each year of the design lifespan and fit a Generalized Extreme Value (GEV) probability distribution to the simulated annual extremes. MC experiments are then used to construct a random series of 10,000 realizations of the design lifespan, estimating annual extremes using the estimated unique GEV parameters for each individual year of the design lifespan. Our preliminary results for two watersheds in the Midwest show that there are considerable differences in the extreme values for a given percentile between the conventional MC and non-stationary MC approaches. Design standards based on our non-stationary approach are also directly dependent on the design lifespan of infrastructure, a sensitivity which is notably absent from conventional approaches based on retrospective analysis. The experimental approach can be applied to a wide range of hydroclimatic variables of interest.
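
    The core of the super ensemble construction is drawing each year of the design lifespan from that year's own GEV fit. The following is a schematic sketch with an invented upward drift in the location parameter, not the study's fitted values.

        import numpy as np
        from scipy.stats import genextreme

        def lifespan_extremes(gev_params_by_year, n_real=10000, seed=0):
            """Super ensemble of annual extremes over a design lifespan.

            gev_params_by_year: one (shape, loc, scale) tuple per year,
            each fitted to that year's simulated annual maxima, so the
            distribution drifts through the lifespan (non-stationarity).
            Returns an array of shape (n_real, lifespan_years).
            """
            rng = np.random.default_rng(seed)
            cols = [genextreme.rvs(c, loc=m, scale=s, size=n_real,
                                   random_state=rng)
                    for (c, m, s) in gev_params_by_year]
            return np.column_stack(cols)

        # 25-yr design lifespan with the location parameter trending up:
        params = [(-0.1, 100 + 0.8 * t, 15.0) for t in range(25)]
        worst = lifespan_extremes(params).max(axis=1)  # worst event per life
        print("99th percentile design event:", np.percentile(worst, 99))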

  11. Characterization of the planetary boundary layer height and structure by Raman lidar: comparison of different approaches

    NASA Astrophysics Data System (ADS)

    Summa, D.; Di Girolamo, P.; Stelitano, D.; Cacciani, M.

    2013-12-01

    The planetary boundary layer (PBL) includes the portion of the atmosphere which is directly influenced by the presence of the earth's surface. Aerosol particles trapped within the PBL can be used as tracers to study the boundary-layer vertical structure and time variability. As a result of this, elastic backscatter signals collected by lidar systems can be used to determine the height and the internal structure of the PBL. The present analysis considers three different methods to estimate the PBL height. The first method is based on the determination of the first-order derivative of the logarithm of the range-corrected elastic lidar signals. Estimates of the PBL height for specific case studies obtained through this approach are compared with simultaneous estimates from the potential temperature profiles measured by radiosondes launched simultaneously to lidar operation. Additional estimates of the boundary layer height are based on the determination of the first-order derivative of the range-corrected rotational Raman lidar signals. This latter approach also proves to be successfully applicable in the afternoon-evening decaying phase of the PBL, when the effectiveness of the approach based on the elastic lidar signals may be compromised or altered by the presence of the residual layer. Results from these different approaches are compared and discussed in the paper, with a specific focus on selected case studies collected by the University of Basilicata Raman lidar system BASIL during the Convective and Orographically-induced Precipitation Study (COPS).

  12. Characterization of the planetary boundary layer height and structure by Raman lidar: comparison of different approaches

    NASA Astrophysics Data System (ADS)

    Summa, D.; Di Girolamo, P.; Stelitano, D.; Cacciani, M.

    2013-06-01

    The Planetary Boundary Layer (PBL) includes the portion of the atmosphere which is directly influenced by the presence of the Earth's surface. Aerosol particles trapped within the PBL can be used as tracers to study the boundary-layer vertical structure and time variability. As a result of this, elastic backscatter signals collected by lidar systems can be used to determine the height and the internal structure of the PBL. The present analysis considers three different methods to estimate the PBL height. A first method is based on the determination of the first-order derivative of the logarithm of the range-corrected elastic lidar signals. Estimates of the PBL height for specific case studies obtained from this approach are compared with simultaneous estimates from the potential temperature profiles measured by radiosondes launched simultaneously to lidar operation. Additional estimates of the boundary layer height are based on the determination of the first-order derivative of the range-corrected rotational Raman lidar signals. This latter approach also proves to be successfully applicable in the afternoon-evening decaying phase of the PBL, when the effectiveness of the approach based on the elastic lidar signals may be compromised or altered by the presence of the residual layer. Results from these different approaches are compared and discussed in the paper, with a specific focus on selected case studies collected by the University of Basilicata Raman lidar system BASIL during the Convective and Orographically-induced Precipitation Study (COPS).
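
    The first method, shared by both versions of this work, amounts to locating the minimum of the first derivative of the logarithm of the range-corrected signal. Below is a minimal sketch on a synthetic profile; real retrievals smooth the data and screen for clouds and residual layers first.

        import numpy as np

        def pbl_height(height, signal):
            """PBL top from d/dz of the log range-corrected signal.

            RCS = P(z) * z**2 tracks aerosol backscatter; the strongest
            negative gradient of ln(RCS) marks the top of the
            aerosol-laden layer.
            """
            rcs = signal * height**2
            dlog = np.gradient(np.log(rcs), height)
            return height[np.argmin(dlog)]

        z = np.linspace(200.0, 4000.0, 400)                 # range gates, m
        aerosol = 1.0 / (1 + np.exp((z - 1500.0) / 80.0)) + 0.05
        print(f"PBL height ~ {pbl_height(z, aerosol / z**2):.0f} m")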

  13. Accounting for Uncertainty in Decision Analytic Models Using Rank Preserving Structural Failure Time Modeling: Application to Parametric Survival Models.

    PubMed

    Bennett, Iain; Paracha, Noman; Abrams, Keith; Ray, Joshua

    2018-01-01

    Rank Preserving Structural Failure Time models are one of the most commonly used statistical methods to adjust for treatment switching in oncology clinical trials. The method is often applied in a decision analytic model without appropriately accounting for additional uncertainty when determining the allocation of health care resources. The aim of the study is to describe novel approaches to adequately account for uncertainty when using a Rank Preserving Structural Failure Time model in a decision analytic model. Using two examples, we tested and compared the performance of the novel test-based method with the resampling bootstrap method and with the conventional approach of no adjustment. In the first example, we simulated life expectancy using a simple decision analytic model based on a hypothetical oncology trial with treatment switching. In the second example, we applied the adjustment method on published data when no individual patient data were available. Mean estimates of overall and incremental life expectancy were similar across methods. However, the bootstrapped and test-based estimates consistently produced greater estimates of uncertainty compared with the estimate without any adjustment applied. Similar results were observed when using the test-based approach on published data, showing that failing to adjust for uncertainty led to smaller confidence intervals. Both the bootstrapping and test-based approaches provide a solution to appropriately incorporate uncertainty, with the benefit that the latter can be implemented by researchers in the absence of individual patient data. Copyright © 2018 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
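
    The resampling bootstrap referenced here is generic enough to sketch: re-run the estimator on resampled patients and take percentile limits. The snippet below is a schematic illustration with synthetic survival times, not the paper's RPSFT pipeline.

        import numpy as np

        def bootstrap_ci(values, estimator, n_boot=2000, alpha=0.05, seed=6):
            """Percentile bootstrap interval for a model output.

            Re-runs the estimator on resampled patients; the spread of
            the resampled estimates propagates sampling uncertainty into
            the decision-analytic model.
            """
            rng = np.random.default_rng(seed)
            v = np.asarray(values)
            stats = [estimator(rng.choice(v, size=len(v), replace=True))
                     for _ in range(n_boot)]
            return np.percentile(stats, [100 * alpha / 2,
                                         100 * (1 - alpha / 2)])

        months = np.random.default_rng(7).exponential(24.0, size=150)
        print("95% CI for mean survival:", bootstrap_ci(months, np.mean))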

  14. Reconciling Basin-Scale Top-Down and Bottom-Up Methane Emission Measurements for Onshore Oil and Gas Development: Cooperative Research and Development Final Report, CRADA Number CRD-14-572

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heath, Garvin A.

    The overall objective of the Research Partnership to Secure Energy for America (RPSEA)-funded research project is to develop independent estimates of methane emissions using top-down and bottom-up measurement approaches and then to compare the estimates, including consideration of uncertainty. Such approaches will be applied at two scales: basin and facility. At facility scale, multiple methods will be used to measure methane emissions of the whole facility (controlled dual tracer and single tracer releases, aircraft-based mass balance and Gaussian back-trajectory), which are considered top-down approaches. The bottom-up approach will sum emissions from identified point sources measured using appropriate source-level measurement techniques (e.g., high-flow meters). At basin scale, the top-down estimate will come from boundary layer airborne measurements upwind and downwind of the basin, using a regional mass balance model plus approaches to separate atmospheric methane emissions attributed to the oil and gas sector. The bottom-up estimate will result from statistical modeling (also known as scaling up) of measurements made at selected facilities, with gaps filled through measurements and other estimates based on other studies. The relative comparison of the bottom-up and top-down estimates made at both scales will help improve understanding of the accuracy of the tested measurement and modeling approaches. The subject of this CRADA is NREL's contribution to the overall project. This project resulted from winning a competitive solicitation no. RPSEA RFP2012UN001, proposal no. 12122-95, which is the basis for the overall project. This Joint Work Statement (JWS) details the contributions of NREL and Colorado School of Mines (CSM) in performance of the CRADA effort.

  15. Estimation of gestational age in early pregnancy from crown-rump length when gestational age range is truncated: the case study of the INTERGROWTH-21st Project

    PubMed Central

    2013-01-01

    Background Fetal ultrasound scanning is considered vital for routine antenatal care, with first trimester scans recommended for accurate estimation of gestational age (GA). A reliable estimate of gestational age is key information underpinning clinical care and allows estimation of the expected date of delivery. Fetal crown-rump length (CRL) is recommended over last menstrual period for estimating GA when measured in early pregnancy, i.e. 9+0-13+6 weeks. Methods The INTERGROWTH-21st Project is the largest prospective study to collect data on CRL in geographically diverse populations and with a high level of quality control measures in place. We aim to develop a new gestational age estimation equation based on the crown-rump length (CRL) from women recruited between 9+0-13+6 weeks. The main statistical challenge is modelling data when the outcome variable (GA) is truncated at both ends, i.e. at 9 and 14 weeks. We explored three alternative statistical approaches to overcome the truncation of GA. To evaluate these strategies, we generated a data set with no truncation of GA that was similar to the INTERGROWTH-21st Project CRL data, which we used to explore the performance of different methods of analysis of these data when we imposed truncation at 9 and 14 weeks of gestation. These three methods were first tested in a simulation-based study using a previously published dating equation by Verburg et al., and we evaluated how well each of them performed in relation to the model from which the data were generated. After evaluating the three approaches using simulated data based on the Verburg equations, the best approach will be applied to the INTERGROWTH-21st Project data to estimate GA from CRL. Results Results of these rather “ad hoc” statistical methods correspond very closely to the “real data” for Verburg, a data set that is similar to the INTERGROWTH-21st Project CRL data set. Conclusions We are confident that we can use these approaches to get reliable estimates based on INTERGROWTH-21st Project CRL data. These approaches may be a solution to other truncation problems involving similar data, though their application to other settings would need to be evaluated. PMID:24314232

  16. Combining the Hanning windowed interpolated FFT in both directions

    NASA Astrophysics Data System (ADS)

    Chen, Kui Fu; Li, Yan Feng

    2008-06-01

    The interpolated fast Fourier transform (IFFT) has been proposed as a way to eliminate the picket fence effect (PFE) of the fast Fourier transform. The modulus-based IFFT, cited in most relevant references, makes use of only the 1st and 2nd highest spectral lines. An approach using three principal spectral lines is proposed. This new approach combines both directions of the complex-spectrum-based IFFT with the Hanning window. The optimal weight to minimize the estimation variance is established from the first-order Taylor series expansion of the noise interference. A numerical simulation is carried out, and the results are compared with the Cramer-Rao bound. It is demonstrated that the proposed approach has a lower estimation variance than the two-spectral-line approach. The improvement depends on the extent to which the sampling deviates from the coherent condition, at best reducing the variance by 2/7. However, it is also shown that the estimation variance of the Hanning-windowed IFFT is significantly higher than that without windowing.
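
    For context, the classical two-spectral-line, modulus-based IFFT with a Hanning window that the paper improves upon can be sketched as follows; the correction delta = (2*alpha − 1)/(alpha + 1) is exact for a noiseless Hann-windowed tone, and the function name is our own.

        import numpy as np

        def hann_ifft_frequency(x, fs):
            """Two-spectral-line interpolated FFT with a Hanning window.

            Locates the highest magnitude line k, takes its larger
            neighbor, and corrects the bin frequency by
            delta = (2*alpha - 1) / (alpha + 1), alpha = |X[n]| / |X[k]|.
            """
            n = len(x)
            X = np.abs(np.fft.rfft(x * np.hanning(n)))
            k = int(np.argmax(X[1:-1])) + 1           # skip DC and Nyquist
            sign = 1 if X[k + 1] >= X[k - 1] else -1  # pick larger neighbor
            alpha = X[k + sign] / X[k]
            delta = sign * (2.0 * alpha - 1.0) / (alpha + 1.0)
            return (k + delta) * fs / n

        fs, f0 = 1000.0, 123.37
        t = np.arange(2048) / fs
        print(hann_ifft_frequency(np.sin(2 * np.pi * f0 * t), fs))  # ~123.37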

  17. Increasing efficiency of CO2 uptake by combined land-ocean sink

    NASA Astrophysics Data System (ADS)

    van Marle, M.; van Wees, D.; Houghton, R. A.; Nassikas, A.; van der Werf, G.

    2017-12-01

    Carbon-climate feedbacks are one of the key uncertainties in predicting future climate change. Such a feedback could originate from carbon sinks losing their efficiency, for example due to saturation of the CO2 fertilization effect or ocean warming. An indirect approach to estimate how the combined land and ocean sink responds to climate change and growing fossil fuel emissions is based on assessing the trends in the airborne fraction of CO2 emissions from fossil fuel and land use change. One key limitation with this approach has been the large uncertainty in quantifying land use change emissions. We have re-assessed those emissions in a more data-driven approach by combining estimates coming from a bookkeeping model with visibility-based land use change emissions available for the Arc of Deforestation and Equatorial Asia, two key regions with large land use change emissions. The advantage of the visibility-based dataset is that the emissions are observation-based and this dataset provides more detailed information about interannual variability than previous estimates. Based on our estimates we provide evidence that land use and land cover change emissions have increased more rapidly than previously thought, implying that the airborne fraction has decreased since the start of CO2 measurements in 1959. This finding is surprising because it means that the combined land and ocean sink has become more efficient while the opposite is expected.

  18. Application of the Combination Approach for Estimating Evapotranspiration in Puerto Rico

    NASA Technical Reports Server (NTRS)

    Harmsen, Eric; Luvall, Jeffrey; Gonzalez, Jorge

    2005-01-01

    The ability to estimate short-term fluxes of water vapor from the land surface is important for validating latent heat flux estimates from high resolution remote sensing techniques. A new, relatively inexpensive method is presented for estimating the ground-based values of the surface latent heat flux or evapotranspiration.

  19. Using a physiologically based pharmacokinetic model to link urinary biomarker concentrations to dietary exposure of perchlorate

    EPA Science Inventory

    Exposure to perchlorate is widespread in the United States, and many studies have attempted to characterize perchlorate exposure by estimating average daily intakes of perchlorate. These approaches provided population-based estimates, but did not provide individual-level exp...

  20. Optimal Tuner Selection for Kalman Filter-Based Aircraft Engine Performance Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Garg, Sanjay

    2010-01-01

    A linear point design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. This paper derives theoretical Kalman filter estimation error bias and variance values at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the conventional approach of tuner selection. Experimental simulation results are found to be in agreement with theoretical predictions. The new methodology is shown to yield a significant improvement in on-line engine performance estimation accuracy.

  1. Leveraging population admixture to characterize the heritability of complex traits.

    PubMed

    Zaitlen, Noah; Pasaniuc, Bogdan; Sankararaman, Sriram; Bhatia, Gaurav; Zhang, Jianqi; Gusev, Alexander; Young, Taylor; Tandon, Arti; Pollack, Samuela; Vilhjálmsson, Bjarni J; Assimes, Themistocles L; Berndt, Sonja I; Blot, William J; Chanock, Stephen; Franceschini, Nora; Goodman, Phyllis G; He, Jing; Hennis, Anselm J M; Hsing, Ann; Ingles, Sue A; Isaacs, William; Kittles, Rick A; Klein, Eric A; Lange, Leslie A; Nemesure, Barbara; Patterson, Nick; Reich, David; Rybicki, Benjamin A; Stanford, Janet L; Stevens, Victoria L; Strom, Sara S; Whitsel, Eric A; Witte, John S; Xu, Jianfeng; Haiman, Christopher; Wilson, James G; Kooperberg, Charles; Stram, Daniel; Reiner, Alex P; Tang, Hua; Price, Alkes L

    2014-12-01

    Despite recent progress on estimating the heritability explained by genotyped SNPs (h²g), a large gap between h²g and estimates of total narrow-sense heritability (h²) remains. Explanations for this gap include rare variants or upward bias in family-based estimates of h² due to shared environment or epistasis. We estimate h² from unrelated individuals in admixed populations by first estimating the heritability explained by local ancestry (h²γ). We show that h²γ = 2·FST,C·θ(1 − θ)·h², where FST,C measures frequency differences between populations at causal loci and θ is the genome-wide ancestry proportion. Our approach is not susceptible to biases caused by epistasis or shared environment. We applied this approach to the analysis of 13 phenotypes in 21,497 African-American individuals from 3 cohorts. For height and body mass index (BMI), we obtained h² estimates of 0.55 ± 0.09 and 0.23 ± 0.06, respectively, which are larger than estimates of h²g in these and other data but smaller than family-based estimates of h².
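
    The relation h²γ = 2·FST,C·θ(1 − θ)·h² inverts directly, which is how an h²γ estimate yields h². The numbers below are purely illustrative, not the paper's estimates.

        def h2_from_local_ancestry(h2_gamma, fst_c, theta):
            """Invert h2_gamma = 2 * Fst_c * theta * (1 - theta) * h2.

            h2_gamma: heritability explained by local ancestry;
            fst_c: frequency differentiation at causal loci;
            theta: genome-wide ancestry proportion.
            """
            return h2_gamma / (2.0 * fst_c * theta * (1.0 - theta))

        # Illustrative numbers only, not the paper's estimates:
        print(round(h2_from_local_ancestry(0.03, 0.15, 0.2), 2))  # ~0.62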

  2. Depth estimation and camera calibration of a focused plenoptic camera for visual odometry

    NASA Astrophysics Data System (ADS)

    Zeller, Niclas; Quint, Franz; Stilla, Uwe

    2016-08-01

    This paper presents new and improved methods of depth estimation and camera calibration for visual odometry with a focused plenoptic camera. For depth estimation we adapt an algorithm previously used in structure-from-motion approaches to work with images of a focused plenoptic camera. In the raw image of a plenoptic camera, scene patches are recorded in several micro-images under slightly different angles. This leads to a multi-view stereo problem. To reduce the complexity, we divide this into multiple binocular stereo problems. For each pixel with sufficient gradient we estimate a virtual (uncalibrated) depth based on local intensity error minimization. The estimated depth is characterized by the variance of the estimate and is subsequently updated with the estimates from other micro-images. Updating is performed in a Kalman-like fashion. The result of depth estimation in a single image of the plenoptic camera is a probabilistic depth map, where each depth pixel consists of an estimated virtual depth and a corresponding variance. Since the resulting image of the plenoptic camera contains two planes, the optical image and the depth map, camera calibration is divided into two separate sub-problems. The optical path is calibrated based on a traditional calibration method. For calibrating the depth map we introduce two novel model-based methods, which define the relation between the virtual depth estimated from the light-field image and the metric object distance. These two methods are compared to a well-known curve fitting approach. Both model-based methods show significant advantages compared to the curve fitting method. For visual odometry we fuse the probabilistic depth map gained from one shot of the plenoptic camera with the depth data gained by finding stereo correspondences between subsequent synthesized intensity images of the plenoptic camera. These images can be synthesized totally focused, and thus finding stereo correspondences is enhanced. In contrast to monocular visual odometry approaches, due to the calibration of the individual depth maps, the scale of the scene can be observed. Furthermore, due to the light-field information, better tracking capabilities compared to the monocular case can be expected. As a result, the depth information gained by the plenoptic camera based visual odometry algorithm proposed in this paper has superior accuracy and reliability compared to the depth estimated from a single light-field image.
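
    The "Kalman-like fashion" of updating a virtual-depth hypothesis with estimates from other micro-images is, per pixel, scalar inverse-variance fusion. The following is our simplification of that update, not the paper's full pipeline.

        def fuse_depth(mu1, var1, mu2, var2):
            """Kalman-like fusion of two virtual-depth hypotheses.

            Per pixel, updating one depth estimate with another reduces
            to scalar inverse-variance weighting, i.e. the scalar Kalman
            measurement update.
            """
            k = var1 / (var1 + var2)        # Kalman gain
            return mu1 + k * (mu2 - mu1), (1.0 - k) * var1

        # The fused depth is pulled toward the lower-variance estimate:
        print(fuse_depth(2.0, 0.30, 2.4, 0.10))  # (2.3, 0.075)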

  3. Frequency Estimator Performance for a Software-Based Beacon Receiver

    NASA Technical Reports Server (NTRS)

    Zemba, Michael J.; Morse, Jacquelynne Rose; Nessel, James A.; Miranda, Felix

    2014-01-01

    As propagation terminals have evolved, their design has trended more toward a software-based approach that facilitates convenient adjustment and customization of the receiver algorithms. One potential improvement is the implementation of a frequency estimation algorithm, through which the primary frequency component of the received signal can be estimated with a much greater resolution than with a simple peak search of the FFT spectrum. To select an estimator for usage in a QV-band beacon receiver, analysis of six frequency estimators was conducted to characterize their effectiveness as they relate to beacon receiver design.

  4. Frequency-domain beamformers using conjugate gradient techniques for speech enhancement.

    PubMed

    Zhao, Shengkui; Jones, Douglas L; Khoo, Suiyang; Man, Zhihong

    2014-09-01

    A multiple-iteration constrained conjugate gradient (MICCG) algorithm and a single-iteration constrained conjugate gradient (SICCG) algorithm are proposed to realize the widely used frequency-domain minimum-variance-distortionless-response (MVDR) beamformers, and the resulting algorithms are applied to speech enhancement. The algorithms are derived based on the Lagrange method and conjugate gradient techniques. The implementations of the algorithms avoid any form of explicit or implicit autocorrelation matrix inversion. Theoretical analysis establishes formal convergence of the algorithms. Specifically, the MICCG algorithm is developed based on a block adaptation approach and generates a finite sequence of estimates that converge to the MVDR solution. For limited data records, the estimates of the MICCG algorithm are better than the conventional estimators and equivalent to the auxiliary vector algorithms. The SICCG algorithm is developed based on a continuous adaptation approach with a sample-by-sample updating procedure, and its estimates asymptotically converge to the MVDR solution. An illustrative example using synthetic data from a uniform linear array is studied, and an evaluation on real data recorded by an acoustic vector sensor array is demonstrated. The performance of the MICCG and SICCG algorithms is compared with state-of-the-art approaches.

  5. Shear Elasticity and Shear Viscosity Imaging in Soft Tissue

    NASA Astrophysics Data System (ADS)

    Yang, Yiqun

    In this thesis, a new approach is introduced that provides estimates of shear elasticity and shear viscosity using time-domain measurements of shear waves in viscoelastic media. Simulations of shear wave particle displacements induced by an acoustic radiation force are accelerated significantly by a GPU. The acoustic radiation force is first calculated using the fast near field method (FNM) and the angular spectrum approach (ASA). The shear waves induced by the acoustic radiation force are then simulated in elastic and viscoelastic media using Green's functions. A parallel algorithm is developed to perform these calculations on a GPU, where the shear wave particle displacements at different observation points are calculated in parallel. The resulting speed increase enables rapid evaluation of shear waves at discrete points, in 2D planes, and for push beams with different spatial samplings and for different values of the f-number (f/#). The results of these simulations show that push beams with smaller f/# require a higher spatial sampling rate. The significant amount of acceleration achieved by this approach suggests that shear wave simulations with the Green's function approach are ideally suited for high-performance GPUs. Shear wave elasticity imaging determines the mechanical parameters of soft tissue by analyzing measured shear waves induced by an acoustic radiation force. To estimate the shear elasticity value, the widely used time-of-flight method calculates the correlation between shear wave particle velocities at adjacent lateral observation points. Although this method provides accurate estimates of the shear elasticity in purely elastic media, our experience suggests that the time-of-flight (TOF) method consistently overestimates the shear elasticity values in viscoelastic media because the combined effects of diffraction, attenuation, and dispersion are not considered. To address this problem, we have developed an approach that directly accounts for all of these effects when estimating the shear elasticity. This new approach simulates shear wave particle velocities using a Green's function-based approach for the Voigt model, where the shear elasticity and viscosity values are estimated using an optimization-based approach that compares measured shear wave particle velocities with simulated shear wave particle velocities in the time-domain. The results are evaluated on a point-by-point basis to generate images. There is good agreement between the simulated and measured shear wave particle velocities, where the new approach yields much better images of the shear elasticity and shear viscosity than the TOF method. The new estimation approach is accelerated with an approximate viscoelastic Green's function model that is evaluated with shear wave data obtained from in vivo human livers. Instead of calculating shear waves with combinations of different shear elasticities and shear viscosities, shear waves are calculated with different shear elasticities on the GPU and then convolved with a viscous loss model, which accelerates the calculation dramatically. The shear elasticity and shear viscosity values are then estimated using an optimization-based approach by minimizing the difference between measured and simulated shear wave particle velocities. Shear elasticity and shear viscosity images are generated at every spatial point in a two-dimensional (2D) field-of-view (FOV). 
The new approach is applied to measured shear wave data obtained from in vivo human livers, and the results show that this new approach successfully generates shear elasticity and shear viscosity images from this data. The results also indicate that the shear elasticity values estimated with this approach are significantly smaller than the values estimated with the conventional TOF method and that the new approach demonstrates more consistent values for these estimates compared with the TOF method. This experience suggests that the new method is an effective approach for estimating the shear elasticity and the shear viscosity in liver and in other soft tissue.

  6. Frequency domain surface EMG sensor fusion for estimating finger forces.

    PubMed

    Potluri, Chandrasekhar; Kumar, Parmod; Anugolu, Madhavi; Urfer, Alex; Chiu, Steve; Naidu, D; Schoen, Marco P

    2010-01-01

    Extracting or estimating skeletal hand/finger forces using surface electromyographic (sEMG) signals poses many challenges due to cross-talk, noise, and temporally and spatially modulated signal characteristics. Normal sEMG measurements are based on single-sensor data. In this paper, array sensors are used along with a proposed sensor fusion scheme that results in a simple Multi-Input-Single-Output (MISO) transfer function. Experimental data are used along with system identification to find this MISO system. A Genetic Algorithm (GA) approach is employed to optimize the characteristics of the MISO system. The proposed fusion-based approach is tested experimentally and indicates improvement in finger/hand force estimation.

  7. Measurement of acoustic velocity components in a turbulent flow using LDV and high-repetition rate PIV

    NASA Astrophysics Data System (ADS)

    Léon, Olivier; Piot, Estelle; Sebbane, Delphine; Simon, Frank

    2017-06-01

    The present study provides theoretical details and experimental validation results for the approach proposed by Minotti et al. (Aerosp Sci Technol 12(5):398-407, 2008) for measuring the amplitudes and phases of acoustic velocity components (AVC), i.e., the waveform parameters of each velocity component induced by an acoustic wave, in fully turbulent duct flows carrying multi-tone acoustic waves. Theoretical results support that the proposed turbulence rejection method, based on the estimation of cross power spectra between velocity measurements and a reference signal such as a wall pressure measurement, provides asymptotically efficient estimators with respect to the number of samples. Furthermore, it is shown that the estimator uncertainties can be simply estimated, accounting for the characteristics of the measured flow turbulence spectra. Two laser-based measurement campaigns were conducted in order to validate the acoustic velocity estimation approach and the uncertainty estimates derived. While in previous studies estimates were obtained using laser Doppler velocimetry (LDV), it is demonstrated that high-repetition rate particle image velocimetry (PIV) can also be successfully employed. The two measurement techniques provide very similar acoustic velocity amplitude and phase estimates for the cases investigated, which are of practical interest for acoustic liner studies. In a broader sense, this approach may be beneficial for non-intrusive sound emission studies in wind tunnel testing.

  8. Calibrating random forests for probability estimation.

    PubMed

    Dankowski, Theresa; Ziegler, Andreas

    2016-09-30

    Probabilities can be consistently estimated using random forests. It is, however, unclear how random forests should be updated to make predictions for other centers or at different time points. In this work, we present two approaches for updating random forests for probability estimation. The first method has been proposed by Elkan and may be used for updating any machine learning approach yielding consistent probabilities, so-called probability machines. The second approach is a new strategy specifically developed for random forests. Using the terminal nodes, which represent conditional probabilities, the random forest is first translated to logistic regression models. These are, in turn, used for re-calibration. The two updating strategies were compared in a simulation study and are illustrated with data from the German Stroke Study Collaboration. In most simulation scenarios, both methods led to similar improvements. In the simulation scenario in which the stricter assumptions of Elkan's method were not met, the logistic regression-based re-calibration approach for random forests outperformed Elkan's method. It also performed better on the stroke data than Elkan's method. The strength of Elkan's method is its general applicability to any probability machine. However, if the strict assumptions underlying this approach are not met, the logistic regression-based approach is preferable for updating random forests for probability estimation. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
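
    The logistic re-calibration idea can be sketched generically: fit a one-covariate logistic model on the logit of the forest's predicted probabilities using outcomes from the new center. This is a simplified stand-in for the paper's terminal-node translation, shown with synthetic data.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        def recalibrate(p_old, y_new):
            """Logistic re-calibration of forest probabilities.

            Fits a one-covariate logistic model on the logit of the
            forest's predictions using outcomes from the new center and
            returns a function mapping old probabilities to updated ones.
            """
            eps = 1e-6
            logit = lambda p: np.log(np.clip(p, eps, 1 - eps)
                                     / np.clip(1 - p, eps, 1 - eps))
            model = LogisticRegression().fit(logit(p_old).reshape(-1, 1),
                                             y_new)
            return lambda p: model.predict_proba(
                logit(p).reshape(-1, 1))[:, 1]

        rng = np.random.default_rng(2)
        p_old = rng.uniform(0.05, 0.95, 500)        # forest outputs
        y_new = rng.binomial(1, 0.7 * p_old)        # new center: lower risk
        update = recalibrate(p_old, y_new)
        print(update(np.array([0.2, 0.5, 0.8])))    # shifted downward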

  9. A simple approach for estimating the refractive index structure parameter (Cn²) profile in the atmosphere.

    PubMed

    Basu, Sukanta

    2015-09-01

    Utilizing the so-called Thorpe scale as a measure of the turbulence outer scale, we propose a physically-based approach for the estimation of Cn2 profiles in the lower atmosphere. This approach only requires coarse-resolution temperature profiles (a.k.a., soundings) as input, yet it has the intrinsic ability to capture layers of high optical turbulence. The prowess of this computationally inexpensive approach is demonstrated by validations against observational data from a field campaign over Mauna Kea, Hawaii.
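
    The first step of the approach, computing Thorpe displacements from a coarse sounding, is shown below; the RMS of the displacements over a layer gives the Thorpe scale used as the turbulence outer scale. The profile is synthetic and the sketch is ours, not the paper's code.

        import numpy as np

        def thorpe_displacements(z, theta):
            """Thorpe displacements from a coarse temperature sounding.

            Sorting potential temperature into a statically stable
            profile and differencing positions gives d = z_sorted - z;
            the RMS of d over a layer is the Thorpe scale, the proxy for
            the turbulence outer scale feeding the Cn2 model.
            """
            order = np.argsort(theta)     # stable rearrangement
            return z[order] - z           # meters

        z = np.arange(0.0, 1000.0, 50.0)
        theta = 290.0 + 0.003 * z
        theta[6:9] = theta[6:9][::-1].copy()   # synthetic overturn, 300-400 m
        d = thorpe_displacements(z, theta)
        print(f"Thorpe scale ~ {np.sqrt(np.mean(d**2)):.1f} m")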

  10. Updating estimates of low streamflow statistics to account for possible trends

    NASA Astrophysics Data System (ADS)

    Blum, A. G.; Archfield, S. A.; Hirsch, R. M.; Vogel, R. M.; Kiang, J. E.; Dudley, R. W.

    2017-12-01

    Given evidence of both increasing and decreasing trends in low flows in many streams, methods are needed to update estimators of low flow statistics used in water resources management. One such metric is the 10-year annual low-flow statistic (7Q10), calculated as the annual minimum seven-day streamflow which is exceeded in nine out of ten years on average. Historical streamflow records may not be representative of current conditions at a site if environmental conditions are changing. We present a new approach to frequency estimation under nonstationary conditions that applies a stationary nonparametric quantile estimator to a subset of the annual minimum flow record. Monte Carlo simulation experiments were used to evaluate this approach across a range of trend and no-trend scenarios. Relative to the standard practice of using the entire available streamflow record, use of a nonparametric quantile estimator combined with selection of the most recent 30 or 50 years for 7Q10 estimation was found to improve accuracy and reduce bias. Benefits of data subset selection approaches were greater for higher-magnitude trends and for annual minimum flow records with lower coefficients of variation. A nonparametric trend test approach for subset selection did not significantly improve upon always selecting the last 30 years of record. At 174 stream gages in the Chesapeake Bay region, 7Q10 estimators based on the most recent 30 years of flow record were compared to estimators based on the entire period of record. Given the availability of long records of low streamflow, a subset of the flow record (∼30 years) can be used to update 7Q10 estimators to better reflect current streamflow conditions.
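
    The estimator studied here combines a stationary nonparametric quantile with a recent-record subset. Below is a minimal sketch on synthetic data; the quantile interpolation rule is our assumption, not the paper's specification.

        import numpy as np

        def seven_q10(annual_min_7day, use_last_n=30):
            """Nonparametric 7Q10 from the most recent part of the record.

            The 7Q10 is the 0.10 quantile of the annual 7-day minimum
            flows (exceeded in nine of ten years on average);
            restricting the record to the last ~30 years follows the
            subset-selection result above.
            """
            recent = np.asarray(annual_min_7day)[-use_last_n:]
            return np.quantile(recent, 0.10)

        rng = np.random.default_rng(3)
        record = rng.lognormal(mean=1.0, sigma=0.3, size=70)
        record[-30:] *= np.linspace(1.0, 0.7, 30)     # a drying trend
        print(f"7Q10, full record: {seven_q10(record, 70):.2f}")
        print(f"7Q10, last 30 yr:  {seven_q10(record):.2f}")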

  11. Estimating evaporative vapor generation from automobiles based on parking activities.

    PubMed

    Dong, Xinyi; Tschantz, Michael; Fu, Joshua S

    2015-07-01

    A new approach is proposed to quantify evaporative vapor generation based on real parking activity data. As compared to the existing methods, two improvements are applied in this new approach to reduce the uncertainties: First, evaporative vapor generation from diurnal parking events is usually calculated based on an estimated average parking duration for the whole fleet, while in this study the vapor generation rate is calculated based on the distribution of parking activities. Second, rather than using the daily temperature gradient, this study uses hourly temperature observations to derive the hourly incremental vapor generation rates. The parking distribution and hourly incremental vapor generation rates are then adopted with Wade-Reddy's equation to estimate the weighted average evaporative generation. We find that hourly incremental rates can better describe the temporal variations of vapor generation, and the weighted vapor generation rate is 5-8% less than that calculated without considering parking activity. Copyright © 2015 Elsevier Ltd. All rights reserved.
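
    The weighting step reduces to averaging hourly incremental generation rates over the parking-activity distribution. The sketch below uses invented numbers purely to show the computation; the actual rates come from hourly temperatures via Wade-Reddy's equation.

        import numpy as np

        def weighted_generation_rate(parking_fraction, hourly_rate):
            """Fleet-average vapor generation weighted by parking activity.

            parking_fraction: fraction of parked vehicles in each hour
            (sums to 1); hourly_rate: incremental generation rate (g/h),
            assumed here, derived in practice from hourly temperatures.
            """
            p = np.asarray(parking_fraction)
            return float(p @ np.asarray(hourly_rate))   # g/h, fleet average

        hours = np.arange(24)
        parked = np.where((hours < 7) | (hours > 18), 0.06, 0.02)
        parked = parked / parked.sum()                  # mostly overnight
        rate = 0.4 + 0.6 * np.clip(np.sin(np.pi * (hours - 6) / 12), 0, None)
        print(f"{weighted_generation_rate(parked, rate):.2f} g/h")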

  12. Distributed Prognostics based on Structural Model Decomposition

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew J.; Bregon, Anibal; Roychoudhury, I.

    2014-01-01

    Within systems health management, prognostics focuses on predicting the remaining useful life of a system. In the model-based prognostics paradigm, physics-based models are constructed that describe the operation of a system and how it fails. Such approaches consist of an estimation phase, in which the health state of the system is first identified, and a prediction phase, in which the health state is projected forward in time to determine the end of life. Centralized solutions to these problems are often computationally expensive, do not scale well as the size of the system grows, and introduce a single point of failure. In this paper, we propose a novel distributed model-based prognostics scheme that formally describes how to decompose both the estimation and prediction problems into independent local subproblems whose solutions may be easily composed into a global solution. The decomposition of the prognostics problem is achieved through structural decomposition of the underlying models. The decomposition algorithm creates from the global system model a set of local submodels suitable for prognostics. Independent local estimation and prediction problems are formed based on these local submodels, resulting in a scalable distributed prognostics approach that allows the local subproblems to be solved in parallel, thus offering increases in computational efficiency. Using a centrifugal pump as a case study, we perform a number of simulation-based experiments to demonstrate the distributed approach, compare the performance with a centralized approach, and establish its scalability. Index terms: model-based prognostics, distributed prognostics, structural model decomposition.

  13. DETECTORS AND EXPERIMENTAL METHODS: Heuristic approach for peak regions estimation in gamma-ray spectra measured by a NaI detector

    NASA Astrophysics Data System (ADS)

    Zhu, Meng-Hua; Liu, Liang-Gang; You, Zhong; Xu, Ao-Ao

    2009-03-01

    In this paper, a heuristic approach based on Slavic's peak searching method has been employed to estimate the width of peak regions for background removal. Synthetic and experimental data are used to test this method. Using the peak regions estimated by the proposed method over the whole spectrum, we find it simple and effective enough to be used together with the Statistics-sensitive Nonlinear Iterative Peak-Clipping method.

  14. Analysis of stock investment selection based on CAPM using covariance and genetic algorithm approach

    NASA Astrophysics Data System (ADS)

    Sukono; Susanti, D.; Najmia, M.; Lesmana, E.; Napitupulu, H.; Supian, S.; Putra, A. S.

    2018-03-01

    Investment is one of the factors of economic growth in many countries, especially Indonesia. Stocks are a liquid form of investment. In making stock investment decisions, investors need to choose stocks that can generate maximum returns at a minimum level of risk, and therefore need to know how to allocate capital so as to obtain the optimal benefit. This study discusses stock investment based on the CAPM, estimated using covariance and a genetic algorithm approach. It is assumed that the stocks analyzed follow the CAPM model. The beta parameter of the CAPM equation is estimated in two ways: first with a covariance approach, and second with genetic algorithm optimization. As a numerical illustration, this paper analyzes ten stocks traded on the capital market in Indonesia. The results of the analysis show that estimating the beta parameter with the covariance and genetic algorithm approaches gives the same decision: six underpriced stocks with a buy decision, and four overpriced stocks with a sell decision. Based on the analysis, it can be concluded that the results can be used as a consideration for investors to buy the six underpriced stocks and sell the four overpriced stocks.
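
    The covariance approach to the beta parameter is short enough to show in full; the monthly returns and risk-free rate below are synthetic, for illustration only. A stock whose observed mean return exceeds the CAPM-required return would be classed as underpriced (a buy), mirroring the decision rule described here.

        import numpy as np

        def capm_beta(stock_returns, market_returns):
            """Covariance estimate of the CAPM beta.

            beta = Cov(R_i, R_m) / Var(R_m); the required return then
            follows from E[R_i] = R_f + beta * (E[R_m] - R_f).
            """
            c = np.cov(stock_returns, market_returns, ddof=1)
            return c[0, 1] / c[1, 1]

        rng = np.random.default_rng(4)
        r_m = rng.normal(0.008, 0.04, 60)                  # market, monthly
        r_i = 0.002 + 1.3 * r_m + rng.normal(0, 0.02, 60)  # high-beta stock
        beta = capm_beta(r_i, r_m)
        rf = 0.003                                         # assumed risk-free
        required = rf + beta * (r_m.mean() - rf)
        print(f"beta = {beta:.2f}, required = {required:.4f}, "
              f"observed = {r_i.mean():.4f}")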

  15. Variational optical flow estimation based on stick tensor voting.

    PubMed

    Rashwan, Hatem A; Garcia, Miguel A; Puig, Domenec

    2013-07-01

    Variational optical flow techniques allow the estimation of flow fields from spatio-temporal derivatives. They are based on minimizing a functional that contains a data term and a regularization term. Recently, numerous approaches have been presented for improving the accuracy of the estimated flow fields. Among them, tensor voting has been shown to be particularly effective in the preservation of flow discontinuities. This paper presents an adaptation of the data term by using anisotropic stick tensor voting in order to gain robustness against noise and outliers with significantly lower computational cost than (full) tensor voting. In addition, an anisotropic complementary smoothness term depending on directional information estimated through stick tensor voting is utilized in order to preserve discontinuity capabilities of the estimated flow fields. Finally, a weighted non-local term that depends on both the estimated directional information and the occlusion state of pixels is integrated during the optimization process in order to denoise the final flow field. The proposed approach yields state-of-the-art results on the Middlebury benchmark.

  16. The Use of Radar-Based Products for Deriving Extreme Rainfall Frequencies Using Regional Frequency Analysis with Application in South Louisiana

    NASA Astrophysics Data System (ADS)

    Eldardiry, H. A.; Habib, E. H.

    2014-12-01

    Radar-based technologies have made spatially and temporally distributed quantitative precipitation estimates (QPE) available in an operational environment, in contrast to rain gauges. The floods identified through flash flood monitoring and prediction systems are subject to at least three sources of uncertainty: (a) those related to rainfall estimation errors, (b) those due to streamflow prediction errors arising from model structural issues, and (c) those due to errors in defining a flood event. The current study focuses on the first source of uncertainty and its effect on deriving important climatological characteristics of extreme rainfall statistics. Examples of such characteristics are rainfall amounts with certain Average Recurrence Intervals (ARI) or Annual Exceedance Probabilities (AEP), which are highly valuable for hydrologic and civil engineering design purposes. Gauge-based precipitation frequency estimates (PFE) have been maturely developed and widely used over the last several decades. More recently, there has been growing interest in the research community in exploring the use of radar-based rainfall products for developing PFE and understanding the associated uncertainties. This study uses radar-based multi-sensor precipitation estimates (MPE) for 11 years to derive PFEs corresponding to various return periods over a spatial domain that covers the state of Louisiana in the southern USA. The PFE estimation approach used in this study is based on fitting a generalized extreme value (GEV) distribution to extreme rainfall data in the form of annual maximum series (AMS). Among the estimation problems that may arise from fitting GEV distributions at each radar pixel are large variance and seriously biased quantile estimators. Hence, a regional frequency analysis (RFA) approach is applied. The RFA involves the use of data from pixels surrounding each pixel within a defined homogeneous region. In this study, the region-of-influence approach along with the index flood technique is used in the RFA. A bootstrap procedure is carried out to account for uncertainty in the distribution parameters and to construct 90% confidence intervals (i.e., 5% and 95% confidence limits) on AMS-based precipitation frequency curves.
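
    At the level of a single pixel (or a homogeneous region after index-flood scaling), the PFE computation is a GEV fit to the annual maximum series followed by a quantile lookup. Below is a minimal sketch with a synthetic 11-year AMS, matching the record length used here; the study pools surrounding pixels via a region-of-influence RFA before fitting.

        import numpy as np
        from scipy.stats import genextreme

        def pfe(annual_maxima, ari_years):
            """Precipitation frequency estimate for a given ARI.

            Fits a GEV to the annual maximum series and returns the
            quantile with annual exceedance probability 1/ARI.
            """
            c, loc, scale = genextreme.fit(annual_maxima)
            return genextreme.ppf(1.0 - 1.0 / ari_years, c,
                                  loc=loc, scale=scale)

        rng = np.random.default_rng(5)
        ams = genextreme.rvs(-0.1, loc=80.0, scale=20.0, size=11,
                             random_state=rng)              # mm, 11 years
        print(f"100-yr rainfall ~ {pfe(ams, 100):.0f} mm")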

  17. Satellite estimation of incident photosynthetically active radiation using ultraviolet reflectance

    NASA Technical Reports Server (NTRS)

    Eck, Thomas F.; Dye, Dennis G.

    1991-01-01

    A new satellite remote sensing method for estimating the amount of photosynthetically active radiation (PAR, 400-700 nm) incident at the earth's surface is described and tested. Potential incident PAR for clear sky conditions is computed from an existing spectral model. A major advantage of the UV approach over existing visible band approaches to estimating insolation is the improved ability to discriminate clouds from high-albedo background surfaces. UV spectral reflectance data from the Total Ozone Mapping Spectrometer (TOMS) were used to test the approach for three climatically distinct, midlatitude locations. Estimates of monthly total incident PAR from the satellite technique differed from values computed from ground-based pyranometer measurements by less than 6 percent. This UV remote sensing method can be applied to estimate PAR insolation over ocean and land surfaces which are free of ice and snow.

  18. Comparison of methods for estimating bird abundance and trends from historical count data

    Treesearch

    Frank R. Thompson; Frank A. La Sorte

    2008-01-01

    The use of bird counts as indices has come under increasing scrutiny because assumptions concerning detection probabilities may not be met, but there also seems to be some resistance to use of model-based approaches to estimating abundance. We used data from the United States Forest Service, Southern Region bird monitoring program to compare several common approaches...

  19. A Hybrid Method for Opinion Finding Task (KUNLP at TREC 2008 Blog Track)

    DTIC Science & Technology

    2008-11-01

    retrieve relevant documents. For the Opinion Retrieval subtask, we propose a hybrid model of a lexicon-based approach and a machine learning approach for...estimating and ranking the opinionated documents. For the Polarized Opinion Retrieval subtask, we employ machine learning for predicting the polarity...and a linear combination technique for ranking polar documents. The hybrid model, which utilizes both the lexicon-based approach and the machine learning approach

  20. Error analysis of satellite attitude determination using a vision-based approach

    NASA Astrophysics Data System (ADS)

    Carozza, Ludovico; Bevilacqua, Alessandro

    2013-09-01

    Improvements in communication and processing technologies have opened the doors to exploiting on-board cameras to compute objects' spatial attitude using only the visual information from sequences of remotely sensed images. The strategies and the algorithmic approach used to extract such information affect the estimation accuracy of the three-axis orientation of the object. This work presents a method for analyzing the most relevant error sources, including numerical ones, possible drift effects and their influence on the overall accuracy, with reference to vision-based approaches. The method in particular focuses on the analysis of the image registration algorithm, carried out through purpose-built simulations. The overall accuracy has been assessed on a challenging case study, for which accuracy represents the fundamental requirement. In particular, attitude determination has been analyzed for small satellites, by comparing theoretical findings to metric results from simulations on realistic ground-truth data. Laboratory experiments using a numerical control unit have further confirmed the outcome. We believe that our analysis approach, as well as our findings in terms of error characterization, can be useful at the proof-of-concept design and planning levels, since they emphasize the main sources of error for vision-based approaches employed for satellite attitude estimation. The approach we present is also of general interest for related application domains that require an accurate estimation of three-dimensional orientation parameters (e.g., robotics, airborne stabilization).

  1. A Bayesian approach for parameter estimation and prediction using a computationally intensive model

    DOE PAGES

    Higdon, Dave; McDonnell, Jordan D.; Schunck, Nicolas; ...

    2015-02-05

    Bayesian methods have been successful in quantifying uncertainty in physics-based problems in parameter estimation and prediction. In these cases, physical measurements y are modeled as the best fit of a physics-based model $\eta(\theta)$, where θ denotes the uncertain, best input setting. Hence the statistical model is of the form $y = \eta(\theta) + \epsilon$, where $\epsilon$ accounts for measurement, and possibly other, error sources. When nonlinearity is present in $\eta(\cdot)$, the resulting posterior distribution for the unknown parameters in the Bayesian formulation is typically complex and nonstandard, requiring computationally demanding approaches such as Markov chain Monte Carlo (MCMC) to produce multivariate draws from the posterior. Although generally applicable, MCMC requires thousands (or even millions) of evaluations of the physics model $\eta(\cdot)$. This requirement is problematic if the model takes hours or days to evaluate. To overcome this computational bottleneck, we present an approach adapted from Bayesian model calibration. This approach combines output from an ensemble of computational model runs with physical measurements, within a statistical formulation, to carry out inference. A key component of this approach is a statistical response surface, or emulator, estimated from the ensemble of model runs. We demonstrate this approach with a case study in estimating parameters for a density functional theory model, using experimental mass/binding energy measurements from a collection of atomic nuclei. We also demonstrate how this approach produces uncertainties in predictions for recent mass measurements obtained at Argonne National Laboratory.
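
    The following is a minimal sketch of the emulator idea with a toy one-parameter model standing in for the expensive physics code: a Gaussian-process emulator is trained on an ensemble of model runs and then queried inside a random-walk Metropolis sampler, so the MCMC never evaluates the model itself. Kernel choice, noise levels, and all names are assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)

def eta(theta):            # stand-in for an expensive physics model
    return np.sin(3 * theta) + theta

# Ensemble of model runs at design points -> train an emulator.
design = np.linspace(0, 2, 30).reshape(-1, 1)
runs = eta(design).ravel()
emulator = GaussianProcessRegressor(RBF(0.5) + WhiteKernel(1e-6)).fit(design, runs)

# Physical measurement y = eta(theta_true) + noise.
theta_true, sigma = 1.3, 0.05
y = eta(theta_true) + sigma * rng.standard_normal()

def log_post(theta):       # flat prior on [0, 2]
    if not 0 <= theta <= 2:
        return -np.inf
    mu, sd = emulator.predict(np.array([[theta]]), return_std=True)
    var = sigma**2 + sd[0]**2          # measurement + emulator uncertainty
    return -0.5 * (y - mu[0])**2 / var - 0.5 * np.log(var)

# Random-walk Metropolis using the cheap emulator instead of the physics model.
theta, chain = 1.0, []
lp = log_post(theta)
for _ in range(5000):
    prop = theta + 0.1 * rng.standard_normal()
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta)
print("posterior mean:", np.mean(chain[1000:]))
```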

  2. Fast and robust estimation of ophthalmic wavefront aberrations

    NASA Astrophysics Data System (ADS)

    Dillon, Keith

    2016-12-01

    Rapidly rising levels of myopia, particularly in the developing world, have led to an increased need for inexpensive and automated approaches to optometry. A simple and robust technique is provided for estimating major ophthalmic aberrations using a gradient-based wavefront sensor. The approach is based on the use of numerical calculations to produce diverse combinations of phase components, followed by Fourier transforms to calculate the coefficients. The approach requires neither phase unwrapping nor iterative solution of inverse problems. This makes the method very fast and tolerant of image artifacts, which do not need to be detected and masked or interpolated as in other techniques. These features make it a promising algorithm on which to base low-cost devices for applications that may have limited access to expert maintenance and operation.

  3. Probability- and model-based approaches to inference for proportion forest using satellite imagery as ancillary data

    Treesearch

    Ronald E. McRoberts

    2010-01-01

    Estimates of forest area are among the most common and useful information provided by national forest inventories. The estimates are used for local and national purposes and for reporting to international agreements such as the Montréal Process, the Ministerial Conference on the Protection of Forests in Europe, and the Kyoto Protocol. The estimates are usually based on...

  4. Toward Quantitative Estimation of the Effect of Aerosol Particles in the Global Climate Model and Cloud Resolving Model

    NASA Astrophysics Data System (ADS)

    Eskes, H.; Boersma, F.; Dirksen, R.; van der A, R.; Veefkind, P.; Levelt, P.; Brinksma, E.; van Roozendael, M.; de Smedt, I.; Gleason, J.

    2005-05-01

    Based on measurements of GOME on ESA ERS-2, SCIAMACHY on ESA-ENVISAT, and Ozone Monitoring Instrument (OMI) on the NASA EOS-Aura satellite there is now a unique 11-year dataset of global tropospheric nitrogen dioxide measurements from space. The retrieval approach consists of two steps. The first step is an application of the DOAS (Differential Optical Absorption Spectroscopy) approach which delivers the total absorption optical thickness along the light path (the slant column). For GOME and SCIAMACHY this is based on the DOAS implementation developed by BIRA/IASB. For OMI the DOAS implementation was developed in a collaboration between KNMI and NASA. The second retrieval step, developed at KNMI, estimates the tropospheric vertical column of NO2 based on the slant column, cloud fraction and cloud top height retrieval, stratospheric column estimates derived from a data assimilation approach and vertical profile estimates from space-time collocated profiles from the TM chemistry-transport model. The second step was applied with only minor modifications to all three instruments to generate a uniform 11-year data set. In our talk we will address the following topics: - A short summary of the retrieval approach and results - Comparisons with other retrievals - Comparisons with global and regional-scale models - OMI-SCIAMACHY and SCIAMACHY-GOME comparisons - Validation with independent measurements - Trend studies of NO2 for the past 11 years

  5. A constraint-based evolutionary learning approach to the expectation maximization for optimal estimation of the hidden Markov model for speech signal modeling.

    PubMed

    Huda, Shamsul; Yearwood, John; Togneri, Roberto

    2009-02-01

    This paper attempts to overcome the tendency of the expectation-maximization (EM) algorithm to locate a local rather than global maximum when applied to estimate the hidden Markov model (HMM) parameters in speech signal modeling. We propose a hybrid algorithm for estimation of the HMM in automatic speech recognition (ASR) using a constraint-based evolutionary algorithm (EA) and EM, the CEL-EM. The novelty of our hybrid algorithm (CEL-EM) is that it is applicable to the estimation of constraint-based models, such as the HMM, that have many constraints and large numbers of parameters and that are normally estimated with EM. Two constraint-based versions of the CEL-EM with different fusion strategies have been proposed using a constraint-based EA and EM for better estimation of the HMM in ASR. The first uses a traditional constraint-handling mechanism of EA. The other version transforms the constrained optimization problem into an unconstrained problem using Lagrange multipliers. Fusion strategies for the CEL-EM use a staged-fusion approach in which EM is plugged into the EA periodically, after the EA has executed for a specific period of time, to maintain the global sampling capabilities of the EA in the hybrid algorithm. A variable initialization approach (VIA) has been proposed, using variable segmentation, to provide a better initialization for the EA in the CEL-EM. Experimental results on the TIMIT speech corpus show that the CEL-EM obtains higher recognition accuracies than the traditional EM algorithm as well as a top-standard EM (VIA-EM, constructed by applying the VIA to EM).

  6. A framework for multivariate data-based at-site flood frequency analysis: Essentiality of the conjugal application of parametric and nonparametric approaches

    NASA Astrophysics Data System (ADS)

    Vittal, H.; Singh, Jitendra; Kumar, Pankaj; Karmakar, Subhankar

    2015-06-01

    In watershed management, flood frequency analysis (FFA) is performed to quantify the risk of flooding at different spatial locations and also to provide guidelines for determining the design periods of flood control structures. Traditional FFA was extensively performed under a univariate scenario for both at-site and regional estimation of return periods. However, due to the inherent mutual dependence of the flood variables or characteristics [i.e., peak flow (P), flood volume (V) and flood duration (D), which are random in nature], the analysis has been further extended to the multivariate scenario, with some restrictive assumptions. To overcome the assumption of the same family of marginal density function for all flood variables, the concept of the copula has been introduced. Although the advancement from univariate to multivariate analyses drew formidable attention from the FFA research community, a basic limitation was that the analyses were performed with only parametric families of distributions. The aim of the current study is to emphasize the importance of nonparametric approaches in the field of multivariate FFA; a nonparametric distribution may not always be a good fit or capable of replacing well-implemented multivariate parametric and multivariate copula-based applications. Nevertheless, nonparametric distributions can improve the chance of obtaining a best fit because such distributions reproduce the sample's characteristics, resulting in more accurate estimations of the multivariate return period. Hence, the current study shows the importance of conjugating the multivariate nonparametric approach with multivariate parametric and copula-based approaches, resulting in a comprehensive framework for complete at-site FFA. Although the proposed framework is designed for at-site FFA, the approach can also be applied to regional FFA because regional estimations ideally include at-site estimations. The framework is based on the following steps: (i) comprehensive trend analysis to assess nonstationarity in the observed data; (ii) selection of the best-fit univariate marginal distribution, from a comprehensive set of parametric and nonparametric distributions, for each flood variable; (iii) multivariate frequency analyses with parametric, copula-based and nonparametric approaches; and (iv) estimation of joint and various conditional return periods. The proposed framework is demonstrated using 110 years of observed data from the Allegheny River at Salamanca, New York, USA. The results show that for both the univariate and multivariate cases, the nonparametric Gaussian kernel provides the best estimate. Further, we perform FFA for twenty major rivers over the continental USA; for seven rivers all flood variables are best fit by the nonparametric Gaussian kernel, whereas for the other rivers parametric distributions provide the best fit for one or two flood variables. The summary of results thus shows that the nonparametric method cannot substitute for the parametric and copula-based approaches, but should be considered during any at-site FFA to provide the broadest choice for the best estimation of flood return periods.
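
    A minimal sketch of step (ii), assuming a Kolmogorov-Smirnov statistic as the goodness-of-fit score (the study's actual criterion may differ): parametric candidates are fit by maximum likelihood and compared against a Gaussian kernel density estimate. Scoring the KDE on the data it was fit to favors it, so this is illustrative only; the data are synthetic.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
peaks = stats.genextreme.rvs(-0.1, loc=1000, scale=300, size=110,
                             random_state=rng)   # synthetic annual peak flows

# Parametric candidates fitted by maximum likelihood, scored by KS statistic.
candidates = {}
for name, dist in [("GEV", stats.genextreme), ("lognorm", stats.lognorm),
                   ("gamma", stats.gamma)]:
    params = dist.fit(peaks)
    candidates[name] = stats.kstest(peaks, dist.cdf, args=params).statistic

# Nonparametric candidate: Gaussian kernel density estimate, with its CDF
# obtained by normalized cumulative summation on a fine grid.
kde = stats.gaussian_kde(peaks)
grid = np.linspace(peaks.min() - 500, peaks.max() + 500, 2000)
cdf_grid = np.cumsum(kde(grid))
cdf_grid /= cdf_grid[-1]
candidates["Gaussian kernel"] = stats.kstest(
    peaks, lambda x: np.interp(x, grid, cdf_grid)).statistic

best = min(candidates, key=candidates.get)
print({k: round(v, 3) for k, v in candidates.items()}, "-> best fit:", best)
```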

  7. Estimation of truck volumes and flows

    DOT National Transportation Integrated Search

    2004-08-01

    This research presents a statistical approach for estimating truck volumes, based primarily on classification counts and information on roadway functionality, employment, sales volume and number of establishments within the state. Models have been...

  8. An optical flow-based method for velocity field of fluid flow estimation

    NASA Astrophysics Data System (ADS)

    Głomb, Grzegorz; Świrniak, Grzegorz; Mroczka, Janusz

    2017-06-01

    The aim of this paper is to present a method for estimating flow-velocity vector fields using the Lucas-Kanade algorithm. The optical flow measurements are based on the Particle Image Velocimetry (PIV) technique, which is commonly used in fluid mechanics laboratories in both research institutes and industry. Common approaches to the optical characterization of velocity fields are based on computing partial derivatives of the image intensity using finite differences. Nevertheless, the accuracy of velocity field computations is low, because an exact estimation of spatial derivatives is very difficult in the presence of the rapid intensity changes in PIV images caused by particles having small diameters. The method discussed in this paper solves this problem by interpolating the PIV images using Gaussian radial basis functions. This provides a significant improvement in the accuracy of the velocity estimation but, more importantly, allows for the evaluation of the derivatives at intermediate points between pixels. Numerical analysis proves that the method is able to estimate a separate vector for each particle with a 5×5 px² window, whereas a classical correlation-based method needs at least 4 particle images. With the use of a specialized multi-step hybrid approach to data analysis, the method improves the estimation of particle displacements far above 1 px.
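
    A minimal Lucas-Kanade sketch on synthetic frames is given below. It uses central finite differences for the spatial derivatives, which is exactly the step the paper replaces with analytic derivatives of a Gaussian-RBF interpolant; the frames and window size are hypothetical.

```python
import numpy as np

def lucas_kanade(I1, I2, win=5):
    """Estimate a velocity vector per pixel from two frames (Lucas-Kanade).

    Spatial derivatives here use central finite differences; the paper's
    refinement replaces them with analytic derivatives of a Gaussian-RBF
    interpolant of the image, which is smoother between pixels.
    """
    Iy, Ix = np.gradient(I1.astype(float))
    It = I2.astype(float) - I1.astype(float)
    h = win // 2
    u = np.zeros(I1.shape)
    v = np.zeros(I1.shape)
    for r in range(h, I1.shape[0] - h):
        for c in range(h, I1.shape[1] - h):
            ix = Ix[r-h:r+h+1, c-h:c+h+1].ravel()
            iy = Iy[r-h:r+h+1, c-h:c+h+1].ravel()
            it = It[r-h:r+h+1, c-h:c+h+1].ravel()
            A = np.column_stack([ix, iy])
            ATA = A.T @ A
            if np.linalg.cond(ATA) < 1e4:   # skip ill-conditioned windows
                u[r, c], v[r, c] = np.linalg.solve(ATA, -A.T @ it)
    return u, v

# Two synthetic frames: a Gaussian blob shifted by (1, 0) pixels.
y, x = np.mgrid[0:64, 0:64]
I1 = np.exp(-((x - 30)**2 + (y - 32)**2) / 40.0)
I2 = np.exp(-((x - 31)**2 + (y - 32)**2) / 40.0)
u, v = lucas_kanade(I1, I2)
print("peak-region displacement:", u[30:35, 28:34].mean())
```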

  9. A log-linear model approach to estimation of population size using the line-transect sampling method

    USGS Publications Warehouse

    Anderson, D.R.; Burnham, K.P.; Crain, B.R.

    1978-01-01

    The technique of estimating wildlife population size and density using the belt or line-transect sampling method has been used in many past projects, such as the estimation of density of waterfowl nestling sites in marshes, and is being used currently in such areas as the assessment of Pacific porpoise stocks in regions of tuna fishing activity. A mathematical framework for line-transect methodology has only emerged in the last 5 yr. In the present article, we extend this mathematical framework to a line-transect estimator based upon a log-linear model approach.

  10. Carrying Position Independent User Heading Estimation for Indoor Pedestrian Navigation with Smartphones

    PubMed Central

    Deng, Zhi-An; Wang, Guofeng; Hu, Ying; Cui, Yang

    2016-01-01

    This paper proposes a novel heading estimation approach for indoor pedestrian navigation using the built-in inertial sensors on a smartphone. Unlike previous approaches that constrain the carrying position of the smartphone on the user's body, our approach gives the user greater freedom by automatically recognizing the device carrying position and then selecting an optimal strategy for heading estimation. We first predetermine the motion state by a decision tree using an accelerometer and a barometer. Then, to enable accurate and computationally lightweight carrying-position recognition, we combine a position classifier with a novel position transition detection algorithm, which may also be used to avoid confusion between position transitions and user turns during pedestrian walking. For a device placed in the trouser pockets or held in a swinging hand, the heading estimation is achieved by deploying a principal component analysis (PCA)-based approach. For a device held in the hand or against the ear during a phone call, the user heading is directly estimated by adding the yaw angle of the device to the related heading offset. Experimental results show that our approach can automatically detect carrying positions with high accuracy, and outperforms previous heading estimation approaches in terms of accuracy and applicability. PMID:27187391
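
    The PCA step for the swinging-hand or pocket case can be sketched as follows: during walking, the horizontal acceleration oscillates mainly along the direction of travel, so the first principal component of the horizontal acceleration gives the walking axis (up to a 180° ambiguity that must be resolved by other cues). The data and frame conventions below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical horizontal-plane accelerations (already rotated into a
# world frame, e.g., via the device attitude): true heading 40 degrees.
heading_true = np.deg2rad(40.0)
t = np.linspace(0, 10, 500)
along = 1.5 * np.sin(2 * np.pi * 2.0 * t)            # stride oscillation
a_east = along * np.sin(heading_true) + 0.1 * rng.standard_normal(t.size)
a_north = along * np.cos(heading_true) + 0.1 * rng.standard_normal(t.size)

# PCA: the dominant eigenvector of the covariance is the walking axis.
X = np.column_stack([a_east, a_north]) - [a_east.mean(), a_north.mean()]
eigval, eigvec = np.linalg.eigh(X.T @ X / len(X))
axis = eigvec[:, np.argmax(eigval)]                  # (east, north) components

heading = np.arctan2(axis[0], axis[1])               # 180-deg ambiguity remains
print("estimated heading:", round(np.degrees(heading) % 180, 1), "deg (mod 180)")
```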

  11. Suspension parameter estimation in the frequency domain using a matrix inversion approach

    NASA Astrophysics Data System (ADS)

    Thite, A. N.; Banvidi, S.; Ibicek, T.; Bennett, L.

    2011-12-01

    The dynamic lumped-parameter models used to optimise the ride and handling of a vehicle require base values of the suspension parameters. These parameters are generally identified experimentally. The accuracy of the identified parameters can depend on the measurement noise and the validity of the model used. The existing publications on suspension parameter identification are generally based on the time domain and use a limited number of degrees of freedom. Further, the data used are either from a simulated 'experiment' or from a laboratory test on an idealised quarter- or half-car model. In this paper, a method is developed in the frequency domain which effectively accounts for the measurement noise. Additional dynamic constraining equations are incorporated, and the proposed formulation results in a matrix inversion approach. The nonlinearities in damping are, however, estimated using a time-domain approach. Full-scale 4-post rig test data of a vehicle are used. The variations in the results are discussed using the modal resonant behaviour. Further, a method is implemented to show how the results can be improved when the matrix inverted is ill-conditioned. The case study shows a good agreement between the estimates based on the proposed frequency-domain approach and measurable physical parameters.
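
    In generic terms, frequency-domain identification of this kind reduces to an overdetermined linear system A p = b for the unknown parameters, and the ill-conditioned case can be handled with Tikhonov regularization. The sketch below illustrates that remedy on a synthetic system; it is not the authors' exact formulation, and the dimensions and values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical overdetermined frequency-domain system A p = b, where the
# unknowns p could be stiffness/damping terms and rows come from several
# excitation frequencies. Two nearly collinear columns make A ill-conditioned.
A = rng.standard_normal((200, 3))
A[:, 2] = A[:, 1] + 1e-4 * rng.standard_normal(200)
p_true = np.array([2.0e4, 1.5e3, 1.5e3])
b = A @ p_true + 0.5 * rng.standard_normal(200)      # measurement noise

print("condition number:", f"{np.linalg.cond(A):.1e}")

# Plain least squares (matrix-inversion approach).
p_ls, *_ = np.linalg.lstsq(A, b, rcond=None)

# Tikhonov-regularized inversion: (A^T A + lam I) p = A^T b.
lam = 1e-2
p_tik = np.linalg.solve(A.T @ A + lam * np.eye(3), A.T @ b)

print("least squares :", np.round(p_ls, 1))
print("regularized   :", np.round(p_tik, 1))
```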

  12. Estimating Prevalence of Overweight or Obese Children and Adolescents in Small Geographic Areas Using Publicly Available Data

    PubMed Central

    Davila-Payan, Carlo; DeGuzman, Michael; Johnson, Kevin; Serban, Nicoleta

    2015-01-01

    Introduction: Interventions for pediatric obesity can be geographically targeted if high-risk populations can be identified. We developed an approach to estimate the percentage of overweight or obese children aged 2 to 17 years in small geographic areas using publicly available data, and piloted our approach for Georgia. Methods: We created a logistic regression model to estimate the individual probability of high body mass index (BMI), given data on the characteristics of the survey participants. We combined the regression model with a simulation to sample subpopulations and obtain prevalence estimates. The models used information from the 2001–2010 National Health and Nutrition Examination Survey, the 2010 Census, and the 2010 American Community Survey. We validated our results by comparing 1) estimates for adults in Georgia produced by our approach with estimates from the Centers for Disease Control and Prevention (CDC) and 2) estimates for children in Arkansas produced by our approach with school examination data. We generated prevalence estimates for census tracts in Georgia and prioritized areas for interventions. Results: In DeKalb County, the mean prevalence among census tracts varied from 27% to 40%. For adults, the median difference between our estimates and CDC estimates was 1.3 percentage points; for Arkansas children, the median difference between our estimates and examination-based estimates was 1.7 percentage points. Conclusion: Prevalence estimates for census tracts can differ from estimates for the county, so small-area estimates are crucial for designing effective interventions. Our approach validates well against external data, and it can be a relevant aid for planning local interventions for children. PMID:25764138
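
    A minimal sketch of the model-plus-simulation idea: a fitted individual-level logistic model is combined with a tract's demographic composition, and prevalence is obtained by simulating the outcome for each subgroup. The coefficients, covariates, and tract table below are hypothetical stand-ins for the NHANES-based model and Census/ACS inputs.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical coefficients of an individual-level logistic model of high
# BMI (fit elsewhere, e.g., to survey data): intercept, age, low-income flag.
beta = np.array([-1.6, 0.05, 0.40])

def p_high_bmi(age, low_income):
    z = beta[0] + beta[1] * age + beta[2] * low_income
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical census-tract composition: counts of children by age band
# and income status (the kind of table Census/ACS data would provide).
tract = [  # (age, low_income, count)
    (5, 1, 120), (5, 0, 200), (10, 1, 150),
    (10, 0, 230), (15, 1, 90), (15, 0, 180),
]
total = sum(n for _, _, n in tract)

# Simulation: draw each subgroup's case count and average over replicates.
prevalences = []
for _ in range(500):
    cases = sum(rng.binomial(n, p_high_bmi(age, inc)) for age, inc, n in tract)
    prevalences.append(cases / total)
print(f"estimated tract prevalence: {np.mean(prevalences):.1%}")
```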

  13. Three-dimensional Stochastic Estimation of Porosity Distribution: Benefits of Using Ground-penetrating Radar Velocity Tomograms in Simulated-annealing-based or Bayesian Sequential Simulation Approaches

    DTIC Science & Technology

    2012-05-30

    B. Dafflon and W. Barrash. Received 13 May 2011; revised 12 March 2012; accepted 17 April 2012. ...the withheld porosity log are also withheld for this estimation process. For both cases we do this for two wells having locally variable stratigraphy ...borehole location is given at the bottom of each log comparison panel. For comparison with stratigraphy at the BHRS, contacts between Units 1 to 4...

  14. Linear mixed model for heritability estimation that explicitly addresses environmental variation.

    PubMed

    Heckerman, David; Gurdasani, Deepti; Kadie, Carl; Pomilla, Cristina; Carstensen, Tommy; Martin, Hilary; Ekoru, Kenneth; Nsubuga, Rebecca N; Ssenyomo, Gerald; Kamali, Anatoli; Kaleebu, Pontiano; Widmer, Christian; Sandhu, Manjinder S

    2016-07-05

    The linear mixed model (LMM) is now routinely used to estimate heritability. Unfortunately, as we demonstrate, LMM estimates of heritability can be inflated when using a standard model. To help reduce this inflation, we used a more general LMM with two random effects: one based on genomic variants and one based on easily measured spatial location as a proxy for environmental effects. We investigated this approach with simulated data and with data from a Uganda cohort of 4,778 individuals for 34 phenotypes including anthropometric indices, blood factors, glycemic control, blood pressure, lipid tests, and liver function tests. For the genomic random effect, we used identity-by-descent estimates from accurately phased genome-wide data. For the environmental random effect, we constructed a covariance matrix based on a Gaussian radial basis function. Across the simulated and Ugandan data, narrow-sense heritability estimates were lower using the more general model. Thus, our approach addresses, in part, the issue of "missing heritability" in the sense that much of the heritability previously thought to be missing was fictional. Software is available at https://github.com/MicrosoftGenomics/FaST-LMM.
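
    A minimal sketch of the two-random-effect model, assuming synthetic data: the environmental covariance is a Gaussian radial basis function of pairwise distances between spatial locations, a random matrix stands in for the IBD-based genomic relationship, and the three variance components are estimated by maximizing the (unrestricted) Gaussian likelihood. This is not the FaST-LMM implementation.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import cdist

rng = np.random.default_rng(6)
n = 300
coords = rng.uniform(0, 100, size=(n, 2))      # spatial locations (environment proxy)

# Environmental covariance from a Gaussian radial basis function.
length = 15.0
K_env = np.exp(-cdist(coords, coords) ** 2 / (2 * length ** 2))

# Stand-in genomic relationship matrix (would be IBD-based in practice).
Z = rng.standard_normal((n, 500))
K_gen = Z @ Z.T / 500

# Simulate a phenotype with both genetic and environmental structure.
y = rng.multivariate_normal(np.zeros(n), 0.5 * K_gen + 0.3 * K_env + 0.2 * np.eye(n))

def neg_loglik(log_s2):                        # variance components, log scale
    s2g, s2e, s2n = np.exp(log_s2)
    V = s2g * K_gen + s2e * K_env + s2n * np.eye(n)
    _, logdet = np.linalg.slogdet(V)
    return 0.5 * (logdet + y @ np.linalg.solve(V, y))

res = minimize(neg_loglik, np.log([0.3, 0.3, 0.3]), method="Nelder-Mead")
s2g, s2e, s2n = np.exp(res.x)
print("narrow-sense h2 estimate:", round(s2g / (s2g + s2e + s2n), 2))
```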

  15. Human Pose Estimation from Monocular Images: A Comprehensive Survey

    PubMed Central

    Gong, Wenjuan; Zhang, Xuena; Gonzàlez, Jordi; Sobral, Andrews; Bouwmans, Thierry; Tu, Changhe; Zahzah, El-hadi

    2016-01-01

    Human pose estimation refers to the estimation of the location of body parts and how they are connected in an image. Human pose estimation from monocular images has wide applications (e.g., image indexing). Several surveys on human pose estimation can be found in the literature, but each focuses on a particular category, for example, model-based approaches or human motion analysis. As far as we know, an overall review of this problem domain has yet to be provided. Furthermore, recent advancements based on deep learning have brought novel algorithms for this problem. In this paper, a comprehensive survey of human pose estimation from monocular images is carried out, including milestone works and recent advancements. Based on one standard pipeline for the solution of computer vision problems, this survey splits the problem into several modules: feature extraction and description, human body models, and modeling methods. Problem modeling methods are organized according to two categorizations: top-down versus bottom-up methods, and generative versus discriminative methods. Considering that one direct application of human pose estimation is to provide initialization for automatic video surveillance, there are additional sections for motion-related methods in all modules: motion features, motion models, and motion-based methods. Finally, the paper also collects 26 publicly available data sets for validation and provides error measurement methods that are frequently used. PMID:27898003

  16. Comparing estimates of climate change impacts from process-based and statistical crop models

    NASA Astrophysics Data System (ADS)

    Lobell, David B.; Asseng, Senthold

    2017-01-01

    The potential impacts of climate change on crop productivity are of widespread interest to those concerned with addressing climate change and improving global food security. Two common approaches to assess these impacts are process-based simulation models, which attempt to represent key dynamic processes affecting crop yields, and statistical models, which estimate functional relationships between historical observations of weather and yields. Examples of both approaches are increasingly found in the scientific literature, although often published in different disciplinary journals. Here we compare published sensitivities to changes in temperature, precipitation, carbon dioxide (CO2), and ozone from each approach for the subset of crops, locations, and climate scenarios for which both have been applied. Despite a common perception that statistical models are more pessimistic, we find no systematic differences between the predicted sensitivities to warming from process-based and statistical models up to +2 °C, with limited evidence at higher levels of warming. For precipitation, there are many reasons why estimates could be expected to differ, but few estimates exist to develop robust comparisons, and precipitation changes are rarely the dominant factor for predicting impacts given the prominent role of temperature, CO2, and ozone changes. A common difference between process-based and statistical studies is that the former tend to include the effects of CO2 increases that accompany warming, whereas statistical models typically do not. Major needs moving forward include incorporating CO2 effects into statistical studies, improving both approaches’ treatment of ozone, and increasing the use of both methods within the same study. At the same time, those who fund or use crop model projections should understand that in the short-term, both approaches when done well are likely to provide similar estimates of warming impacts, with statistical models generally requiring fewer resources to produce robust estimates, especially when applied to crops beyond the major grains.

  17. Reparametrization-based estimation of genetic parameters in multi-trait animal model using Integrated Nested Laplace Approximation.

    PubMed

    Mathew, Boby; Holand, Anna Marie; Koistinen, Petri; Léon, Jens; Sillanpää, Mikko J

    2016-02-01

    A novel reparametrization-based INLA approach is presented as a fast alternative to MCMC for the Bayesian estimation of genetic parameters in multivariate animal models. Multi-trait genetic parameter estimation is a relevant topic in animal and plant breeding programs because multi-trait analysis can take into account the genetic correlations between different traits, which significantly improves the accuracy of the genetic parameter estimates. Generally, multi-trait analysis is computationally demanding and requires initial estimates of the genetic and residual correlations among the traits, which are difficult to obtain. In this study, we illustrate how to reparametrize the covariance matrices of a multivariate animal model using modified Cholesky decompositions. This reparametrization-based approach is used in the Integrated Nested Laplace Approximation (INLA) methodology to estimate the genetic parameters of the multivariate animal model. The immediate benefits are: (1) avoiding the difficulty of finding good starting values for the analysis, which can be a problem, for example, in Restricted Maximum Likelihood (REML); and (2) Bayesian estimation of (co)variance components using INLA is faster to execute than Markov Chain Monte Carlo (MCMC), especially when realized relationship matrices are dense. A slight drawback is that priors for covariance matrices are assigned to elements of the Cholesky factor rather than directly to the covariance matrix elements as in MCMC. Additionally, we illustrate the concordance of the INLA results with traditional methods such as MCMC and REML. We also present results obtained from simulated data sets with replicates and from field data in rice.
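
    The modified Cholesky reparametrization can be sketched as follows: a covariance matrix is represented by the logs of the diagonal of its Cholesky factor plus the unconstrained lower-triangular elements, so that any real-valued parameter vector maps to a valid positive-definite matrix. The authors' exact parametrization may differ in detail.

```python
import numpy as np

def theta_to_cov(theta, k):
    """Map an unconstrained vector to a k x k covariance matrix.

    The first k entries are log-diagonal elements of the Cholesky factor
    (exponentiated to stay positive); the rest fill the strict lower
    triangle. Any real theta then yields a valid positive-definite matrix.
    """
    L = np.zeros((k, k))
    L[np.diag_indices(k)] = np.exp(theta[:k])
    L[np.tril_indices(k, -1)] = theta[k:]
    return L @ L.T

def cov_to_theta(S):
    """Inverse map: covariance matrix -> unconstrained parameter vector."""
    L = np.linalg.cholesky(S)
    k = S.shape[0]
    return np.concatenate([np.log(np.diag(L)), L[np.tril_indices(k, -1)]])

# Round trip for a hypothetical 3-trait genetic covariance matrix.
S = np.array([[1.0, 0.4, 0.2],
              [0.4, 2.0, 0.6],
              [0.2, 0.6, 1.5]])
theta = cov_to_theta(S)
print(np.allclose(theta_to_cov(theta, 3), S))  # True
```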

  18. Inferring the source of evaporated waters using stable H and O isotopes

    NASA Astrophysics Data System (ADS)

    Bowen, G. J.; Putman, A.; Brooks, J. R.; Bowling, D. R.; Oerter, E.; Good, S. P.

    2017-12-01

    Stable isotope ratios of H and O are widely used to identify the source of water, e.g., in aquifers, river runoff, soils, plant xylem, and plant-based beverages. In situations where the sampled water is partially evaporated, its isotope values will have evolved along an evaporation line (EL) in δ2H/δ18O space, and back-correction along the EL to its intersection with a meteoric water line (MWL) has been used to estimate the source water's isotope ratios. Several challenges and potential pitfalls exist with traditional approaches to this problem, including the potential for bias from a commonly used regression-based approach to EL slope estimation and incomplete estimation of uncertainty in most studies. We suggest the value of a model-based approach to EL estimation, and introduce a mathematical framework that eliminates the need to explicitly estimate the EL-MWL intersection, simplifying analysis and facilitating more rigorous uncertainty estimation. We apply this analysis framework to data from 1,000 lakes sampled in EPA's 2007 National Lakes Assessment. We find that the data for most lakes are consistent with a water source similar to annual runoff, estimated from monthly precipitation and evaporation within the lake basin. Strong evidence for both summer- and winter-biased sources exists, however, with winter bias pervasive in most snow-prone regions. The new analytical framework should improve the rigor of source-water inference from evaporated samples in ecohydrology and related sciences, and our initial results from U.S. lakes suggest that previous interpretations of lakes as unbiased isotope integrators may only be valid in certain climate regimes.
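
    For reference, the traditional back-correction that this framework improves upon can be written in a few lines: the evaporation line through the sample, with a slope supplied by an evaporation model, is intersected with the meteoric water line. The sketch below assumes the global MWL (δ2H = 8 δ18O + 10) and a hypothetical EL slope.

```python
def el_mwl_intersection(d18o, d2h, el_slope, mwl_slope=8.0, mwl_intercept=10.0):
    """Back-correct an evaporated sample to its source on the meteoric water line.

    The evaporation line (EL) passes through the sample with a slope from an
    evaporation model; the source is its intersection with the MWL.
    Solves: mwl_slope*x + mwl_intercept = d2h + el_slope*(x - d18o).
    """
    x = (d2h - el_slope * d18o - mwl_intercept) / (mwl_slope - el_slope)
    return x, mwl_slope * x + mwl_intercept

# Example: an evaporated lake sample and a typical EL slope near 5.
src_d18o, src_d2h = el_mwl_intersection(d18o=-2.0, d2h=-30.0, el_slope=5.0)
print(f"inferred source: d18O = {src_d18o:.2f} permil, d2H = {src_d2h:.1f} permil")
```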

  19. The economic value of life: linking theory to practice.

    PubMed Central

    Landefeld, J S; Seskin, E P

    1982-01-01

    Human capital estimates of the economic value of life have been routinely used in the past to perform cost-benefit analyses of health programs. Recently, however, serious questions have been raised concerning the conceptual basis for valuing human life by applying these estimates. Most economists writing on these issues tend to agree that a more conceptually correct method to value risks to human life in cost-benefit analyses would be based on individuals' "willingness to pay" for small changes in their probability of survival. Attempts to implement the willingness-to-pay approach using survey responses or revealed-preference estimates have produced a confusing array of values fraught with statistical problems and measurement difficulties. As a result, economists have searched for a link between willingness to pay and standard human capital estimates and have found that for most individuals a lower bound for valuing risks to life can be based on their willingness to pay to avoid the expected economic losses associated with death. However, while these studies provide support for using individuals' private valuation of forgone income in valuing risks to life, it is also clear that standard human capital estimates cannot be used for this purpose without reformulation. After reviewing the major approaches to valuing risks to life, this paper concludes that estimates based on the human capital approach--reformulated using a willingness-to-pay criterion--produce the only clear, consistent, and objective values for use in cost-benefit analyses of policies affecting risks to life. The paper presents the first empirical estimates of such adjusted willingness-to-pay/human capital values. PMID:6803602

  20. 10 CFR 603.560 - Estimate of project expenditures.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    10 CFR 603.560 (2010): Estimate of project expenditures. Department of Energy (Continued), Assistance Regulations, Technology Investment Agreements, Pre-Award Business Evaluation, Fixed-Support or Expenditure-Based Approach, § 603.560.

  1. A new statistical approach to climate change detection and attribution

    NASA Astrophysics Data System (ADS)

    Ribes, Aurélien; Zwiers, Francis W.; Azaïs, Jean-Marc; Naveau, Philippe

    2017-01-01

    We propose here a new statistical approach to climate change detection and attribution that is based on additive decomposition and simple hypothesis testing. Most current statistical methods for detection and attribution rely on linear regression models where the observations are regressed onto expected response patterns to different external forcings. These methods do not use physical information provided by climate models regarding the expected response magnitudes to constrain the estimated responses to the forcings. Climate modelling uncertainty is difficult to take into account with regression-based methods and is almost never treated explicitly. As an alternative to this approach, our statistical model is based only on the additivity assumption; the proposed method does not regress observations onto expected response patterns. We introduce estimation and testing procedures based on likelihood maximization, and show that climate modelling uncertainty can easily be accounted for. Some discussion is provided on how to practically estimate the climate modelling uncertainty based on an ensemble of opportunity. Our approach is based on the "models are statistically indistinguishable from the truth" paradigm, where the difference between any given model and the truth has the same distribution as the difference between any pair of models, but other choices might also be considered. The properties of this approach are illustrated and discussed based on synthetic data. Lastly, the method is applied to the linear trend in global mean temperature over the period 1951-2010. Consistent with the last IPCC assessment report, we find that most of the observed warming over this period (+0.65 K) is attributable to anthropogenic forcings (+0.67 ± 0.12 K, 90% confidence range), with a very limited contribution from natural forcings (-0.01 ± 0.02 K).

  2. Historical HIV incidence modelling in regional subgroups: use of flexible discrete models with penalized splines based on prior curves.

    PubMed

    Greenland, S

    1996-03-15

    This paper presents an approach to back-projection (back-calculation) of human immunodeficiency virus (HIV) person-year infection rates in regional subgroups, based on combining a log-linear model for subgroup differences with a penalized spline model for trends. The penalized spline approach allows flexible trend estimation but requires far fewer parameters than fully nonparametric smoothers, thus saving parameters that can be used in estimating subgroup effects. Use of a reasonable prior curve to construct the penalty function minimizes the degree of smoothing needed beyond model specification. The approach is illustrated in an application to acquired immunodeficiency syndrome (AIDS) surveillance data from Los Angeles County.
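
    A minimal sketch of a penalized fit that smooths toward a prior curve rather than toward zero, assuming a ridge-type penalty on the deviation of B-spline coefficients from those of the prior curve (the paper's back-projection likelihood is replaced by simple least squares here, and all data are synthetic):

```python
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(7)
t_obs = np.linspace(0.1, 14.9, 60)                  # calendar time (years)
y = np.exp(-(t_obs - 7.0) ** 2 / 8.0) + 0.05 * rng.standard_normal(60)

# Cubic B-spline basis (scipy >= 1.8 provides BSpline.design_matrix).
knots = np.concatenate([[0.0] * 4, np.linspace(2, 13, 10), [15.0] * 4])
n_basis = len(knots) - 4
B = BSpline.design_matrix(t_obs, knots, 3).toarray()

# Prior curve (e.g., from an earlier analysis), expressed as B-spline
# coefficients; here a smooth bump is a stand-in.
prior_coef = np.exp(-(np.linspace(0, 15, n_basis) - 6.0) ** 2 / 10.0)

# Penalized least squares: ||y - B c||^2 + lam ||c - prior_coef||^2,
# i.e., smoothing toward the prior curve instead of toward zero.
lam = 1.0
c = np.linalg.solve(B.T @ B + lam * np.eye(n_basis),
                    B.T @ y + lam * prior_coef)
fit = B @ c
print("residual RMS:", round(float(np.sqrt(np.mean((y - fit) ** 2))), 3))
```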

  3. Sensorless Load Torque Estimation and Passivity Based Control of Buck Converter Fed DC Motor

    PubMed Central

    Kumar, S. Ganesh; Thilagar, S. Hosimin

    2015-01-01

    Passivity-based control of a DC motor in a sensorless configuration is proposed in this paper. Exact tracking error dynamics passive output feedback control is used to stabilize the speed of a buck converter fed DC motor under various load torques, such as constant, fan-type, propeller-type, and unknown load torques. Under load conditions, a sensorless online algebraic approach is proposed and compared with a sensorless reduced-order observer approach; the former produces a better response in estimating the load torque. A sensitivity analysis is also performed to select the appropriate control variables. Simulation and experimental results fully confirm the superiority of the proposed approach. PMID:25893208

  4. Side-by-side ANFIS as a useful tool for estimating correlated thermophysical properties

    NASA Astrophysics Data System (ADS)

    Grieu, Stéphane; Faugeroux, Olivier; Traoré, Adama; Claudet, Bernard; Bodnar, Jean-Luc

    2015-12-01

    In the present paper, an artificial intelligence-based approach to the estimation of correlated thermophysical properties is designed and evaluated. This new and "intelligent" approach makes use of photothermal responses obtained when homogeneous materials are subjected to a light flux. Commonly, gradient-based algorithms are used as parameter estimation techniques. Unfortunately, such algorithms show instabilities leading to non-convergence when correlated properties are to be estimated from a rebuilt impulse response. The main objective of the present work was therefore to simultaneously estimate both the thermal diffusivity and the thermal conductivity of homogeneous materials from front-face or rear-face photothermal responses to pseudo-random binary signals. To this end, we used side-by-side neuro-fuzzy systems (adaptive network-based fuzzy inference systems) trained with a hybrid algorithm. We focused on the impact on generalization of both the examples used during training and the fuzzification process. In addition, computation time was a key point to consider. The developed algorithm is computationally tractable and allows both the thermal diffusivity and conductivity of homogeneous materials to be estimated simultaneously with very good accuracy (the generalization error ranges between 4.6% and 6.2%).

  5. Improving PERSIANN-CCS rain estimation using probabilistic approach and multi-sensors information

    NASA Astrophysics Data System (ADS)

    Karbalaee, N.; Hsu, K. L.; Sorooshian, S.; Kirstetter, P.; Hong, Y.

    2016-12-01

    This presentation discusses recently implemented approaches to improving the rainfall estimates from Precipitation Estimation from Remotely Sensed Information using Artificial Neural Network-Cloud Classification System (PERSIANN-CCS). PERSIANN-CCS is an infrared (IR) based algorithm being integrated into IMERG (Integrated Multi-satellite Retrievals for the Global Precipitation Measurement mission, GPM) to create a precipitation product at 0.1 x 0.1 degree resolution over the domain 50N to 50S every 30 minutes. Although PERSIANN-CCS has high spatial and temporal resolution, it overestimates or underestimates rainfall due to some limitations. PERSIANN-CCS estimates rainfall from information extracted from IR channels at three temperature threshold levels (220, 235, and 253 K). Because the algorithm relies only on infrared data, which estimate rainfall indirectly, it can miss rainfall from warm clouds and falsely assign rainfall to non-precipitating cold clouds. In this research, the effectiveness of using other channels of the GOES satellites, such as the visible and water vapor channels, has been investigated: with multiple sensors, precipitation can be estimated from the information extracted from multiple channels. Also, instead of using an exponential function to estimate rainfall from cloud-top temperature, a probabilistic method has been used. Using probability distributions of precipitation rates instead of deterministic values has improved the rainfall estimation for different types of clouds.

  6. Estimating Geocenter Motion and Changes in the Earth's Dynamic Oblateness from a Statistically Optimal Combination of GRACE Data and Geophysical Models

    NASA Astrophysics Data System (ADS)

    Sun, Y.; Ditmar, P.; Riva, R.

    2016-12-01

    Time-varying gravity field solutions of the GRACE satellite mission have enabled observation of Earth's mass transport on a monthly basis since 2002. One of the remaining challenges is how to complement these solutions with sufficiently accurate estimates of very low-degree spherical harmonic coefficients, particularly the degree-1 coefficients and C20. An absence or inaccurate estimation of these coefficients may result in strong biases in mass transport estimates. Variations in the degree-1 coefficients reflect geocenter motion, and variations in the C20 coefficient describe changes in the Earth's dynamic oblateness (ΔJ2). In this study, we developed a novel methodology to estimate monthly variations in the degree-1 and C20 coefficients by combining GRACE data with oceanic mass anomalies (the combination approach). Unlike the method of Swenson et al. (2008), the proposed approach exploits noise covariance information from both input datasets and thus produces stochastically optimal solutions. A numerical simulation study is carried out to verify the correctness and performance of the proposed approach. We demonstrate that solutions obtained with the proposed approach have a significantly higher quality than those from the method of Swenson et al. Finally, we apply the proposed approach to real monthly GRACE solutions. To evaluate the obtained results, we calculate mass transport time series over selected regions where minimal mass anomalies are expected. A clear reduction in the RMS of the mass transport time series (more than 50%) is observed there when the degree-1 and C20 coefficients obtained with the proposed approach are used; in particular, the seasonal pattern in the mass transport time series disappears almost entirely. The traditional approach (degree-1 coefficients based on Swenson et al. (2008) and C20 based on SLR data), in contrast, does not reduce that RMS or even increases it (e.g., over the Sahara desert). We further show that the degree-1 variations play a major role in the observed improvement. At the same time, the C20 solutions obtained with the combination approach yield a similar accuracy of mass anomaly estimates to results based on SLR analysis. The computed degree-1 and C20 coefficients will be made publicly available.
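
    The statistically optimal combination step can be sketched as generalized least squares: each dataset contributes normal equations weighted by the inverse of its noise covariance, and the stacked system is solved for the common parameters. The toy dimensions and operators below are hypothetical stand-ins for the GRACE and ocean-model inputs.

```python
import numpy as np

rng = np.random.default_rng(8)

# Unknowns x: a toy set of low-degree coefficients (e.g., degree-1 terms and C20).
x_true = np.array([2.0, -1.0, 0.5])

# Two datasets with different observation operators and noise covariances.
A1 = rng.standard_normal((40, 3)); C1 = 0.10 * np.eye(40)
A2 = rng.standard_normal((25, 3)); C2 = 0.05 * np.eye(25)
y1 = A1 @ x_true + rng.multivariate_normal(np.zeros(40), C1)
y2 = A2 @ x_true + rng.multivariate_normal(np.zeros(25), C2)

# Statistically optimal combination: stack normal equations weighted by
# the inverse noise covariances (generalized least squares).
N = A1.T @ np.linalg.solve(C1, A1) + A2.T @ np.linalg.solve(C2, A2)
b = A1.T @ np.linalg.solve(C1, y1) + A2.T @ np.linalg.solve(C2, y2)
x_hat = np.linalg.solve(N, b)
cov_x = np.linalg.inv(N)                 # formal error covariance of the estimate

print("estimate:", np.round(x_hat, 3),
      "sigmas:", np.round(np.sqrt(np.diag(cov_x)), 3))
```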

  7. Applications of physiological bases of ageing to forensic sciences. Estimation of age-at-death.

    PubMed

    C Zapico, Sara; Ubelaker, Douglas H

    2013-03-01

    Age-at-death estimation is one of the main challenges in the forensic sciences, since it contributes to the identification of individuals. There are many anthropological techniques to estimate the age at death in children and adults. In adults, however, this methodology is less accurate and requires population-specific references. For that reason, new methodologies have been developed. Biochemical methods are based on the natural process of ageing, which induces biochemical changes that lead to alterations in cells and tissues. In this review, we describe different attempts to estimate age in adults based on these changes. Chemical approaches rely on modifications in molecules or the accumulation of certain products. Molecular biology approaches analyze modifications in DNA and chromosomes. Although the most accurate technique appears to be aspartic acid racemization, it is important to take the other techniques into account, because the forensic context and the human remains available will determine which methodology can be applied. Copyright © 2013 Elsevier B.V. All rights reserved.

  8. Simultaneous versus sequential optimal experiment design for the identification of multi-parameter microbial growth kinetics as a function of temperature.

    PubMed

    Van Derlinden, E; Bernaerts, K; Van Impe, J F

    2010-05-21

    Optimal experiment design for parameter estimation (OED/PE) has become a popular tool for efficient and accurate estimation of kinetic model parameters. When the kinetic model under study contains multiple parameters, different optimization strategies can be constructed. The most straightforward approach is to estimate all parameters simultaneously from one optimal experiment (single OED/PE strategy). However, due to the complexity of the optimization problem or stringent limitations on the system's dynamics, the experimental information can be limited and parameter estimation convergence problems can arise. As an alternative, we propose to reduce the optimization problem to a series of two-parameter estimation problems, i.e., an optimal experiment is designed for a combination of two parameters while presuming the other parameters known. Two different approaches can be followed: (i) all two-parameter optimal experiments are designed based on identical initial parameter estimates and the parameters are estimated simultaneously from all resulting experimental data (global OED/PE strategy), and (ii) optimal experiments are calculated and implemented sequentially, with the parameter values updated at intermediate stages (sequential OED/PE strategy). This work exploits OED/PE for the identification of the Cardinal Temperature Model with Inflection (CTMI) (Rosso et al., 1993). This kinetic model describes the effect of temperature on the microbial growth rate and contains four parameters. The three OED/PE strategies are considered and the impact of the design strategy on the accuracy of the CTMI parameter estimation is evaluated. Based on a simulation study, it is observed that the parameter values derived from the sequential approach deviate more from the true parameters than the single and global strategy estimates. The single and global OED/PE strategies are further compared based on experimental data obtained from design implementation in a bioreactor. Comparable estimates are obtained, but global OED/PE estimates are, in general, more accurate and reliable. Copyright (c) 2010 Elsevier Ltd. All rights reserved.

  9. Comparison of screening-level and Monte Carlo approaches for wildlife food web exposure modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pastorok, R.; Butcher, M.; LaTier, A.

    1995-12-31

    The implications of using quantitative uncertainty analysis (e.g., Monte Carlo) and site-specific tissue residue data for wildlife exposure modeling were examined with data on trace elements at the Clark Fork River Superfund Site. Exposure of white-tailed deer, red fox, and American kestrel was evaluated using three approaches. First, a screening-level exposure model was based on conservative estimates of exposure parameters, including estimates of dietary residues derived from bioconcentration factors (BCFs) and soil chemistry. A second model without Monte Carlo was based on site-specific data for tissue residues of trace elements (As, Cd, Cu, Pb, Zn) in key dietary species and plausible assumptions for habitat spatial segmentation and other exposure parameters. Dietary species sampled included dominant grasses (tufted hairgrass and redtop), willows, alfalfa, barley, invertebrates (grasshoppers, spiders, and beetles), and deer mice. Third, the Monte Carlo analysis was based on the site-specific residue data and assumed or estimated distributions for exposure parameters. Substantial uncertainties are associated with several exposure parameters, especially BCFs, such that exposure and risk may be greatly overestimated in screening-level approaches. The results of the three approaches are compared with respect to realism, practicality, and data gaps. Collection of site-specific data on trace element concentrations in plants and animals eaten by the target wildlife receptors is a cost-effective way to obtain realistic estimates of exposure. Implications of the results for exposure and risk estimates are discussed relative to the use of wildlife exposure modeling and the evaluation of remedial actions at Superfund sites.
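
    A minimal sketch of the Monte Carlo approach (the third model), assuming hypothetical receptor parameters and residue distributions: tissue-residue concentrations and exposure parameters are sampled from distributions, and the daily dose is computed as the residue-weighted dietary intake divided by body weight.

```python
import numpy as np

rng = np.random.default_rng(9)
n = 10_000  # Monte Carlo iterations

# Hypothetical tissue-residue concentrations (mg/kg) of a trace element in
# dietary items, as lognormal distributions fit to site-specific samples.
c_grass = rng.lognormal(mean=np.log(12.0), sigma=0.6, size=n)
c_insect = rng.lognormal(mean=np.log(4.0), sigma=0.8, size=n)

# Exposure parameters for a hypothetical receptor (e.g., a small mammal).
frac_grass = rng.uniform(0.6, 0.9, size=n)        # diet fraction from grass
intake = rng.normal(0.15, 0.02, size=n)           # food intake (kg/day)
bw = rng.normal(1.0, 0.1, size=n)                 # body weight (kg)

# Daily dose (mg/kg-bw/day) = sum_i C_i * dietary fraction_i * intake / BW.
dose = (c_grass * frac_grass + c_insect * (1 - frac_grass)) * intake / bw

print("median dose:", round(float(np.median(dose)), 2))
print("95th percentile dose:", round(float(np.percentile(dose, 95)), 2))
```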

  10. Modeling trends from North American Breeding Bird Survey data: a spatially explicit approach

    USGS Publications Warehouse

    Bled, Florent; Sauer, John R.; Pardieck, Keith L.; Doherty, Paul; Royle, J. Andy

    2013-01-01

    Population trends, defined as interval-specific proportional changes in population size, are often used to help identify species of conservation interest. Efficient modeling of such trends depends on the consideration of the correlation of population changes with key spatial and environmental covariates. This can provide insights into causal mechanisms and allow spatially explicit summaries at scales that are of interest to management agencies. We expand the hierarchical modeling framework used in the North American Breeding Bird Survey (BBS) by developing a spatially explicit model of temporal trend using a conditional autoregressive (CAR) model. By adopting a formal spatial model for abundance, we produce spatially explicit abundance and trend estimates. Analyses based on large-scale geographic strata such as Bird Conservation Regions (BCR) can suffer from basic imbalances in spatial sampling. Our approach addresses this issue by providing an explicit weighting based on the fundamental sample allocation unit of the BBS. We applied the spatial model to three species from the BBS, chosen for their well-known population change patterns, which allows us to evaluate the quality of our model and the biological meaning of our estimates. We also compare our results with those obtained for BCRs using a nonspatial hierarchical model (Sauer and Link 2011). Overall, estimates of mean trends are consistent between the two approaches, but the spatial model provides much more precise trend estimates in regions on the edges of species ranges that were poorly estimated in the nonspatial analyses. Incorporating a spatial component in the analysis not only yields relevant and biologically meaningful estimates of population trends, but also provides a flexible framework for obtaining trend estimates for any area.
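
    The CAR idea can be sketched with an intrinsic CAR (ICAR) prior, whose precision matrix is built from the neighborhood adjacency structure and which shrinks each stratum's trend toward the mean of its neighbors; the grid, adjacency, and variances below are hypothetical, and the BBS model adds an observation model and covariates on top of this.

```python
import numpy as np

rng = np.random.default_rng(13)

# Toy spatial layout: strata on a 10 x 10 grid with rook adjacency.
n_side = 10
n = n_side * n_side
W = np.zeros((n, n))
for r in range(n_side):
    for c in range(n_side):
        i = r * n_side + c
        for dr, dc in [(1, 0), (0, 1)]:
            rr, cc = r + dr, c + dc
            if rr < n_side and cc < n_side:
                j = rr * n_side + cc
                W[i, j] = W[j, i] = 1
D = np.diag(W.sum(axis=1))

# Intrinsic CAR precision for the trend vector: Q = tau * (D - W).
tau = 4.0
Q = tau * (D - W)

# Smooth noisy stratum-level trend estimates toward their neighbors:
# posterior mode of N(trend_hat | beta, s2 I) combined with the CAR prior.
trend_hat = 0.02 * (np.arange(n) % n_side) + 0.05 * rng.standard_normal(n)
s2 = 0.05 ** 2
beta = np.linalg.solve(np.eye(n) / s2 + Q, trend_hat / s2)
print("raw sd %.3f -> smoothed sd %.3f" % (trend_hat.std(), beta.std()))
```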

  11. Causes of systematic over- or underestimation of low streamflows by use of index-streamgage approaches in the United States

    USGS Publications Warehouse

    Eng, K.; Kiang, J.E.; Chen, Y.-Y.; Carlisle, D.M.; Granato, G.E.

    2011-01-01

    Low-flow characteristics can be estimated by multiple linear regressions or the index-streamgage approach. The latter transfers streamflow information from a hydrologically similar, continuously gaged basin ('index streamgage') to one with a very limited streamflow record, but often results in biased estimates. The application of the index-streamgage approach can be generalized into three steps: (1) selection of streamflow information of interest, (2) definition of hydrologic similarity and selection of index streamgage, and (3) application of an information-transfer approach. Here, we explore the effects of (1) the range of streamflow values, (2) the areal density of streamgages, and (3) index-streamgage selection criteria on the bias of three information-transfer approaches on estimates of the 7-day, 10-year minimum streamflow (Q7,10). The three information-transfer approaches considered are maintenance of variance extension, base-flow correlation, and ratio of measured to concurrent gaged streamflow (Q-ratio invariance). Our results for 1120 streamgages throughout the United States suggest that only a small portion of the total bias in estimated streamflow values is explained by the areal density of the streamgages and the hydrologic similarity between the two basins. However, restricting the range of streamflow values used in the index-streamgage approach reduces the bias of estimated Q7,10 values substantially. Importantly, estimated Q7,10 values are heavily biased when the observed Q7,10 values are near zero. Results of the analysis also showed that Q7,10 estimates from two of the three index-streamgage approaches have lower root-mean-square error values than estimates derived from multiple regressions for the large regions considered in this study.
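
    Of the three approaches, maintenance of variance extension (MOVE.1) is easily sketched: the short record is extended from the index streamgage by matching means and standard deviations rather than by regression, which would shrink the variance of the transferred values. The records below are synthetic, in log space as is typical for low flows; the Q7,10 would then be computed from the extended series.

```python
import numpy as np

rng = np.random.default_rng(10)

# Concurrent low-flow observations (log space) at the index streamgage (x)
# and the short-record site (y), plus the index gage's long record.
x_short = rng.normal(2.0, 0.4, size=12)
y_short = 0.8 * x_short + 0.3 + rng.normal(0, 0.1, size=12)
x_long = rng.normal(2.0, 0.4, size=60)

# MOVE.1: match mean and variance rather than regressing (which would
# shrink the variance of the transferred values).
slope = np.std(y_short, ddof=1) / np.std(x_short, ddof=1)
sign = np.sign(np.corrcoef(x_short, y_short)[0, 1])
y_extended = y_short.mean() + sign * slope * (x_long - x_short.mean())

print("transferred record: mean %.2f, sd %.2f (log units)"
      % (y_extended.mean(), y_extended.std(ddof=1)))
```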

  12. Taking the Missing Propensity Into Account When Estimating Competence Scores

    PubMed Central

    Pohl, Steffi; Carstensen, Claus H.

    2014-01-01

    When competence tests are administered, subjects frequently omit items. These missing responses pose a threat to correctly estimating the proficiency level. Newer model-based approaches aim to take nonignorable missing data processes into account by incorporating a latent missing propensity into the measurement model. Two assumptions are typically made when using these models: (1) The missing propensity is unidimensional and (2) the missing propensity and the ability are bivariate normally distributed. These assumptions may, however, be violated in real data sets and could, thus, pose a threat to the validity of this approach. The present study focuses on modeling competencies in various domains, using data from a school sample (N = 15,396) and an adult sample (N = 7,256) from the National Educational Panel Study. Our interest was to investigate whether violations of unidimensionality and the normal distribution assumption severely affect the performance of the model-based approach in terms of differences in ability estimates. We propose a model with a competence dimension, a unidimensional missing propensity and a distributional assumption more flexible than a multivariate normal. Using this model for ability estimation results in different ability estimates compared with a model ignoring missing responses. Implications for ability estimation in large-scale assessments are discussed. PMID:29795844

  13. An ecoregional approach to the economic valuation of land- and water-based recreation in the United States

    Treesearch

    Gajana Bhat; John Bergsrom; R. Jeff Teasley

    1998-01-01

    This paper describes a framework for estimating the economic value of outdoor recreation across different ecoregions. Ten ecoregions in the continental United States were defined based on similarly functioning ecosystem characters. The individual travel cost method was employed to estimate recreation demand functions for activities such...

  14. The predictive validity of quality of evidence grades for the stability of effect estimates was low: a meta-epidemiological study.

    PubMed

    Gartlehner, Gerald; Dobrescu, Andreea; Evans, Tammeka Swinson; Bann, Carla; Robinson, Karen A; Reston, James; Thaler, Kylie; Skelly, Andrea; Glechner, Anna; Peterson, Kimberly; Kien, Christina; Lohr, Kathleen N

    2016-02-01

    To determine the predictive validity of the U.S. Evidence-based Practice Center (EPC) approach to GRADE (Grading of Recommendations Assessment, Development and Evaluation). Based on Cochrane reports with outcomes graded as high quality of evidence (QOE), we prepared 160 documents that represented different levels of QOE. Professional systematic reviewers dually graded the QOE. For each document, we determined whether estimates were concordant with high QOE estimates of the Cochrane reports. We compared the observed proportion of concordant estimates with the expected proportion from an international survey. To determine the predictive validity, we used the Hosmer-Lemeshow test to assess calibration and the C (concordance) index to assess discrimination. The predictive validity of the EPC approach to GRADE was limited. Estimates graded as high QOE were less likely, and estimates graded as low or insufficient QOE more likely, to remain stable than expected. The EPC approach to GRADE could not reliably predict the likelihood that individual bodies of evidence remain stable as new evidence becomes available. C-indices ranged between 0.56 (95% CI, 0.47 to 0.66) and 0.58 (95% CI, 0.50 to 0.67), indicating a low discriminatory ability. The limited predictive validity of the EPC approach to GRADE seems to reflect a mismatch between expected and observed changes in treatment effects as bodies of evidence advance from insufficient to high QOE. Copyright © 2016 Elsevier Inc. All rights reserved.

  15. Cover estimation and payload location using Markov random fields

    NASA Astrophysics Data System (ADS)

    Quach, Tu-Thach

    2014-02-01

    Payload location is an approach to find the message bits hidden in steganographic images, but not necessarily their logical order. Its success relies primarily on the accuracy of the underlying cover estimators and can be improved if more estimators are used. This paper presents an approach based on Markov random field to estimate the cover image given a stego image. It uses pairwise constraints to capture the natural two-dimensional statistics of cover images and forms a basis for more sophisticated models. Experimental results show that it is competitive against current state-of-the-art estimators and can locate payload embedded by simple LSB steganography and group-parity steganography. Furthermore, when combined with existing estimators, payload location accuracy improves significantly.

  16. A Metastatistical Approach to Satellite Estimates of Extreme Rainfall Events

    NASA Astrophysics Data System (ADS)

    Zorzetto, E.; Marani, M.

    2017-12-01

    The estimation of the average recurrence interval of intense rainfall events is a central issue for both hydrologic modeling and engineering design. These estimates require the inference of the properties of the right tail of the statistical distribution of precipitation, a task often performed using the Generalized Extreme Value (GEV) distribution, estimated either from samples of annual maxima (AM) or with a peaks-over-threshold (POT) approach. However, these approaches require long and homogeneous rainfall records, which often are not available, especially in the case of remotely sensed rainfall datasets. Here we use an alternative approach, tailored to remotely sensed rainfall estimates and based on the metastatistical extreme value distribution (MEVD), which produces estimates of rainfall extremes from the probability distribution function (pdf) of all measured 'ordinary' rainfall events. This methodology also accounts for the interannual variations observed in the pdf of daily rainfall by integrating over the sample space of its random parameters. We illustrate the application of this framework to the TRMM Multi-satellite Precipitation Analysis rainfall dataset, where the MEVD optimally exploits the relatively short datasets of satellite-sensed rainfall while taking full advantage of their high spatial resolution and quasi-global coverage. Accuracy of TRMM precipitation estimates and scale issues are investigated for a case study located in the Little Washita watershed, Oklahoma, using a dense network of rain gauges for independent ground validation. The methodology contributes to our understanding of the risk of extreme rainfall events, as it allows (i) an optimal use of the TRMM datasets in estimating the tail of the probability distribution of daily rainfall, and (ii) a global mapping of daily rainfall extremes and distributional tail properties, bridging the existing gaps in rain gauge networks.
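
    For intuition, the sketch below evaluates an MEVD built from per-year Weibull fits of the ordinary daily rainfall and inverts it for a return level. The function names are illustrative, and the yearly parameter fitting (e.g., by probability-weighted moments) is assumed to have been done elsewhere.

    ```python
    import numpy as np
    from scipy.optimize import brentq

    def mev_cdf(x, n, C, w):
        """MEVD: average over years of the yearly Weibull CDF raised to the
        number of wet days, F(x) = mean_j (1 - exp(-(x/C_j)**w_j))**n_j."""
        n, C, w = (np.asarray(v, float) for v in (n, C, w))
        return np.mean((1.0 - np.exp(-(x / C) ** w)) ** n)

    def return_level(T_years, n, C, w):
        """Daily rainfall depth with average recurrence interval T_years."""
        target = 1.0 - 1.0 / T_years
        # Root-find the quantile; bracket spans all plausible daily depths (mm)
        return brentq(lambda x: mev_cdf(x, n, C, w) - target, 1e-3, 1e4)
    ```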

  17. Validation of ELDO approaches for retrospective assessment of cumulative eye lens doses of interventional cardiologists-results from DoReMi project.

    PubMed

    Domienik, J; Farah, J; Struelens, L

    2016-12-01

    The first validation results of the two approaches developed in the ELDO project for retrospective assessment of eye lens doses for interventional cardiologists (ICs) are presented in this paper. The first approach (a) is based on both the readings from the routine whole body dosimeter worn above the lead apron and procedure-dependent conversion coefficients, while the second approach (b) is based on detailed information related to the occupational exposure history of the ICs declared in a questionnaire and eye lens dose records obtained from the relevant literature. The latter approach makes use of various published eye lens doses per procedure as well as appropriate correction factors that account for the use of radiation protective tools designed to protect the eye lens. To validate both methodologies, comprehensive measurements were performed in several Polish clinics among recruited physicians. Two dosimeters measuring whole body and eye lens doses were worn by every physician for at least two months. The estimated cumulative eye lens doses, calculated from both approaches, were then compared against the measured eye lens dose value for every physician separately. Both approaches result in comparable estimates of eye lens doses and tend to overestimate rather than underestimate them. The measured and estimated doses do not differ, on average, by a factor higher than 2.0 in 85% and 62% of the cases used to validate approach (a) and (b), respectively. In specific cases, however, the estimated doses differ from the measured ones by as much as a factor of 2.7 and 5.1 for method (a) and (b), respectively. As such, the two approaches can be considered accurate when retrospectively estimating the eye lens doses for ICs and will be of great benefit for ongoing epidemiological studies.

  18. Hubble Space Telescope Angular Velocity Estimation During the Robotic Servicing Mission

    NASA Technical Reports Server (NTRS)

    Thienel, Julie K.; Queen, Steven Z.; VanEepoel, John M.; Sanner, Robert M.

    2005-01-01

    In 2004 NASA began investigation of a robotic servicing mission for the Hubble Space Telescope (HST). Such a mission would require estimates of the HST attitude and rates in order to achieve a capture by the proposed Hubble robotic vehicle (HRV). HRV was to be equipped with vision-based sensors, capable of estimating the relative attitude between HST and HRV. The inertial HST attitude is derived from the measured relative attitude and the HRV computed inertial attitude. However, the relative rate between HST and HRV cannot be measured directly. Therefore, the HST rate with respect to inertial space is not known. Two approaches are developed to estimate the HST rates. Both methods utilize the measured relative attitude and the HRV inertial attitude and rates. First, a non-linear estimator is developed. The nonlinear approach estimates the HST rate through an estimation of the inertial angular momentum. Second, a linearized approach is developed. The linearized approach is a pseudo-linear Kalman filter. Simulation test results for both methods are given. Even though the development began as an application for the HST robotic servicing mission, the methods presented are applicable to any rendezvous/capture mission involving a non-cooperative target spacecraft.

  19. Comparing different stimulus configurations for population receptive field mapping in human fMRI

    PubMed Central

    Alvarez, Ivan; de Haas, Benjamin; Clark, Chris A.; Rees, Geraint; Schwarzkopf, D. Samuel

    2015-01-01

    Population receptive field (pRF) mapping is a widely used approach to measuring aggregate human visual receptive field properties by recording non-invasive signals using functional MRI. Despite growing interest, no study to date has systematically investigated the effects of different stimulus configurations on pRF estimates from human visual cortex. Here we compared the effects of three different stimulus configurations on a model-based approach to pRF estimation: size-invariant bars and eccentricity-scaled bars defined in Cartesian coordinates and traveling along the cardinal axes, and a novel simultaneous “wedge and ring” stimulus defined in polar coordinates, systematically covering polar and eccentricity axes. We found that the presence or absence of eccentricity scaling had a significant effect on goodness of fit and pRF size estimates. Further, variability in pRF size estimates was directly influenced by stimulus configuration, particularly for higher visual areas including V5/MT+. Finally, we compared eccentricity estimation between phase-encoded and model-based pRF approaches. We observed a tendency for more peripheral eccentricity estimates using phase-encoded methods, independent of stimulus size. We conclude that both eccentricity scaling and polar rather than Cartesian stimulus configuration are important considerations for optimal experimental design in pRF mapping. While all stimulus configurations produce adequate estimates, simultaneous wedge and ring stimulation produced higher fit reliability, with a significant advantage in reduced acquisition time. PMID:25750620

  20. A Graphical Approach to Item Analysis. Research Report. ETS RR-04-10

    ERIC Educational Resources Information Center

    Livingston, Samuel A.; Dorans, Neil J.

    2004-01-01

    This paper describes an approach to item analysis that is based on the estimation of a set of response curves for each item. The response curves show, at a glance, the difficulty and the discriminating power of the item and the popularity of each distractor, at any level of the criterion variable (e.g., total score). The curves are estimated by…

  1. Assessing assay agreement estimation for multiple left-censored data: a multiple imputation approach.

    PubMed

    Lapidus, Nathanael; Chevret, Sylvie; Resche-Rigon, Matthieu

    2014-12-30

    Agreement between two assays is usually based on the concordance correlation coefficient (CCC), estimated from the means, standard deviations, and correlation coefficient of these assays. However, such data will often suffer from left-censoring because of the lower limits of detection of these assays. To handle such data, we propose to extend a multiple imputation approach by chained equations (MICE) previously developed in the related setting of a single left-censored assay. The performance of this two-step approach is compared with that of a previously published maximum likelihood estimation through a simulation study. Results show close estimates of the CCC by both methods, although coverage is improved by our MICE proposal. An application to cytomegalovirus quantification data is provided. Copyright © 2014 John Wiley & Sons, Ltd.
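
    For reference, the CCC itself is a closed-form function of the two assays' means, variances, and covariance; a minimal sketch (names illustrative) is given below. In the MICE setting described above, the coefficient would be computed on each imputed dataset and the results pooled across imputations.

    ```python
    import numpy as np

    def concordance_cc(x, y):
        """Lin's concordance correlation coefficient between two assays:
        2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)."""
        x = np.asarray(x, float)
        y = np.asarray(y, float)
        sxy = np.cov(x, y, ddof=1)[0, 1]
        return 2.0 * sxy / (x.var(ddof=1) + y.var(ddof=1)
                            + (x.mean() - y.mean()) ** 2)
    ```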

  2. Product line cost estimation: a standard cost approach.

    PubMed

    Cooper, J C; Suver, J D

    1988-04-01

    Product line managers often must make decisions based on inaccurate cost information. A method is needed to determine costs more accurately. By using a standard costing model, product line managers can better estimate the cost of intermediate and end products, and hence better estimate the costs of the product line.

  3. Aqueous and Tissue Residue-Based Interspecies Correlation Estimation Models Provide Conservative Hazard Estimates for Aromatic Compounds

    EPA Science Inventory

    Interspecies correlation estimation (ICE) models were developed for 30 nonpolar aromatic compounds to allow comparison of prediction accuracy between 2 data compilation approaches. Type 1 models used data combined across studies, and type 2 models used data combined only within s...

  4. Nonparametric estimation of the heterogeneity of a random medium using compound Poisson process modeling of wave multiple scattering.

    PubMed

    Le Bihan, Nicolas; Margerin, Ludovic

    2009-07-01

    In this paper, we present a nonparametric method to estimate the heterogeneity of a random medium from the angular distribution of intensity of waves transmitted through a slab of random material. Our approach is based on the modeling of forward multiple scattering using compound Poisson processes on compact Lie groups. The estimation technique is validated through numerical simulations based on radiative transfer theory.

  5. Comparison of methods for estimating the attributable risk in the context of survival analysis.

    PubMed

    Gassama, Malamine; Bénichou, Jacques; Dartois, Laureen; Thiébaut, Anne C M

    2017-01-23

    The attributable risk (AR) measures the proportion of disease cases that can be attributed to an exposure in the population. Several definitions and estimation methods have been proposed for survival data. Using simulations, we compared four methods for estimating AR defined in terms of survival functions: two nonparametric methods based on Kaplan-Meier's estimator, one semiparametric based on Cox's model, and one parametric based on the piecewise constant hazards model, as well as one simpler method based on estimated exposure prevalence at baseline and Cox's model hazard ratio. We considered a fixed binary exposure with varying exposure probabilities and strengths of association, and generated event times from a proportional hazards model with constant or monotonic (decreasing or increasing) Weibull baseline hazard, as well as from a nonproportional hazards model. We simulated 1,000 independent samples of size 1,000 or 10,000. The methods were compared in terms of mean bias, mean estimated standard error, empirical standard deviation and 95% confidence interval coverage probability at four equally spaced time points. Under proportional hazards, all five methods yielded unbiased results regardless of sample size. Nonparametric methods displayed greater variability than other approaches. All methods showed satisfactory coverage except for nonparametric methods at the end of follow-up, especially for a sample size of 1,000. With nonproportional hazards, nonparametric methods yielded similar results to those under proportional hazards, whereas semiparametric and parametric approaches that both relied on the proportional hazards assumption performed poorly. These methods were applied to estimate the AR of breast cancer due to menopausal hormone therapy in 38,359 women of the E3N cohort. In practice, our study suggests using the semiparametric or parametric approaches to estimate AR as a function of time in cohort studies if the proportional hazards assumption appears appropriate.
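
    To make one such definition concrete, the sketch below computes a Kaplan-Meier-based AR(t) = 1 - F0(t)/F(t), where F is the cumulative incidence in the whole cohort and F0 that in the unexposed. This mirrors only the nonparametric variant; the lifelines package is assumed and all names are illustrative.

    ```python
    import numpy as np
    from lifelines import KaplanMeierFitter

    def attributable_risk(time, event, exposed, t):
        """AR(t) = 1 - F0(t)/F(t) from Kaplan-Meier cumulative incidences.

        time, event, exposed: numpy arrays (follow-up time, event indicator,
        binary exposure); t: time point(s) at which to evaluate AR.
        """
        km_all, km_unexp = KaplanMeierFitter(), KaplanMeierFitter()
        km_all.fit(time, event)
        km_unexp.fit(time[exposed == 0], event[exposed == 0])
        F = 1.0 - km_all.survival_function_at_times(t).values
        F0 = 1.0 - km_unexp.survival_function_at_times(t).values
        return 1.0 - F0 / F
    ```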

  6. Propensity score analysis with partially observed covariates: How should multiple imputation be used?

    PubMed

    Leyrat, Clémence; Seaman, Shaun R; White, Ian R; Douglas, Ian; Smeeth, Liam; Kim, Joseph; Resche-Rigon, Matthieu; Carpenter, James R; Williamson, Elizabeth J

    2017-01-01

    Inverse probability of treatment weighting is a popular propensity score-based approach to estimate marginal treatment effects in observational studies at risk of confounding bias. A major issue when estimating the propensity score is the presence of partially observed covariates. Multiple imputation is a natural approach to handle missing data on covariates: covariates are imputed and a propensity score analysis is performed in each imputed dataset to estimate the treatment effect. The treatment effect estimates from each imputed dataset are then combined to obtain an overall estimate. We call this method MIte. However, an alternative approach has been proposed, in which the propensity scores are combined across the imputed datasets (MIps). Therefore, there are remaining uncertainties about how to implement multiple imputation for propensity score analysis: (a) should we apply Rubin's rules to the inverse probability of treatment weighting treatment effect estimates or to the propensity score estimates themselves? (b) does the outcome have to be included in the imputation model? (c) how should we estimate the variance of the inverse probability of treatment weighting estimator after multiple imputation? We studied the consistency and balancing properties of the MIte and MIps estimators and performed a simulation study to empirically assess their performance for the analysis of a binary outcome. We also compared the performance of these methods to complete case analysis and the missingness pattern approach, which uses a different propensity score model for each pattern of missingness, and a third multiple imputation approach in which the propensity score parameters are combined rather than the propensity scores themselves (MIpar). Under a missing at random mechanism, complete case and missingness pattern analyses were biased in most cases for estimating the marginal treatment effect, whereas multiple imputation approaches were approximately unbiased as long as the outcome was included in the imputation model. Only MIte was unbiased in all the studied scenarios and Rubin's rules provided good variance estimates for MIte. The propensity score estimated in the MIte approach showed good balancing properties. In conclusion, when using multiple imputation in the inverse probability of treatment weighting context, MIte with the outcome included in the imputation model is the preferred approach.
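
    A minimal sketch of the MIte recipe follows: impute m times, estimate the IPTW treatment effect in each completed dataset, and average per Rubin's rules, with the outcome included in the imputation model as the study recommends. scikit-learn is assumed; a Hajek-type risk-difference estimator is used for illustration, and the Rubin's-rules variance computation is omitted for brevity. All names are illustrative.

    ```python
    import numpy as np
    from sklearn.experimental import enable_iterative_imputer  # noqa: F401
    from sklearn.impute import IterativeImputer
    from sklearn.linear_model import LogisticRegression

    def mite_ate(X, treat, y, m=20, seed=0):
        """MIte: multiple imputation, then one IPTW effect per dataset.

        X: (n, p) covariates with NaNs; treat, y: binary numpy arrays.
        Returns the pooled (averaged) marginal risk difference.
        """
        rng = np.random.RandomState(seed)
        stacked = np.column_stack([X, treat, y])  # outcome enters imputation
        ates = []
        for _ in range(m):
            imputer = IterativeImputer(sample_posterior=True,
                                       random_state=rng.randint(2**31 - 1))
            Xi = imputer.fit_transform(stacked)[:, :X.shape[1]]
            # Propensity score model fit on the completed covariates
            ps = (LogisticRegression(max_iter=1000)
                  .fit(Xi, treat).predict_proba(Xi)[:, 1])
            w1, w0 = treat / ps, (1 - treat) / (1 - ps)  # IPT weights
            ates.append(np.sum(w1 * y) / np.sum(w1)
                        - np.sum(w0 * y) / np.sum(w0))
        return float(np.mean(ates))  # Rubin's rules point estimate
    ```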

  7. A matrix-based approach to solving the inverse Frobenius-Perron problem using sequences of density functions of stochastically perturbed dynamical systems

    NASA Astrophysics Data System (ADS)

    Nie, Xiaokai; Coca, Daniel

    2018-01-01

    The paper introduces a matrix-based approach to estimate the unique one-dimensional discrete-time dynamical system that generated a given sequence of probability density functions whilst subjected to an additive stochastic perturbation with known density.

  8. A matrix-based approach to solving the inverse Frobenius-Perron problem using sequences of density functions of stochastically perturbed dynamical systems.

    PubMed

    Nie, Xiaokai; Coca, Daniel

    2018-01-01

    The paper introduces a matrix-based approach to estimate the unique one-dimensional discrete-time dynamical system that generated a given sequence of probability density functions whilst subjected to an additive stochastic perturbation with known density.

  9. Remote sensing and modelling of vegetation dynamics for early estimation and spatial analysis of grain yields in semiarid context in central Tunisia

    NASA Astrophysics Data System (ADS)

    Chahbi, Aicha; Zribi, Mehrez; Lili-Chabaane, Zohra

    2016-04-01

    In arid and semi-arid areas, population growth, urbanization, food security concerns and climate change affect agriculture in general and cereal production in particular. Therefore, to improve food security in arid countries, crop canopy monitoring and cereal yield forecasting are needed. Many models, based on remote sensing or agro-meteorological modeling, have been developed to estimate the biomass and grain yield of cereals. Using a rich database, acquired over two years for more than 80 test fields, together with optical SPOT/HRV satellite images, the present study evaluates the feasibility of two yield prediction approaches. The first approach is based on the semi-empirical growth model SAFY, developed to simulate the dynamics of the LAI and the grain yield at the field scale. The model is able to reproduce the time evolution of the leaf area index of all fields with acceptable error. However, an inter-comparison between ground yield measurements and SAFY simulations reveals that yields are underestimated by this model. The limits of the semi-empirical SAFY model can be explained by its simplicity and by various factors that were not considered (fertilization, irrigation, ...). To improve the yield estimation, a new approach is proposed: the grain yield is estimated as a function of the LAI in the growth period between 25 March and 5 April, with the LAI of this period estimated by the SAFY model. A linear relationship is developed between the measured grain yield and the LAI over this maximum-growth period. This approach is robust, and the measured and estimated grain yields are well correlated. Following validation of this approach, yield estimates are produced for the entire study site using the SPOT/HRV images.

  10. A Bayesian Approach to Determination of F, D, and Z Values Used in Steam Sterilization Validation.

    PubMed

    Faya, Paul; Stamey, James D; Seaman, John W

    2017-01-01

    For manufacturers of sterile drug products, steam sterilization is a common method used to provide assurance of the sterility of manufacturing equipment and products. The validation of sterilization processes is a regulatory requirement and relies upon the estimation of key resistance parameters of microorganisms. Traditional methods have relied upon point estimates for the resistance parameters. In this paper, we propose a Bayesian method for estimation of the well-known DT, z, and F0 values that are used in the development and validation of sterilization processes. A Bayesian approach allows the uncertainty about these values to be modeled using probability distributions, thereby providing a fully risk-based approach to measures of sterility assurance. An example is given using the survivor curve and fraction negative methods for estimation of resistance parameters, and we present a means by which a probabilistic conclusion can be made regarding the ability of a process to achieve a specified sterility criterion. LAY ABSTRACT: For manufacturers of sterile drug products, steam sterilization is a common method used to provide assurance of the sterility of manufacturing equipment and products. The validation of sterilization processes is a regulatory requirement and relies upon the estimation of key resistance parameters of microorganisms. Traditional methods have relied upon point estimates for the resistance parameters. In this paper, we propose a Bayesian method for estimation of the critical process parameters that are evaluated in the development and validation of sterilization processes. A Bayesian approach allows the uncertainty about these parameters to be modeled using probability distributions, thereby providing a fully risk-based approach to measures of sterility assurance. An example is given using the survivor curve and fraction negative methods for estimation of resistance parameters, and we present a means by which a probabilistic conclusion can be made regarding the ability of a process to achieve a specified sterility criterion. © PDA, Inc. 2017.
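
    As a concrete anchor for the quantities involved, the sketch below computes the accumulated lethality F0 of a measured temperature profile from the standard steam-sterilization formula (reference temperature 121.1 °C, z = 10 °C); the Bayesian treatment in the paper places priors on such resistance parameters rather than fixing them. Names are illustrative.

    ```python
    import numpy as np

    def f0_value(times_min, temps_c, z=10.0, t_ref=121.1):
        """Accumulated lethality F0 (equivalent minutes at 121.1 C):
        F0 = integral of 10**((T(t) - t_ref) / z) dt over the cycle."""
        t = np.asarray(times_min, float)
        lethal_rate = 10.0 ** ((np.asarray(temps_c, float) - t_ref) / z)
        return np.trapz(lethal_rate, t)
    ```

    As a sanity check, for a hold exactly at 121.1 °C the integrand is 1, so F0 equals the hold time in minutes; the expected log-reduction then follows by dividing F0 by the organism's D-value at 121.1 °C.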

  11. Performance of a proportion-based approach to meta-analytic moderator estimation: results from Monte Carlo simulations.

    PubMed

    Aguirre-Urreta, Miguel I; Ellis, Michael E; Sun, Wenying

    2012-03-01

    This research investigates the performance of a proportion-based approach to meta-analytic moderator estimation through a series of Monte Carlo simulations. This approach is most useful when the moderating potential of a categorical variable has not been recognized in primary research and thus heterogeneous groups have been pooled together as a single sample. Alternative scenarios representing different distributions of group proportions are examined along with varying numbers of studies, subjects per study, and correlation combinations. Our results suggest that the approach is largely unbiased in its estimation of the magnitude of between-group differences and performs well with regard to statistical power and type I error. In particular, the average percentage bias of the estimated correlation for the reference group is positive and largely negligible, in the 0.5% to 1.8% range; the average percentage bias of the difference between correlations is also minimal, in the −0.1% to 1.2% range. Further analysis also suggests both biases decrease as the magnitude of the underlying difference increases, as the number of subjects in each simulated primary study increases, and as the number of simulated studies in each meta-analysis increases. The bias was most evident when the number of subjects and the number of studies were the smallest (80 and 36, respectively). A sensitivity analysis that examines its performance in scenarios down to 12 studies and 40 primary subjects is also included. This research is the first that thoroughly examines the adequacy of the proportion-based approach. Copyright © 2012 John Wiley & Sons, Ltd.

  12. Inverse approaches with lithologic information for a regional groundwater system in southwest Kansas

    USGS Publications Warehouse

    Tsou, Ming‐shu; Perkins, S.P.; Zhan, X.; Whittemore, Donald O.; Zheng, Lingyun

    2006-01-01

    Two practical approaches incorporating lithologic information for groundwater modeling calibration are presented to estimate distributed, cell-based hydraulic conductivity. The first approach is to estimate optimal hydraulic conductivities for geological materials by incorporating the thickness distribution of materials into inverse modeling. In the second approach, residuals for the groundwater model solution are minimized according to a globalized Newton method with the aid of a Geographic Information System (GIS) to calculate a cell-wise distribution of hydraulic conductivity. Both approaches honor geologic data and were effective in characterizing the heterogeneity of a regional groundwater modeling system in southwest Kansas. © 2005 Elsevier Ltd. All rights reserved.

  13. A model-based approach to estimating forest area

    Treesearch

    Ronald E. McRoberts

    2006-01-01

    A logistic regression model based on forest inventory plot data and transformations of Landsat Thematic Mapper satellite imagery was used to predict the probability of forest for 15 study areas in Indiana, USA, and 15 in Minnesota, USA. Within each study area, model-based estimates of forest area were obtained for circular areas with radii of 5 km, 10 km, and 15 km and...

  14. Multiple Damage Progression Paths in Model-Based Prognostics

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew; Goebel, Kai Frank

    2011-01-01

    Model-based prognostics approaches employ domain knowledge about a system, its components, and how they fail through the use of physics-based models. Component wear is driven by several different degradation phenomena, each resulting in its own damage progression path, overlapping to contribute to the overall degradation of the component. We develop a model-based prognostics methodology using particle filters, in which the problem of characterizing multiple damage progression paths is cast as a joint state-parameter estimation problem. The estimate is represented as a probability distribution, allowing the prediction of end of life and remaining useful life within a probabilistic framework that supports uncertainty management. We also develop a novel variance control mechanism that maintains an uncertainty bound around the hidden parameters to limit the amount of estimation uncertainty and, consequently, reduce prediction uncertainty. We construct a detailed physics-based model of a centrifugal pump, to which we apply our model-based prognostics algorithms. We illustrate the operation of the prognostic solution with a number of simulation-based experiments and demonstrate the performance of the chosen approach when multiple damage mechanisms are active.

  15. Maximum profile likelihood estimation of differential equation parameters through model based smoothing state estimates.

    PubMed

    Campbell, D A; Chkrebtii, O

    2013-12-01

    Statistical inference for biochemical models often faces a variety of characteristic challenges. In this paper we examine state and parameter estimation for the JAK-STAT intracellular signalling mechanism, which exemplifies the implementation intricacies common in many biochemical inference problems. We introduce an extension to the Generalized Smoothing approach for estimating delay differential equation models, addressing selection of complexity parameters, choice of the basis system, and appropriate optimization strategies. Motivated by the JAK-STAT system, we further extend the generalized smoothing approach to consider a nonlinear observation process with additional unknown parameters, and highlight how the approach handles unobserved states and unevenly spaced observations. The methodology developed is generally applicable to problems of estimation for differential equation models with delays, unobserved states, nonlinear observation processes, and partially observed histories. Crown Copyright © 2013. Published by Elsevier Inc. All rights reserved.

  16. Bayesian estimation of the discrete coefficient of determination.

    PubMed

    Chen, Ting; Braga-Neto, Ulisses M

    2016-12-01

    The discrete coefficient of determination (CoD) measures the nonlinear interaction between discrete predictor and target variables and has had far-reaching applications in Genomic Signal Processing. Previous work has addressed the inference of the discrete CoD using classical parametric and nonparametric approaches. In this paper, we introduce a Bayesian framework for the inference of the discrete CoD. We derive analytically the optimal minimum mean-square error (MMSE) CoD estimator, as well as a CoD estimator based on the Optimal Bayesian Predictor (OBP). For the latter estimator, exact expressions for its bias, variance, and root-mean-square (RMS) are given. The accuracy of both Bayesian CoD estimators with non-informative and informative priors, under fixed or random parameters, is studied via analytical and numerical approaches. We also demonstrate the application of the proposed Bayesian approach in the inference of gene regulatory networks, using gene-expression data from a previously published study on metastatic melanoma.

  17. Arterial Mechanical Motion Estimation Based on a Semi-Rigid Body Deformation Approach

    PubMed Central

    Guzman, Pablo; Hamarneh, Ghassan; Ros, Rafael; Ros, Eduardo

    2014-01-01

    Arterial motion estimation in ultrasound (US) sequences is a hard task due to noise and discontinuities in the signal derived from US artifacts. Characterizing the mechanical properties of the artery is a promising novel imaging technique to diagnose various cardiovascular pathologies and a new way of obtaining relevant clinical information, such as determining the absence of a dicrotic peak, estimating the Augmentation Index (AIx), the arterial pressure or the arterial stiffness. One of the advantages of US imaging is the non-invasive nature of the technique, unlike invasive techniques such as Intravascular Ultrasound (IVUS) or angiography, plus the relatively low cost of US units. In this paper, we propose a semi-rigid deformable method based on soft-body dynamics, realized by a hybrid motion approach based on cross-correlation and optical flow methods, to quantify the elasticity of the artery. We evaluate and compare different techniques (for instance, optical flow methods) on which our approach is based. The goal of this comparative study is to identify the best model to be used and the impact of the accuracy of these different stages on the proposed method. To this end, an exhaustive assessment has been conducted in order to decide which model is the most appropriate for registering the variation of the arterial diameter over time. Our experiments involved a total of 1620 evaluations within nine simulated sequences of 84 frames each and the estimation of four error metrics. We conclude that our proposed approach obtains approximately 2.5 times higher accuracy than conventional state-of-the-art techniques. PMID:24871987

  18. Frequency Estimator Performance for a Software-Based Beacon Receiver

    NASA Technical Reports Server (NTRS)

    Zemba, Michael J.; Morse, Jacquelynne R.; Nessel, James A.

    2014-01-01

    As propagation terminals have evolved, their design has trended more toward a software-based approach that facilitates convenient adjustment and customization of the receiver algorithms. One potential improvement is the implementation of a frequency estimation algorithm, through which the primary frequency component of the received signal can be estimated with a much greater resolution than with a simple peak search of the FFT spectrum. To select an estimator for usage in a Q/V-band beacon receiver, analysis of six frequency estimators was conducted to characterize their effectiveness as they relate to beacon receiver design.
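
    As an example of the kind of estimator such a comparison covers, the sketch below refines a coarse FFT peak search by quadratic interpolation of the log-magnitude spectrum, resolving the tone frequency well below the FFT bin width. Names are illustrative and a Hann window is assumed.

    ```python
    import numpy as np

    def fine_frequency(signal, fs):
        """Estimate the dominant tone frequency beyond FFT-bin resolution
        by fitting a parabola to the peak bin and its two neighbours."""
        spec = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
        k = int(np.argmax(spec[1:-1])) + 1           # coarse peak (skip edges)
        a, b, c = np.log(spec[k - 1:k + 2] + 1e-30)  # neighbouring bins
        delta = 0.5 * (a - c) / (a - 2 * b + c)      # fractional-bin offset
        return (k + delta) * fs / len(signal)
    ```

    With fs = 1000 Hz and a 1024-point record, the raw bin width is about 0.98 Hz; the interpolation typically resolves a clean tone to a small fraction of that.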

  19. Modeling the Residual Strength of a Fibrous Composite Using the Residual Daniels Function

    NASA Astrophysics Data System (ADS)

    Paramonov, Yu.; Cimanis, V.; Varickis, S.; Kleinhofs, M.

    2016-09-01

    The concept of a residual Daniels function (RDF) is introduced. Together with the concept of a Daniels sequence, the RDF is used for estimating the residual (after some preliminary fatigue loading) static strength of a unidirectional fibrous composite (UFC) and its S-N curve on the basis of test data. Usually, the residual strength is analyzed on the basis of a known S-N curve. In our work, an inverse approach is used: the S-N curve is derived from an analysis of the residual strength. This approach gives a good qualitative description of the process of decreasing residual strength and explains the existence of the fatigue limit. The estimates of parameters of the corresponding regression model can be interpreted as estimates of parameters of the local strength of components of the UFC. In order to approach the quantitative experimental estimates of the fatigue life, some ideas based on the mathematics of semi-Markovian processes are employed. Satisfactory results in processing experimental data on the fatigue life and residual strength of glass/epoxy laminates are obtained.

  20. Improved infrared precipitation estimation approaches based on k-means clustering: Application to north Algeria using MSG-SEVIRI satellite data

    NASA Astrophysics Data System (ADS)

    Mokdad, Fatiha; Haddad, Boualem

    2017-06-01

    In this paper, two new infrared precipitation estimation approaches based on the concept of k-means clustering are first proposed, named the NAW-Kmeans and the GPI-Kmeans methods. They are then adapted to the southern Mediterranean basin, where a subtropical climate prevails. The infrared data (10.8 μm channel) acquired by the MSG-SEVIRI sensor in winter and spring 2012 are used. Tests are carried out in eight areas distributed over northern Algeria: Sebra, El Bordj, Chlef, Blida, Bordj Menael, Sidi Aich, Beni Ourthilane, and Beni Aziz. The validation is performed by comparing the estimated rainfalls to rain gauge observations collected by the National Office of Meteorology in Dar El Beida (Algeria). Despite the complexity of the subtropical climate, the obtained results indicate that the NAW-Kmeans and GPI-Kmeans approaches give satisfactory results for the considered rain rates. Also, the proposed schemes improve precipitation estimation performance when compared to the original NAW (Negri, Adler, and Wetzel) and GPI (GOES Precipitation Index) algorithms.
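
    A generic sketch of the clustering step is given below: pixels of the 10.8 μm brightness-temperature image are grouped by k-means and each cluster is assigned a rain rate, with colder cloud-top clusters mapped to higher rates. The cluster-to-rate mapping shown is hypothetical; the NAW- and GPI-based calibrations are the subject of the paper.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def kmeans_rain_rate(tb_image, k=5, rain_rates=None):
        """Cluster brightness temperatures and map clusters to rain rates."""
        tb = tb_image.reshape(-1, 1).astype(float)
        labels = KMeans(n_clusters=k, n_init=10).fit_predict(tb)
        # Order clusters from coldest to warmest cloud tops
        order = np.argsort([tb[labels == i].mean() for i in range(k)])
        if rain_rates is None:
            rain_rates = np.linspace(10.0, 0.0, k)  # mm/h, illustrative only
        rate_of_label = {lab: rain_rates[rank] for rank, lab in enumerate(order)}
        return np.vectorize(rate_of_label.get)(labels).reshape(tb_image.shape)
    ```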

  1. Prevalence of autosomal dominant polycystic kidney disease in the European Union.

    PubMed

    Willey, Cynthia J; Blais, Jaime D; Hall, Anthony K; Krasa, Holly B; Makin, Andrew J; Czerwiec, Frank S

    2017-08-01

    Autosomal dominant polycystic kidney disease (ADPKD) is a leading cause of end-stage renal disease, but estimates of its prevalence vary by >10-fold. The objective of this study was to examine the public health impact of ADPKD in the European Union (EU) by estimating minimum prevalence (point prevalence of known cases) and screening prevalence (minimum prevalence plus cases expected after population-based screening). A review of the epidemiology literature from January 1980 to February 2015 identified population-based studies that met criteria for methodological quality. These examined large German and British populations, providing direct estimates of minimum prevalence and screening prevalence. In a second approach, patients from the 2012 European Renal Association‒European Dialysis and Transplant Association (ERA-EDTA) Registry and literature-based inflation factors that adjust for disease severity and screening yield were used to estimate prevalence across 19 EU countries (N = 407 million). Population-based studies yielded minimum prevalences of 2.41 and 3.89/10 000, respectively, and corresponding estimates of screening prevalences of 3.3 and 4.6/10 000. A close correspondence existed between estimates in countries where both direct and registry-derived methods were compared, which supports the validity of the registry-based approach. Using the registry-derived method, the minimum prevalence was 3.29/10 000 (95% confidence interval 3.27-3.30), and if ADPKD screening was implemented in all countries, the expected prevalence was 3.96/10 000 (3.94-3.98). ERA-EDTA-based prevalence estimates and application of a uniform definition of prevalence to population-based studies consistently indicate that the ADPKD point prevalence is <5/10 000, the threshold for rare disease in the EU. © The Author 2016. Published by Oxford University Press on behalf of ERA-EDTA.

  2. Telemetry-based mortality estimates of juvenile spot in two North Carolina estuarine creeks

    USGS Publications Warehouse

    Friedl, Sarah E.; Buckel, Jeffery A.; Hightower, Joseph E.; Scharf, Frederick S.; Pollock, Kenneth H.

    2013-01-01

    We estimated natural mortality rates (M) of age-1 Spot Leiostomus xanthurus by using a sonic telemetry approach. Sonic transmitters were surgically implanted into a total of 123 age-1 Spot in two North Carolina estuarine creeks during spring 2009 and 2010, and the fish were monitored by using a stationary acoustic receiver array and manual tracking. Fates of telemetered Spot were inferred based on telemetry information from estimated locations and swimming speeds. Potential competitors of age-1 Spot were assessed through simultaneous otter trawl sampling, while potential predators of Spot were collected using gill nets and trammel nets. The number of inferred natural mortalities was zero in 2009 (based on 29 telemetered Spot at risk) and four in 2010 (based on 52 fish at risk), with fish being at risk for up to about 70 d each year. Catches of potential competitors or predators did not differ between years, and age-1 Spot were not found in analyzed stomach contents of potential predators. Our estimated 30-d M of 0.03 (95% credible interval = 0.01–0.07) was lower than that predicted from weight-based (M = 0.07) and life-history-based (M = 0.06–0.36) estimates. Our field-based estimate of M for age-1 Spot in this estuarine system can assist in the assessment and management of Spot by allowing a direct comparison with M-values predicted from fish size or life history characteristics. The field telemetry and statistical analysis techniques developed here provide guidance for future telemetry studies of relatively small fish in open, dynamic habitat systems, as they highlight strengths and weaknesses of using a telemetry approach to estimate M.

  3. Multi-atlas and label fusion approach for patient-specific MRI based skull estimation.

    PubMed

    Torrado-Carvajal, Angel; Herraiz, Joaquin L; Hernandez-Tamames, Juan A; San Jose-Estepar, Raul; Eryaman, Yigitcan; Rozenholc, Yves; Adalsteinsson, Elfar; Wald, Lawrence L; Malpica, Norberto

    2016-04-01

    MRI-based skull segmentation is a useful procedure for many imaging applications. This study describes a methodology for automatic segmentation of the complete skull from a single T1-weighted volume. The skull is estimated using a multi-atlas segmentation approach. Using a whole-head computed tomography (CT) scan database, the skull in a new MRI volume is detected by nonrigid image registration of the volume to every CT, with the individual segmentations combined by label fusion. We have compared Majority Voting, Simultaneous Truth and Performance Level Estimation (STAPLE), Shape Based Averaging (SBA), and the Selective and Iterative Method for Performance Level Estimation (SIMPLE) algorithms. The pipeline has been evaluated quantitatively using images from the Retrospective Image Registration Evaluation database (reaching an overlap of 72.46 ± 6.99%), a clinical CT-MR dataset (maximum overlap of 78.31 ± 6.97%), and a whole-head CT-MRI pair (maximum overlap 78.68%). A qualitative evaluation has also been performed on MRI acquisitions of volunteers. It is possible to automatically segment the complete skull from MRI data using a multi-atlas and label fusion approach. This will allow the creation of complete MRI-based tissue models that can be used in electromagnetic dosimetry applications and attenuation correction in PET/MR. © 2015 Wiley Periodicals, Inc.
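
    Of the label-fusion rules compared, Majority Voting is the simplest; the sketch below assumes the atlas skull masks have already been propagated into the target space by nonrigid registration (the registration itself is outside the scope of this sketch).

    ```python
    import numpy as np

    def majority_vote(masks):
        """Fuse co-registered binary skull masks of shape
        (n_atlases, *volume_shape): a voxel is labeled skull when more
        than half of the atlases agree."""
        masks = np.asarray(masks)
        return (masks.sum(axis=0) > masks.shape[0] / 2.0).astype(np.uint8)
    ```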

  4. Integrated traffic conflict model for estimating crash modification factors.

    PubMed

    Shahdah, Usama; Saccomanno, Frank; Persaud, Bhagwant

    2014-10-01

    Crash modification factors (CMFs) for road safety treatments are usually obtained through observational models based on reported crashes. Observational Bayesian before-and-after methods have been applied to obtain more precise estimates of CMFs by accounting for the regression-to-the-mean bias inherent in naive methods. However, sufficient crash data reported over an extended period of time are needed to provide reliable estimates of treatment effects, a requirement that can be a challenge for certain types of treatment. In addition, these studies require that sites analyzed actually receive the treatment to which the CMF pertains. Another key issue with observational approaches is that they are not causal in nature, and as such, cannot provide a sound "behavioral" rationale for the treatment effect. Surrogate safety measures based on high risk vehicle interactions and traffic conflicts have been proposed to address this issue by providing a more "causal perspective" on lack of safety for different road and traffic conditions. The traffic conflict approach has been criticized, however, for lacking a formal link to observed and verified crashes, a difficulty that this paper attempts to resolve by presenting and investigating an alternative approach for estimating CMFs using simulated conflicts that are linked formally to observed crashes. The integrated CMF estimates are compared to estimates from an empirical Bayes (EB) crash-based before-and-after analysis for the same sample of treatment sites. The treatment considered involves changing left turn signal priority at Toronto signalized intersections from permissive to protected-permissive. The results are promising in that the proposed integrated method yields CMFs that closely match those obtained from the crash-based EB before-and-after analysis. Copyright © 2014 Elsevier Ltd. All rights reserved.

  5. Evaluating Exposure-Response Associations for Non-Hodgkin Lymphoma with Varying Methods of Assigning Cumulative Benzene Exposure in the Shanghai Women's Health Study.

    PubMed

    Friesen, Melissa C; Bassig, Bryan A; Vermeulen, Roel; Shu, Xiao-Ou; Purdue, Mark P; Stewart, Patricia A; Xiang, Yong-Bing; Chow, Wong-Ho; Ji, Bu-Tian; Yang, Gong; Linet, Martha S; Hu, Wei; Gao, Yu-Tang; Zheng, Wei; Rothman, Nathaniel; Lan, Qing

    2017-01-01

    To provide insight into the contributions of exposure measurements to job exposure matrices (JEMs), we examined the robustness of an association between occupational benzene exposure and non-Hodgkin lymphoma (NHL) to varying exposure assessment methods. NHL risk was examined in a prospective population-based cohort of 73087 women in Shanghai. A mixed-effects model that combined a benzene JEM with >60000 short-term, area benzene inspection measurements was used to derive two sets of measurement-based benzene estimates: 'job/industry-specific' estimates (our presumed best approach) were derived from the model's fixed effects (year, JEM intensity rating) and random effects (occupation, industry); 'calibrated JEM' estimates were derived using only the fixed effects. 'Uncalibrated JEM' (using the ordinal JEM ratings) and exposure duration estimates were also calculated. Cumulative exposure for each subject was calculated for each approach based on varying exposure definitions defined using the JEM's probability ratings. We examined the agreement between the cumulative metrics and evaluated changes in the benzene-NHL associations. For our primary exposure definition, the job/industry-specific estimates were moderately to highly correlated with all other approaches (Pearson correlation 0.61-0.89; Spearman correlation > 0.99). All these metrics resulted in statistically significant exposure-response associations for NHL, with negligible gain in model fit from using measurement-based estimates. Using more sensitive or specific exposure definitions resulted in elevated but non-significant associations. The robust associations observed here with varying benzene assessment methods provide support for a benzene-NHL association. While incorporating exposure measurements did not improve model fit, the measurements allowed us to derive quantitative exposure-response curves. Published by Oxford University Press on behalf of the British Occupational Hygiene Society 2017.

  6. Bottom-up modeling approach for the quantitative estimation of parameters in pathogen-host interactions

    PubMed Central

    Lehnert, Teresa; Timme, Sandra; Pollmächer, Johannes; Hünniger, Kerstin; Kurzai, Oliver; Figge, Marc Thilo

    2015-01-01

    Opportunistic fungal pathogens can cause bloodstream infection and severe sepsis upon entering the blood stream of the host. The early immune response in human blood comprises the elimination of pathogens by antimicrobial peptides and innate immune cells, such as neutrophils or monocytes. Mathematical modeling is a predictive method to examine these complex processes and to quantify the dynamics of pathogen-host interactions. Since model parameters are often not directly accessible from experiment, their estimation is required by calibrating model predictions with experimental data. Depending on the complexity of the mathematical model, parameter estimation can be associated with excessively high computational costs in terms of run time and memory. We apply a strategy for reliable parameter estimation where different modeling approaches with increasing complexity are used that build on one another. This bottom-up modeling approach is applied to an experimental human whole-blood infection assay for Candida albicans. Aiming for the quantification of the relative impact of different routes of the immune response against this human-pathogenic fungus, we start from a non-spatial state-based model (SBM), because this level of model complexity allows estimating a priori unknown transition rates between various system states by the global optimization method simulated annealing. Building on the non-spatial SBM, an agent-based model (ABM) is implemented that incorporates the migration of interacting cells in three-dimensional space. The ABM takes advantage of estimated parameters from the non-spatial SBM, leading to a decreased dimensionality of the parameter space. This space can be scanned using a local optimization approach, i.e., least-squares error estimation based on an adaptive regular grid search, to predict cell migration parameters that are not accessible in experiment. In the future, spatio-temporal simulations of whole-blood samples may enable timely stratification of sepsis patients by distinguishing hyper-inflammatory from paralytic phases in immune dysregulation. PMID:26150807

  7. Bottom-up modeling approach for the quantitative estimation of parameters in pathogen-host interactions.

    PubMed

    Lehnert, Teresa; Timme, Sandra; Pollmächer, Johannes; Hünniger, Kerstin; Kurzai, Oliver; Figge, Marc Thilo

    2015-01-01

    Opportunistic fungal pathogens can cause bloodstream infection and severe sepsis upon entering the blood stream of the host. The early immune response in human blood comprises the elimination of pathogens by antimicrobial peptides and innate immune cells, such as neutrophils or monocytes. Mathematical modeling is a predictive method to examine these complex processes and to quantify the dynamics of pathogen-host interactions. Since model parameters are often not directly accessible from experiment, their estimation is required by calibrating model predictions with experimental data. Depending on the complexity of the mathematical model, parameter estimation can be associated with excessively high computational costs in terms of run time and memory. We apply a strategy for reliable parameter estimation where different modeling approaches with increasing complexity are used that build on one another. This bottom-up modeling approach is applied to an experimental human whole-blood infection assay for Candida albicans. Aiming for the quantification of the relative impact of different routes of the immune response against this human-pathogenic fungus, we start from a non-spatial state-based model (SBM), because this level of model complexity allows estimating a priori unknown transition rates between various system states by the global optimization method simulated annealing. Building on the non-spatial SBM, an agent-based model (ABM) is implemented that incorporates the migration of interacting cells in three-dimensional space. The ABM takes advantage of estimated parameters from the non-spatial SBM, leading to a decreased dimensionality of the parameter space. This space can be scanned using a local optimization approach, i.e., least-squares error estimation based on an adaptive regular grid search, to predict cell migration parameters that are not accessible in experiment. In the future, spatio-temporal simulations of whole-blood samples may enable timely stratification of sepsis patients by distinguishing hyper-inflammatory from paralytic phases in immune dysregulation.

  8. Improving credibility and transparency of conservation impact evaluations through the partial identification approach.

    PubMed

    McConnachie, Matthew M; Romero, Claudia; Ferraro, Paul J; van Wilgen, Brian W

    2016-04-01

    The fundamental challenge of evaluating the impact of conservation interventions is that researchers must estimate the difference between the outcome after an intervention occurred and what the outcome would have been without it (counterfactual). Because the counterfactual is unobservable, researchers must make an untestable assumption that some units (e.g., organisms or sites) that were not exposed to the intervention can be used as a surrogate for the counterfactual (control). The conventional approach is to make a point estimate (i.e., single number along with a confidence interval) of impact, using, for example, regression. Point estimates provide powerful conclusions, but in nonexperimental contexts they depend on strong assumptions about the counterfactual that often lack transparency and credibility. An alternative approach, called partial identification (PI), is to first estimate what the counterfactual bounds would be if the weakest possible assumptions were made. Then, one narrows the bounds by using stronger but credible assumptions based on an understanding of why units were selected for the intervention and how they might respond to it. We applied this approach and compared it with conventional approaches by estimating the impact of a conservation program that removed invasive trees in part of the Cape Floristic Region. Even when we used our largest PI impact estimate, the program's control costs were 1.4 times higher than previously estimated. PI holds promise for applications in conservation science because it encourages researchers to better understand and account for treatment selection biases; can offer insights into the plausibility of conventional point-estimate approaches; could reduce the problem of advocacy in science; might be easier for stakeholders to agree on a bounded estimate than a point estimate where impacts are contentious; and requires only basic arithmetic skills. © 2015 Society for Conservation Biology.

  9. Cumulus cloud base height estimation from high spatial resolution Landsat data - A Hough transform approach

    NASA Technical Reports Server (NTRS)

    Berendes, Todd; Sengupta, Sailes K.; Welch, Ron M.; Wielicki, Bruce A.; Navar, Murgesh

    1992-01-01

    A semiautomated methodology is developed for estimating cumulus cloud base heights on the basis of high spatial resolution Landsat MSS data, using various image-processing techniques to match cloud edges with their corresponding shadow edges. The cloud base height is then estimated by computing the separation distance between the corresponding generalized Hough transform reference points. The differences between the cloud base heights computed by these means and a manual verification technique are of the order of 100 m or less; accuracies of 50-70 m may soon be possible via EOS instruments.
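
    The geometric step after edge matching reduces to simple trigonometry; the sketch below (names illustrative) converts the Hough-derived cloud-to-shadow separation into a base height given the solar elevation at image time.

    ```python
    import numpy as np

    def cloud_base_height(separation_m, sun_elevation_deg):
        """Cloud base height from cloud/shadow displacement: a base at
        height H casts a shadow offset horizontally by H / tan(elevation),
        so H = d * tan(elevation)."""
        return np.asarray(separation_m) * np.tan(np.radians(sun_elevation_deg))
    ```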

  10. A new Bayesian recursive technique for parameter estimation

    NASA Astrophysics Data System (ADS)

    Kaheil, Yasir H.; Gill, M. Kashif; McKee, Mac; Bastidas, Luis

    2006-08-01

    The performance of any model depends on how well its associated parameters are estimated. In the current application, a localized Bayesian recursive estimation (LOBARE) approach is devised for parameter estimation. The LOBARE methodology is an extension of the Bayesian recursive estimation (BARE) method. It is applied in this paper to two different types of models: an artificial intelligence (AI) model in the form of a support vector machine (SVM) application for forecasting soil moisture and a conceptual rainfall-runoff (CRR) model represented by the Sacramento soil moisture accounting (SAC-SMA) model. Support vector machines, based on statistical learning theory (SLT), represent the modeling task as a quadratic optimization problem and have already been used in various applications in hydrology. They require estimation of three parameters. SAC-SMA is a very well known model that estimates runoff. It has a 13-dimensional parameter space. In the LOBARE approach presented here, Bayesian inference is used in an iterative fashion to estimate the parameter space that will most likely enclose a best parameter set. This is done by narrowing the sampling space through updating the "parent" bounds based on their fitness. These bounds are actually the parameter sets that were selected by BARE runs on subspaces of the initial parameter space. The new approach results in faster convergence toward the optimal parameter set using minimum training/calibration data and fewer sets of parameter values. The efficacy of the localized methodology is also compared with the previously used BARE algorithm.

  11. Trajectory-based visual localization in underwater surveying missions.

    PubMed

    Burguera, Antoni; Bonin-Font, Francisco; Oliver, Gabriel

    2015-01-14

    We present a new vision-based localization system applied to an autonomous underwater vehicle (AUV) with limited sensing and computation capabilities. Traditional EKF-SLAM approaches are usually expensive in terms of execution time; the approach presented in this paper strengthens this method by adopting a trajectory-based schema that reduces the computational requirements. The pose of the vehicle is estimated using an extended Kalman filter (EKF), which predicts the vehicle motion by means of a visual odometer and corrects these predictions using the data associations (loop closures) between the current frame and previous ones. One of the most important steps in this procedure is the image registration method, as it reinforces the data association and, thus, makes it possible to close loops reliably. Since the use of standard EKFs entails linearization errors that can distort the vehicle pose estimates, the approach has also been tested using an iterated extended Kalman filter (IEKF). Experiments have been conducted using a real underwater vehicle in controlled scenarios and in shallow sea waters, showing an excellent performance with very small errors, both in the vehicle pose and in the overall trajectory estimates.
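
    A minimal sketch of the underlying filter follows: a planar pose EKF in which visual odometry drives the prediction and a registered loop-closure pair supplies an absolute pose measurement for the update. Class and variable names are illustrative, the noise matrices are placeholders, and the paper's trajectory-based schema additionally keeps past poses rather than a single pose.

    ```python
    import numpy as np

    class PoseEKF:
        """Planar pose EKF: predict with body-frame odometry, update with
        an absolute pose observation (e.g., from image registration)."""

        def __init__(self, pose0, P0):
            self.x = np.asarray(pose0, float)  # state: [x, y, heading]
            self.P = np.asarray(P0, float)     # state covariance

        def predict(self, dx, dy, dth, Q):
            c, s = np.cos(self.x[2]), np.sin(self.x[2])
            # Compose the body-frame odometry increment with the current pose
            self.x = self.x + np.array([c * dx - s * dy, s * dx + c * dy, dth])
            F = np.array([[1.0, 0.0, -s * dx - c * dy],
                          [0.0, 1.0,  c * dx - s * dy],
                          [0.0, 0.0,  1.0]])           # motion Jacobian
            self.P = F @ self.P @ F.T + Q

        def update(self, z, R):
            y = z - self.x                                 # innovation (H = I)
            y[2] = np.arctan2(np.sin(y[2]), np.cos(y[2]))  # wrap heading
            K = self.P @ np.linalg.inv(self.P + R)         # Kalman gain
            self.x = self.x + K @ y
            self.P = (np.eye(3) - K) @ self.P
    ```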

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kelbe, David; Oak Ridge National Lab.; van Aardt, Jan

    Terrestrial laser scanning has demonstrated increasing potential for rapid comprehensive measurement of forest structure, especially when multiple scans are spatially registered in order to reduce the limitations of occlusion. Although marker-based registration techniques (based on retro-reflective spherical targets) are commonly used in practice, a blind marker-free approach is preferable, insofar as it supports rapid operational data acquisition. To support these efforts, we extend the pairwise registration approach of our earlier work, and develop a graph-theoretical framework to perform blind marker-free global registration of multiple point cloud data sets. Pairwise pose estimates are weighted based on their estimated error, in order to overcome pose conflict while exploiting redundant information and improving precision. The proposed approach was tested for eight diverse New England forest sites, with 25 scans collected at each site. Quantitative assessment was provided via a novel embedded confidence metric, with a mean estimated root-mean-square error of 7.2 cm and 89% of scans connected to the reference node. Lastly, this paper assesses the validity of the embedded multiview registration confidence metric and evaluates the performance of the proposed registration algorithm.

  13. A Heuristic Probabilistic Approach to Estimating Size-Dependent Mobility of Nonuniform Sediment

    NASA Astrophysics Data System (ADS)

    Woldegiorgis, B. T.; Wu, F. C.; van Griensven, A.; Bauwens, W.

    2017-12-01

    Simulating the mechanism of bed sediment mobility is essential for modelling sediment dynamics. Although many studies have addressed this subject, they use complex mathematical formulations that are computationally expensive and often not easy to implement. In order to present a simple and computationally efficient complement to detailed sediment mobility models, we developed a heuristic probabilistic approach to estimating the size-dependent mobilities of nonuniform sediment based on the pre- and post-entrainment particle size distributions (PSDs), assuming that the PSDs are lognormally distributed. The approach fits a lognormal probability density function (PDF) to the pre-entrainment PSD of bed sediment and uses the threshold particle size of incipient motion and the concept of sediment mixture to estimate the PSDs of the entrained sediment and post-entrainment bed sediment. The new approach is simple in a physical sense and significantly reduces the complexity, computation time, and resources required by detailed sediment mobility models. It is calibrated and validated with laboratory and field data by comparing to the size-dependent mobilities predicted with the existing empirical lognormal cumulative distribution function (CDF) approach. The novel features of the current approach are: (1) separating the entrained and non-entrained sediments by a threshold particle size, which is a modified critical particle size of incipient motion accounting for mixed-size effects, and (2) using the mixture-based pre- and post-entrainment PSDs to provide a continuous estimate of the size-dependent sediment mobility.
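    The core computation, fitting a lognormal PDF to the bed PSD and splitting it at a threshold size, is easy to express with standard tools. The sketch below illustrates that step only, with assumed parameter names; the paper's mixture-based treatment of the post-entrainment PSD is not reproduced.

```python
import numpy as np
from scipy import stats

def entrained_fraction(d50, sigma_g, d_threshold):
    """Mass fraction of a lognormal bed PSD finer than the threshold size.

    d50         : median grain size of the bed sediment
    sigma_g     : geometric standard deviation of the PSD
    d_threshold : modified critical size of incipient motion
    """
    psd = stats.lognorm(s=np.log(sigma_g), scale=d50)  # lognormal PSD model
    return psd.cdf(d_threshold)                        # fraction finer = CDF value

# Example: half-millimetre median sand with a 0.8 mm entrainment threshold.
frac = entrained_fraction(d50=0.5, sigma_g=2.0, d_threshold=0.8)
```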

  14. Joint sparsity based heterogeneous data-level fusion for target detection and estimation

    NASA Astrophysics Data System (ADS)

    Niu, Ruixin; Zulch, Peter; Distasio, Marcello; Blasch, Erik; Shen, Dan; Chen, Genshe

    2017-05-01

    Typical surveillance systems employ decision- or feature-level fusion approaches to integrate heterogeneous sensor data, which are sub-optimal and incur information loss. In this paper, we investigate data-level heterogeneous sensor fusion. Since the sensors monitor the common targets of interest, whose states can be determined by only a few parameters, it is reasonable to assume that the measurement domain has a low intrinsic dimensionality. For heterogeneous sensor data, we develop a joint-sparse data-level fusion (JSDLF) approach based on the emerging joint sparse signal recovery techniques by discretizing the target state space. This approach is applied to fuse signals from multiple distributed radio frequency (RF) signal sensors and a video camera for joint target detection and state estimation. The JSDLF approach is data-driven and requires minimum prior information, since there is no need to know the time-varying RF signal amplitudes, or the image intensity of the targets. It can handle non-linearity in the sensor data due to state space discretization and the use of frequency/pixel selection matrices. Furthermore, for a multi-target case with J targets, the JSDLF approach only requires discretization in a single-target state space, instead of discretization in a J-target state space, as in the case of the generalized likelihood ratio test (GLRT) or the maximum likelihood estimator (MLE). Numerical examples are provided to demonstrate that the proposed JSDLF approach achieves excellent performance with near real-time accurate target position and velocity estimates.

  15. Performance Evaluation of EnKF-based Hydrogeological Site Characterization using Color Coherent Vectors

    NASA Astrophysics Data System (ADS)

    Moslehi, M.; de Barros, F.

    2017-12-01

    Complexity of hydrogeological systems arises from the multi-scale heterogeneity and insufficient measurements of their underlying parameters such as hydraulic conductivity and porosity. An inadequate characterization of hydrogeological properties can significantly decrease the trustworthiness of numerical models that predict groundwater flow and solute transport. Therefore, a variety of data assimilation methods have been proposed in order to estimate hydrogeological parameters from spatially scarce data by incorporating the governing physical models. In this work, we propose a novel framework for evaluating the performance of these estimation methods. We focus on the Ensemble Kalman Filter (EnKF) approach, a widely used data assimilation technique. It reconciles multiple sources of measurements to sequentially estimate model parameters such as the hydraulic conductivity. Several methods have been used in the literature to quantify the accuracy of the estimations obtained by EnKF, including Rank Histograms, RMSE and Ensemble Spread. However, these commonly used methods do not regard the spatial information and variability of geological formations. This can cause hydraulic conductivity fields with very different spatial structures to have similar histograms or RMSE. We propose a vision-based approach that can quantify the accuracy of estimations by considering the spatial structure embedded in the estimated fields. Our new approach consists of adapting a new metric, Color Coherent Vectors (CCV), to evaluate the accuracy of estimated fields achieved by EnKF. CCV is a histogram-based technique for comparing images that incorporates spatial information. We represent estimated fields as digital three-channel images and use CCV to compare and quantify the accuracy of estimations. The sensitivity of CCV to spatial information makes it a suitable metric for assessing the performance of spatial data assimilation techniques. Under various settings of the data assimilation, such as the number, layout, and type of measurements, we compare the performance of CCV with other metrics such as RMSE. By simulating hydrogeological processes using estimated and true fields, we observe that CCV outperforms other existing evaluation metrics.
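    A minimal version of the CCV computation for a single-channel field is sketched below, assuming the usual definition from image retrieval: pixels are quantized into bins, and pixels in connected regions larger than a size threshold count as coherent. The bin count, threshold, and function names are assumptions, not the authors' settings.

```python
import numpy as np
from scipy import ndimage

def ccv(field, n_bins=8, tau=25):
    """Coherence vector of a 2-D field (e.g., an estimated conductivity map).
    Returns per-bin counts of coherent and incoherent pixels."""
    edges = np.linspace(field.min(), field.max(), n_bins + 1)[1:-1]
    bins = np.digitize(field, edges)                  # quantize into n_bins classes
    coherent, incoherent = np.zeros(n_bins), np.zeros(n_bins)
    for b in range(n_bins):
        labels, n = ndimage.label(bins == b)          # connected components per bin
        for lbl in range(1, n + 1):
            size = int(np.sum(labels == lbl))
            if size > tau:
                coherent[b] += size
            else:
                incoherent[b] += size
    return coherent, incoherent

def ccv_distance(f1, f2, **kw):
    """L1 distance between two CCVs: unlike an RMSE or a plain histogram,
    it penalizes differences in spatial structure."""
    c1, i1 = ccv(f1, **kw)
    c2, i2 = ccv(f2, **kw)
    return float(np.abs(c1 - c2).sum() + np.abs(i1 - i2).sum())
```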

  16. An EM-based semi-parametric mixture model approach to the regression analysis of competing-risks data.

    PubMed

    Ng, S K; McLachlan, G J

    2003-04-15

    We consider a mixture model approach to the regression analysis of competing-risks data. Attention is focused on inference concerning the effects of factors on both the probability of occurrence and the hazard rate conditional on each of the failure types. These two quantities are specified in the mixture model using the logistic model and the proportional hazards model, respectively. We propose a semi-parametric mixture method to estimate the logistic and regression coefficients jointly, whereby the component-baseline hazard functions are completely unspecified. Estimation is based on maximizing the full likelihood, implemented via an expectation-conditional maximization (ECM) algorithm. Simulation studies are performed to compare the performance of the proposed semi-parametric method with a fully parametric mixture approach. The results show that when the component-baseline hazard is monotonic increasing, the semi-parametric and fully parametric mixture approaches are comparable for mildly and moderately censored samples. When the component-baseline hazard is not monotonic increasing, the semi-parametric method consistently provides less biased estimates than a fully parametric approach and is comparable in efficiency in the estimation of the parameters for all levels of censoring. The methods are illustrated using a real data set of prostate cancer patients treated with different dosages of the drug diethylstilbestrol. Copyright 2003 John Wiley & Sons, Ltd.

  17. Unsupervised heart-rate estimation in wearables with Liquid states and a probabilistic readout.

    PubMed

    Das, Anup; Pradhapan, Paruthi; Groenendaal, Willemijn; Adiraju, Prathyusha; Rajan, Raj Thilak; Catthoor, Francky; Schaafsma, Siebren; Krichmar, Jeffrey L; Dutt, Nikil; Van Hoof, Chris

    2018-03-01

    Heart-rate estimation is a fundamental feature of modern wearable devices. In this paper we propose a machine learning technique to estimate heart-rate from electrocardiogram (ECG) data collected using wearable devices. The novelty of our approach lies in (1) encoding spatio-temporal properties of ECG signals directly into spike trains and using these to excite recurrently connected spiking neurons in a Liquid State Machine computation model; (2) a novel learning algorithm; and (3) an intelligently designed unsupervised readout based on Fuzzy c-Means clustering of spike responses from a subset of neurons (Liquid states), selected using particle swarm optimization. Our approach differs from existing works by learning directly from ECG signals (allowing personalization), without requiring costly data annotations. Additionally, our approach can be easily implemented on state-of-the-art spiking-based neuromorphic systems, offering high accuracy at a significantly lower energy footprint, leading to an extended battery-life of wearable devices. We validated our approach with CARLsim, a GPU accelerated spiking neural network simulator modeling Izhikevich spiking neurons with Spike Timing Dependent Plasticity (STDP) and homeostatic scaling. A range of subjects is considered from in-house clinical trials and public ECG databases. Results show high accuracy and low energy footprint in heart-rate estimation across subjects with and without cardiac irregularities, signifying the strong potential of this approach to be integrated in future wearable devices. Copyright © 2018 Elsevier Ltd. All rights reserved.

  18. Autonomous optical navigation using nanosatellite-class instruments: a Mars approach case study

    NASA Astrophysics Data System (ADS)

    Enright, John; Jovanovic, Ilija; Kazemi, Laila; Zhang, Harry; Dzamba, Tom

    2018-02-01

    This paper examines the effectiveness of small star trackers for orbital estimation. Autonomous optical navigation has been used for some time to provide local estimates of orbital parameters during close approach to celestial bodies. These techniques have been used extensively on spacecraft dating back to the Voyager missions, but often rely on long exposures and large instrument apertures. Using a hyperbolic Mars approach as a reference mission, we present an EKF-based navigation filter suitable for nanosatellite missions. Observations of Mars and its moons allow the estimator to correct initial errors in both position and velocity. Our results show that nanosatellite-class star trackers can produce good quality navigation solutions with low position (<300 m) and velocity (<0.15 m/s) errors as the spacecraft approaches periapse.

  19. Blinded sample size re-estimation in three-arm trials with 'gold standard' design.

    PubMed

    Mütze, Tobias; Friede, Tim

    2017-10-15

    In this article, we study blinded sample size re-estimation in the 'gold standard' design with internal pilot study for normally distributed outcomes. The 'gold standard' design is a three-arm clinical trial design that includes an active and a placebo control in addition to an experimental treatment. We focus on the absolute margin approach to hypothesis testing in three-arm trials, in which the non-inferiority of the experimental treatment and the assay sensitivity are assessed by pairwise comparisons. We compare several blinded sample size re-estimation procedures in a simulation study assessing operating characteristics including power and type I error. We find that sample size re-estimation based on the popular one-sample variance estimator results in overpowered trials. Moreover, sample size re-estimation based on unbiased variance estimators such as the Xing-Ganju variance estimator results in underpowered trials; this is expected, because an overestimation of the variance, and thus of the sample size, is in general required for the re-estimation procedure to eventually meet the target power. To overcome this problem, we propose an inflation factor for the sample size re-estimation with the Xing-Ganju variance estimator and show that this approach results in adequately powered trials. Because of favorable features of the Xing-Ganju variance estimator, such as unbiasedness and a distribution independent of the group means, the inflation factor does not depend on the nuisance parameter and, therefore, can be calculated prior to a trial. Moreover, we prove that the sample size re-estimation based on the Xing-Ganju variance estimator does not bias the effect estimate. Copyright © 2017 John Wiley & Sons, Ltd.
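    The basic mechanics of a blinded re-estimation step can be sketched as follows, assuming a normally distributed endpoint and a pairwise two-group comparison; the one-sample variance and an inflation-factor hook are shown, but this is a simplified illustration, not the paper's procedure for the full three-arm design.

```python
import numpy as np
from scipy import stats

def blinded_n_per_arm(pooled, delta, alpha=0.025, power=0.8, inflation=1.0):
    """Re-estimate the per-arm sample size from blinded interim data.

    pooled    : interim outcomes pooled over arms (treatment labels unknown)
    delta     : margin / relevant difference for the pairwise comparison
    inflation : optional factor (> 1) applied to the variance estimate
    """
    s2 = np.var(pooled, ddof=1) * inflation          # blinded one-sample variance
    z_a, z_b = stats.norm.ppf(1 - alpha), stats.norm.ppf(power)
    return int(np.ceil(2 * s2 * (z_a + z_b) ** 2 / delta ** 2))

# Example with simulated interim data.
interim = np.random.default_rng(1).normal(0.0, 1.2, size=60)
n_new = blinded_n_per_arm(interim, delta=0.5)
```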

  20. Hubble Space Telescope Angular Velocity Estimation During the Robotic Servicing Mission

    NASA Technical Reports Server (NTRS)

    Thienel, Julie K.; Sanner, Robert M.

    2005-01-01

    In 2004 NASA began investigation of a robotic servicing mission for the Hubble Space Telescope (HST). Such a mission would require estimates of the HST attitude and rates in order to achieve a capture by the proposed Hubble robotic vehicle (HRV). HRV was to be equipped with vision-based sensors, capable of estimating the relative attitude between HST and HRV. The inertial HST attitude is derived from the measured relative attitude and the HRV computed inertial attitude. However, the relative rate between HST and HRV cannot be measured directly. Therefore, the HST rate with respect to inertial space is not known. Two approaches are developed to estimate the HST rates. Both methods utilize the measured relative attitude and the HRV inertial attitude and rates. First, a nonlinear estimator is developed. The nonlinear approach estimates the HST rate through an estimation of the inertial angular momentum. The development includes an analysis of the estimator stability given errors in the measured attitude. Second, a linearized approach is developed. The linearized approach is a pseudo-linear Kalman filter. Simulation test results for both methods are given, including scenarios with erroneous measured attitudes. Even though the development began as an application for the HST robotic servicing mission, the methods presented are applicable to any rendezvous/capture mission involving a non-cooperative target spacecraft.

  1. Fuzzy logic modeling of high performance rechargeable batteries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singh, P.; Fennie, C. Jr.; Reisner, D.E.

    1998-07-01

    Accurate battery state-of-charge (SOC) measurements are critical in many portable electronic device applications. Yet conventional techniques for battery SOC estimation are limited in their accuracy, reliability, and flexibility. In this paper the authors present a powerful new approach to estimate battery SOC using a fuzzy logic-based methodology. This approach provides a universally applicable, accurate method for battery SOC estimation either integrated within, or as an external monitor to, an electronic device. The methodology is demonstrated in modeling impedance measurements on Ni-MH cells and discharge voltage curves of Li-ion cells.

  2. Uncertainties of flood frequency estimation approaches based on continuous simulation using data resampling

    NASA Astrophysics Data System (ADS)

    Arnaud, Patrick; Cantet, Philippe; Odry, Jean

    2017-11-01

    Flood frequency analyses (FFAs) are needed for flood risk management. Many methods exist, ranging from classical purely statistical approaches to more complex approaches based on process simulation. The results of these methods are associated with uncertainties that are sometimes difficult to estimate due to the complexity of the approaches or the number of parameters, especially for process simulation. This is the case for the simulation-based FFA approach called SHYREG presented in this paper, in which a rainfall generator is coupled with a simple rainfall-runoff model; here we attempt to estimate the uncertainties due to the estimation of the seven parameters needed to estimate flood frequencies. The six parameters of the rainfall generator are mean values, so their theoretical distribution is known and can be used to estimate the generator uncertainties. In contrast, the theoretical distribution of the single hydrological model parameter is unknown; consequently, a bootstrap method is applied to estimate the calibration uncertainties. The propagation of uncertainty from the rainfall generator to the hydrological model is also taken into account. This method is applied to 1112 basins throughout France. Uncertainties coming from the SHYREG method and from purely statistical approaches are compared, and the results are discussed according to the length of the recorded observations, basin size and basin location. Uncertainties of the SHYREG method decrease as the basin size increases or as the length of the recorded flow increases. Moreover, the results show that the confidence intervals of the SHYREG method are relatively small despite the complexity of the method and the number of parameters (seven). This is due to the stability of the parameters, and it takes into account the dependence between the uncertainties of the rainfall model and those of the hydrological calibration. Indeed, the uncertainties on the flow quantiles are of the same order of magnitude as those associated with the use of a statistical law with two parameters (here the generalised extreme value Type I distribution) and clearly lower than those associated with the use of a three-parameter law (here the generalised extreme value Type II distribution). For extreme flood quantiles, the uncertainties are mostly due to the rainfall generator because of the progressive saturation of the hydrological model.
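    The bootstrap step used for the hydrological parameter can be sketched generically: resample the calibration data with replacement, recalibrate, and read uncertainty off the resulting parameter sample. The sketch below assumes a single scalar parameter and a user-supplied `simulate` function; both are placeholders, not SHYREG components.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def bootstrap_calibration(flows_obs, simulate, n_boot=500, seed=0):
    """Bootstrap the calibration of one hydrological model parameter.

    flows_obs : observed flow series
    simulate  : callable simulate(theta) -> simulated flow series
    Returns the bootstrap sample of calibrated parameter values, from which
    uncertainty can be propagated to flood quantiles.
    """
    rng = np.random.default_rng(seed)
    n, thetas = len(flows_obs), []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)             # resample time steps
        sse = lambda th: np.sum((simulate(th)[idx] - flows_obs[idx]) ** 2)
        thetas.append(minimize_scalar(sse, bounds=(0.01, 10.0),
                                      method="bounded").x)
    return np.array(thetas)                          # e.g., np.percentile(..., [5, 95])
```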

  3. Quantum chemical approach to estimating the thermodynamics of metabolic reactions.

    PubMed

    Jinich, Adrian; Rappoport, Dmitrij; Dunn, Ian; Sanchez-Lengeling, Benjamin; Olivares-Amaya, Roberto; Noor, Elad; Bar-Even, Arren; Aspuru-Guzik, Alán

    2014-11-12

    Thermodynamics plays an increasingly important role in modeling and engineering metabolism. We present the first nonempirical computational method for estimating standard Gibbs reaction energies of metabolic reactions based on quantum chemistry, which can help fill in the gaps in the existing thermodynamic data. When applied to a test set of reactions from core metabolism, the quantum chemical approach is comparable in accuracy to group contribution methods for isomerization and group transfer reactions and for reactions not including multiply charged anions. The errors in standard Gibbs reaction energy estimates are correlated with the charges of the participating molecules. The quantum chemical approach is amenable to systematic improvements and holds potential for providing thermodynamic data for all of metabolism.

  4. Estimation of under-reporting in epidemics using approximations.

    PubMed

    Gamado, Kokouvi; Streftaris, George; Zachary, Stan

    2017-06-01

    Under-reporting in epidemics, when it is ignored, leads to under-estimation of the infection rate and therefore of the reproduction number. In the case of stochastic models with temporal data, a usual approach for dealing with such issues is to apply data augmentation techniques through Bayesian methodology. Departing from earlier literature approaches implemented using reversible jump Markov chain Monte Carlo (RJMCMC) techniques, we make use of approximations to obtain faster estimation with simple MCMC. Comparisons among the methods developed here, and with the RJMCMC approach, are carried out and highlight that approximation-based methodology offers useful alternative inference tools for large epidemics, with a good trade-off between time cost and accuracy.

  5. The implementation of contour-based object orientation estimation algorithm in FPGA-based on-board vision system

    NASA Astrophysics Data System (ADS)

    Alpatov, Boris; Babayan, Pavel; Ershov, Maksim; Strotov, Valery

    2016-10-01

    This paper describes the implementation of an orientation estimation algorithm in an FPGA-based vision system. An approach to estimating the orientation of objects lacking axial symmetry is proposed. The suggested algorithm is intended to estimate the orientation of a specific known 3D object based on its 3D model. The proposed orientation estimation algorithm consists of two stages: learning and estimation. The learning stage is devoted to exploring the studied object. Using the 3D model, a set of training images is gathered by capturing the model from viewpoints evenly distributed on a sphere; the viewpoints are distributed according to the geosphere principle. The gathered training image set is used for calculating descriptors, which are then used in the estimation stage of the algorithm. The estimation stage focuses on the matching process between an observed image descriptor and the training image descriptors. The experimental research was performed using a set of images of an Airbus A380. The proposed orientation estimation algorithm showed good accuracy in all case studies. The real-time performance of the algorithm in the FPGA-based vision system was demonstrated.

  6. The Pathogen- and Incidence-Based DALY Approach: An Appropriated Methodology for Estimating the Burden of Infectious Diseases

    PubMed Central

    Mangen, Marie-Josée J.; Plass, Dietrich; Havelaar, Arie H.; Gibbons, Cheryl L.; Cassini, Alessandro; Mühlberger, Nikolai; van Lier, Alies; Haagsma, Juanita A.; Brooke, R. John; Lai, Taavi; de Waure, Chiara; Kramarz, Piotr; Kretzschmar, Mirjam E. E.

    2013-01-01

    In 2009, the European Centre for Disease Prevention and Control initiated the ‘Burden of Communicable Diseases in Europe (BCoDE)’ project to generate evidence-based and comparable burden-of-disease estimates of infectious diseases in Europe. The burden-of-disease metric used was the Disability-Adjusted Life Year (DALY), composed of years of life lost due to premature death (YLL) and due to disability (YLD). To better represent infectious diseases, a pathogen-based approach was used, linking incident cases to sequelae through outcome trees. Health outcomes were included if an evidence-based causal relationship between infection and outcome was established. Life expectancy and disability weights were taken from the Global Burden of Disease Study and alternative studies. Disease progression parameters were based on literature. Country-specific incidence was based on surveillance data corrected for underestimation. Non-typhoidal Salmonella spp. and Campylobacter spp. were used for illustration. Using the incidence- and pathogen-based DALY approach, the total burden for Salmonella spp. and Campylobacter spp. was estimated at 730 DALYs and at 1,780 DALYs per year in the Netherlands (average of 2005–2007). Sequelae accounted for 56% and 82% of the total burden of Salmonella spp. and Campylobacter spp., respectively. In the case of infectious diseases, the incidence- and pathogen-based DALY methodology allows a more comprehensive calculation of the disease burden, as subsequent sequelae are fully taken into account. Not considering subsequent sequelae would strongly underestimate the burden of infectious diseases. Estimates can be used to support prioritisation and comparison of infectious diseases and other health conditions, both within a country and between countries. PMID:24278167
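    The arithmetic of the incidence- and pathogen-based approach is straightforward once an outcome tree is specified: incident cases are multiplied down the tree, non-fatal outcomes contribute YLD, and deaths contribute YLL. The sketch below uses invented numbers, not the BCoDE parameters, and omits refinements such as age-group stratification.

```python
def pathogen_daly(incidence, outcome_tree, life_expectancy=80.0):
    """Annual DALY = YLL + YLD for one pathogen.

    incidence    : annual number of incident infections
    outcome_tree : health outcomes causally linked to infection; each has a
                   conditional probability and either a disability weight with
                   a duration (years) or an age at death for fatal outcomes
    """
    yll = yld = 0.0
    for o in outcome_tree:
        cases = incidence * o["probability"]
        if "fatal_age" in o:                          # deaths contribute YLL
            yll += cases * max(life_expectancy - o["fatal_age"], 0.0)
        else:                                         # sequelae contribute YLD
            yld += cases * o["weight"] * o["duration"]
    return yll + yld

# Illustrative outcome tree for a foodborne pathogen (numbers are invented):
tree = [
    {"probability": 1.0,    "weight": 0.10, "duration": 0.02},  # acute illness
    {"probability": 0.01,   "weight": 0.21, "duration": 0.50},  # sequela
    {"probability": 0.0005, "fatal_age": 70.0},                 # fatal case
]
total_daly = pathogen_daly(incidence=50_000, outcome_tree=tree)
```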

  7. Time series modeling by a regression approach based on a latent process.

    PubMed

    Chamroukhi, Faicel; Samé, Allou; Govaert, Gérard; Aknin, Patrice

    2009-01-01

    Time series are used in many domains including finance, engineering, economics and bioinformatics, generally to represent the change of a measurement over time. Modeling techniques may then be used to give a synthetic representation of such data. A new approach for time series modeling is proposed in this paper. It consists of a regression model incorporating a discrete hidden logistic process that allows switching smoothly or abruptly between different polynomial regression models. The model parameters are estimated by the maximum likelihood method performed by a dedicated Expectation Maximization (EM) algorithm. The M step of the EM algorithm uses a multi-class Iterative Reweighted Least-Squares (IRLS) algorithm to estimate the hidden process parameters. To evaluate the proposed approach, an experimental study on simulated data and real-world data was performed using two alternative approaches: a heteroskedastic piecewise regression model using a global optimization algorithm based on dynamic programming, and a Hidden Markov Regression Model whose parameters are estimated by the Baum-Welch algorithm. Finally, in the context of the remote monitoring of components of the French railway infrastructure, and more particularly the switch mechanism, the proposed approach has been applied to modeling and classifying time series representing the condition measurements acquired during switch operations.

  8. A Robust Sound Source Localization Approach for Microphone Array with Model Errors

    NASA Astrophysics Data System (ADS)

    Xiao, Hua; Shao, Huai-Zong; Peng, Qi-Cong

    In this paper, a robust sound source localization approach is proposed. The approach retains good performance even when model errors exist. Compared with previous work in this field, the contributions of this paper are as follows. First, an improved broad-band and near-field array model is proposed. It takes array gain and phase perturbations into account and is based on the actual positions of the elements. It can be used in arbitrary planar geometry arrays. Second, a subspace model errors estimation algorithm and a Weighted 2-Dimension Multiple Signal Classification (W2D-MUSIC) algorithm are proposed. The subspace model errors estimation algorithm estimates the unknown parameters of the array model, i.e., gain, phase perturbations, and positions of the elements, with high accuracy. The performance of this algorithm improves as the SNR or the number of snapshots increases. The W2D-MUSIC algorithm, based on the improved array model, is implemented to locate sound sources. These two algorithms compose the robust sound source localization approach. The more accurate steering vectors can be provided for further processing such as adaptive beamforming algorithms. Numerical examples confirm the effectiveness of the proposed approach.
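    As background, the standard narrowband far-field MUSIC pseudo-spectrum that the W2D-MUSIC algorithm builds on is sketched below; the weighting and the near-field, broad-band extensions of the paper are not included, and all names are illustrative.

```python
import numpy as np

def music_spectrum(X, positions, freq, angles, n_src=1, c=343.0):
    """Narrowband far-field MUSIC pseudo-spectrum for a planar microphone array.

    X         : (n_mics, n_snapshots) complex snapshots at frequency `freq`
    positions : (n_mics, 2) microphone coordinates in metres
    angles    : candidate azimuths in radians
    """
    R = X @ X.conj().T / X.shape[1]                  # sample covariance matrix
    _, eigvec = np.linalg.eigh(R)                    # eigenvalues in ascending order
    En = eigvec[:, :-n_src]                          # noise subspace
    k = 2.0 * np.pi * freq / c                       # wavenumber
    p = np.empty(len(angles))
    for i, th in enumerate(angles):
        d = positions @ np.array([np.cos(th), np.sin(th)])  # path-length offsets
        a = np.exp(-1j * k * d)                      # steering vector
        p[i] = 1.0 / np.real(a.conj() @ En @ En.conj().T @ a)
    return p                                         # peaks at source directions
```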

  9. A tiered approach for integrating exposure and dosimetry with ...

    EPA Pesticide Factsheets

    High-throughput (HT) risk screening approaches apply in vitro dose-response data to estimate potential health risks that arise from exposure to chemicals. However, much uncertainty is inherent in relating bioactivities observed in an in vitro system to the perturbations of biological mechanisms that lead to apical adverse health outcomes in living organisms. The chemical-agnostic Adverse Outcome Pathway (AOP) framework addresses this uncertainty by acting as a scaffold onto which pathway-based data can be arranged to aid in the understanding of in vitro toxicity testing results. In addition, risk estimation also requires reconciling chemical concentrations sufficient to produce bioactivity in vitro with concentrations that trigger a molecular initiating event (MIE) at the relevant biological target in vivo. Such target site exposures (TSEs) can be estimated using computational models to integrate exposure information with a chemical’s absorption, distribution, metabolism, and elimination (ADME) processes. In this presentation, the utility of a tiered approach for integrating exposure, ADME, and hazard into risk-based decision making will be demonstrated using several case studies, along with the investigation of how uncertainties in exposure and ADME might impact risk estimates. These case studies involve 1) identifying and prioritizing chemicals capable of altering biological pathways based on their potential to reach an in vivo target; 2) evaluating the infl

  10. Replica and extreme-value analysis of the Jarzynski free-energy estimator

    NASA Astrophysics Data System (ADS)

    Palassini, Matteo; Ritort, Felix

    2008-03-01

    We analyze the Jarzynski estimator of free-energy differences from nonequilibrium work measurements. By a simple mapping onto Derrida's Random Energy Model, we obtain a scaling limit for the expectation of the bias of the estimator. We then derive analytical approximations in three different regimes of the scaling parameter x = log(N)/W, where N is the number of measurements and W the mean dissipated work. Our approach is valid for a generic distribution of the dissipated work, and is based on a replica symmetry breaking scheme for x >> 1, the asymptotic theory of extreme value statistics for x << 1, and a direct approach for x near one. The combination of the three analytic approximations describes Monte Carlo data for the expectation value of the estimator well, for a wide range of values of N, from N=1 to large N, and for different work distributions. Based on these results, we introduce improved free-energy estimators and discuss the application to the analysis of experimental data.
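    The estimator itself and its finite-N bias are easy to reproduce numerically. The sketch below implements the Jarzynski estimator with a log-sum-exp for stability and checks the bias against the exact Gaussian-work result, where the free-energy difference equals the mean work minus var(W)/(2kT); all parameter values are illustrative.

```python
import numpy as np

def jarzynski_estimate(work, kT=1.0):
    """Free-energy estimate: -kT * log( (1/N) * sum_i exp(-W_i / kT) )."""
    w = np.asarray(work) / kT
    return -kT * (np.logaddexp.reduce(-w) - np.log(len(w)))

# For Gaussian work distributions the exact answer is <W> - var(W)/(2 kT),
# so the finite-N bias of the estimator can be measured directly.
rng = np.random.default_rng(0)
mean_w, var_w, kT = 5.0, 4.0, 1.0
exact = mean_w - var_w / (2.0 * kT)
for N in (10, 100, 10_000):
    est = np.mean([jarzynski_estimate(rng.normal(mean_w, np.sqrt(var_w), N), kT)
                   for _ in range(200)])
    print(N, est - exact)                            # bias shrinks as N grows
```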

  11. Estimating atmospheric visibility using synergy of MODIS data and ground-based observations

    NASA Astrophysics Data System (ADS)

    Komeilian, H.; Mohyeddin Bateni, S.; Xu, T.; Nielson, J.

    2015-05-01

    Dust events are intricate climatic processes, which can have adverse effects on human health, safety, and the environment. In this study, two data mining approaches, namely, a back-propagation artificial neural network (BP ANN) and support vector regression (SVR), were used to estimate atmospheric visibility through the synergistic use of Moderate Resolution Imaging Spectroradiometer (MODIS) Level 1B (L1B) data and ground-based observations at fourteen stations in the province of Khuzestan (southwestern Iran), during 2009-2010. Reflectance and brightness temperature in different bands (from MODIS) along with in situ meteorological data were input to the models to estimate atmospheric visibility. The results show that both models can accurately estimate atmospheric visibility. The visibility estimates from the BP ANN had a root-mean-square error (RMSE) and Pearson's correlation coefficient (R) of 0.67 and 0.69, respectively. The corresponding RMSE and R from the SVR model were 0.59 and 0.71, implying that the SVR approach outperforms the BP ANN.
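    A minimal version of the SVR pipeline, assuming a generic feature matrix of MODIS band values plus meteorological variables, might look as follows; the placeholder data and hyperparameters are assumptions, not those of the study.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Placeholder predictors: MODIS band reflectances / brightness temperatures
# plus in situ meteorological variables; target: observed visibility.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
y = X[:, 0] - 0.5 * X[:, 3] + rng.normal(0.0, 0.3, size=200)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
rmse = -cross_val_score(model, X, y, cv=5,
                        scoring="neg_root_mean_squared_error").mean()
```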

  12. Estimating the Economic Value of National Parks with Count Data Models Using On-Site, Secondary Data: The Case of the Great Sand Dunes National Park and Preserve

    NASA Astrophysics Data System (ADS)

    Heberling, Matthew T.; Templeton, Joshua J.

    2009-04-01

    We estimate an individual travel cost model for Great Sand Dunes National Park and Preserve (GSD) in Colorado using on-site, secondary data. The purpose of the on-site survey was to help the National Park Service better understand the visitors of GSD; it was not intended for a travel cost model. Variables such as travel cost and income were estimated based on respondents’ Zip Codes. Following approaches found in the literature, a negative binomial model corrected for truncation and endogenous stratification fit the data the best. We estimate a recreational benefit of US$89/visitor/year or US$54/visitor/24-h recreational day (in 2002 U.S. dollars). Based on the approach presented here, there are other data sets for national parks, preserves, and battlefields where travel cost models could be estimated and used to support National Park Service management decisions.

  13. Regression model estimation of early season crop proportions: North Dakota, some preliminary results

    NASA Technical Reports Server (NTRS)

    Lin, K. K. (Principal Investigator)

    1982-01-01

    To estimate crop proportions early in the season, an approach is proposed based on: use of a regression-based prediction equation to obtain an a priori estimate for specific major crop groups; modification of this estimate using current-year LANDSAT and weather data; and a breakdown of the major crop groups into specific crops by regression models. Results from the development and evaluation of appropriate regression models for the first portion of the proposed approach are presented. The results show that the model predicts 1980 crop proportions very well at both county and crop reporting district levels. In terms of planted acreage, the model underpredicted the 1980 published planted acreage at the county level by 9.1 percent. It predicted the 1980 published planted acreage at the crop reporting district level almost exactly, overpredicting it by just 0.92 percent.

  14. Prognostics of Lithium-Ion Batteries Based on Wavelet Denoising and DE-RVM

    PubMed Central

    Zhang, Chaolong; He, Yigang; Yuan, Lifeng; Xiang, Sheng; Wang, Jinping

    2015-01-01

    Lithium-ion batteries are widely used in many electronic systems. It is therefore important, yet very difficult, to estimate a lithium-ion battery's remaining useful life (RUL). One important reason is that the measured battery capacity data are often subject to different levels of noise pollution. In this paper, a novel battery capacity prognostics approach is presented to estimate the RUL of lithium-ion batteries. Wavelet denoising is performed with different thresholds in order to weaken the strong noise and remove the weak noise. A relevance vector machine (RVM) improved by the differential evolution (DE) algorithm is utilized to estimate the battery RUL based on the denoised data. An experiment comprising capacity prognostics cases for batteries 5 and 18 was conducted, validating that the proposed approach can closely predict the trend of the battery capacity trajectory and accurately estimate the battery RUL. PMID:26413090

  15. A Direct Latent Variable Modeling Based Method for Point and Interval Estimation of Coefficient Alpha

    ERIC Educational Resources Information Center

    Raykov, Tenko; Marcoulides, George A.

    2015-01-01

    A direct approach to point and interval estimation of Cronbach's coefficient alpha for multiple component measuring instruments is outlined. The procedure is based on a latent variable modeling application with widely circulated software. As a by-product, using sample data the method permits ascertaining whether the population discrepancy…
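    For reference, the classical point estimate that the latent variable procedure targets is the familiar sample coefficient alpha; a direct computation is sketched below (the paper's interval estimation via structural equation modeling software is not reproduced here).

```python
import numpy as np

def cronbach_alpha(items):
    """Sample coefficient alpha from an (n_subjects, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1.0) * (1.0 - item_var / total_var)
```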

  16. Efficient Bayesian experimental design for contaminant source identification

    NASA Astrophysics Data System (ADS)

    Zhang, Jiangjiang; Zeng, Lingzao; Chen, Cheng; Chen, Dingjiang; Wu, Laosheng

    2015-01-01

    In this study, an efficient full Bayesian approach is developed for the optimal design of sampling well locations and the identification of source parameters of groundwater contaminants. An information measure, i.e., the relative entropy, is employed to quantify the information gain from concentration measurements in identifying unknown parameters. In this approach, the sampling locations that give the maximum expected relative entropy are selected as the optimal design. After the sampling locations are determined, a Bayesian approach based on Markov Chain Monte Carlo (MCMC) is used to estimate unknown parameters. In both the design and estimation, the contaminant transport equation is required to be solved many times to evaluate the likelihood. To reduce the computational burden, an interpolation method based on the adaptive sparse grid is utilized to construct a surrogate for the contaminant transport equation. The approximated likelihood can be evaluated directly from the surrogate, which greatly accelerates the design and estimation process. The accuracy and efficiency of our approach are demonstrated through numerical case studies. It is shown that the methods can be used to assist in both single sampling location and monitoring network design for contaminant source identification in groundwater.

  17. An assessment of the suspended sediment rating curve approach for load estimation on the Rivers Bandon and Owenabue, Ireland

    NASA Astrophysics Data System (ADS)

    Harrington, Seán T.; Harrington, Joseph R.

    2013-03-01

    This paper presents an assessment of the suspended sediment rating curve approach for load estimation on the Rivers Bandon and Owenabue in Ireland. The rivers, located in the South of Ireland, are underlain by sandstone, limestones and mudstones, and the catchments are primarily agricultural. A comprehensive database of suspended sediment data is not available for rivers in Ireland. For such situations, it is common to estimate suspended sediment concentrations from the flow rate using the suspended sediment rating curve approach. These rating curves are most commonly constructed by applying linear regression to the logarithms of flow and suspended sediment concentration or by applying a power curve to normal data. Both methods are assessed in this paper for the Rivers Bandon and Owenabue. Turbidity-based suspended sediment loads are presented for each river based on continuous (15 min) flow data, and the use of turbidity as a surrogate for suspended sediment concentration is investigated. A database of paired flow rate and suspended sediment concentration values, collected between the years 2004 and 2011, is used to generate rating curves for each river. From these, suspended sediment load estimates using the rating curve approach are made and compared to the turbidity-based loads for each river. Loads are also estimated using stage- and seasonally-separated rating curves and daily flow data, for comparison purposes. The most accurate load estimate on the River Bandon is found using a stage-separated power curve, while the most accurate load estimate on the River Owenabue is found using a general power curve. Maximum full monthly errors of -76% to +63% are found on the River Bandon, with errors of -65% to +359% on the River Owenabue. The average monthly error on the River Bandon is -12%, with an average error of +87% on the River Owenabue. The use of daily flow data in the load estimation process does not result in a significant loss of accuracy on either river. Historic load estimates (with a 95% confidence interval) were hindcast from the flow record, and average annual loads of 7253 ± 673 tonnes on the River Bandon and 1935 ± 325 tonnes on the River Owenabue were estimated to be passing the gauging stations.
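    The log-log rating curve fit described above, with Duan's smearing estimator to correct the back-transformation bias, can be sketched as follows; the variable names and units are assumptions.

```python
import numpy as np

def fit_rating_curve(Q, C):
    """Fit C = a * Q**b by linear regression in log space (Q, C > 0), applying
    Duan's (1983) smearing factor to correct the back-transformation bias."""
    b, log_a = np.polyfit(np.log(Q), np.log(C), 1)
    resid = np.log(C) - (log_a + b * np.log(Q))
    smear = np.mean(np.exp(resid))
    return lambda q: smear * np.exp(log_a) * np.asarray(q) ** b

def load_estimate(rating, Q_series, dt=900.0):
    """Suspended-sediment load in kg over a record of 15-min flows:
    C [mg/L = g/m3] * Q [m3/s] * dt [s] gives grams per interval."""
    return np.sum(rating(Q_series) * Q_series) * dt / 1000.0
```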

  18. Spectrally based bathymetric mapping of a dynamic, sand‐bedded channel: Niobrara River, Nebraska, USA

    USGS Publications Warehouse

    Dilbone, Elizabeth; Legleiter, Carl; Alexander, Jason S.; McElroy, Brandon

    2018-01-01

    Methods for spectrally based mapping of river bathymetry have been developed and tested in clear‐flowing, gravel‐bed channels, with limited application to turbid, sand‐bed rivers. This study used hyperspectral images and field surveys from the dynamic, sandy Niobrara River to evaluate three depth retrieval methods. The first regression‐based approach, optimal band ratio analysis (OBRA), paired in situ depth measurements with image pixel values to estimate depth. The second approach used ground‐based field spectra to calibrate an OBRA relationship. The third technique, image‐to‐depth quantile transformation (IDQT), estimated depth by linking the cumulative distribution function (CDF) of depth to the CDF of an image‐derived variable. OBRA yielded the lowest depth retrieval mean error (0.005 m) and highest observed versus predicted R2 (0.817). Although misalignment between field and image data did not compromise the performance of OBRA in this study, poor georeferencing could limit regression‐based approaches such as OBRA in dynamic, sand‐bedded rivers. Field spectroscopy‐based depth maps exhibited a mean error with a slight shallow bias (0.068 m) but provided reliable estimates for most of the study reach. IDQT had a strong deep bias but provided informative relative depth maps. Overprediction of depth by IDQT highlights the need for an unbiased sampling strategy to define the depth CDF. Although each of the techniques we tested demonstrated potential to provide accurate depth estimates in sand‐bed rivers, each method also was subject to certain constraints and limitations.
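    The regression core of OBRA, searching all band pairs for the log-ratio that best predicts depth, is compact enough to sketch; this simplified linear version assumes positive reflectances and omits the study's additional processing.

```python
import numpy as np
from itertools import combinations

def obra(bands, depths):
    """Optimal band ratio analysis (simplified, linear form).

    bands  : (n_points, n_bands) positive reflectance spectra at field points
    depths : (n_points,) surveyed depths
    Returns the best band pair, a depth predictor, and its R^2.
    """
    best = (-np.inf, None, None)
    for i, j in combinations(range(bands.shape[1]), 2):
        x = np.log(bands[:, i] / bands[:, j])        # candidate image-derived variable
        r2 = np.corrcoef(x, depths)[0, 1] ** 2
        if r2 > best[0]:
            best = (r2, (i, j), np.polyfit(x, depths, 1))
    r2, pair, (slope, intercept) = best
    predictor = lambda B: slope * np.log(B[:, pair[0]] / B[:, pair[1]]) + intercept
    return pair, predictor, r2
```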

  19. Adaptive Modal Identification for Flutter Suppression Control

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.; Drew, Michael; Swei, Sean S.

    2016-01-01

    In this paper, we develop an adaptive modal identification method for identifying the frequencies and damping of a flutter mode based on model-reference adaptive control (MRAC) and least-squares methods. The least-squares parameter estimation achieves parameter convergence in the presence of persistent excitation, whereas the MRAC parameter estimation does not guarantee parameter convergence. Two adaptive flutter suppression control approaches are developed: one based on MRAC and the other based on the least-squares method. The MRAC flutter suppression control is designed as an integral part of the parameter estimation, where the feedback signal is used to estimate the modal information. On the other hand, the separation principle of control and estimation is applied to the least-squares method: the least-squares modal identification is used to perform parameter estimation.
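    A batch least-squares version of the modal identification step is sketched below: fit an AR(2) model to samples of a lightly damped mode and convert the discrete poles to frequency and damping. This illustrates the least-squares branch only, under assumed names; the MRAC branch and the recursive update are not shown.

```python
import numpy as np

def modal_id_lsq(y, dt):
    """Least-squares identification of one lightly damped mode from samples y.
    Fits y[k] = a1*y[k-1] + a2*y[k-2], then maps the discrete pole pair to a
    natural frequency (Hz) and damping ratio."""
    Y = np.column_stack([y[1:-1], y[:-2]])
    a1, a2 = np.linalg.lstsq(Y, y[2:], rcond=None)[0]
    pole = np.roots([1.0, -a1, -a2])[0]              # one of the conjugate poles
    s = np.log(pole) / dt                            # continuous-time pole
    wn = abs(s)
    return wn / (2.0 * np.pi), -s.real / wn          # frequency [Hz], damping ratio

# Example: a 2 Hz mode with 2% damping sampled at 100 Hz.
t = np.arange(0.0, 10.0, 0.01)
wn, zeta = 2.0 * np.pi * 2.0, 0.02
y = np.exp(-zeta * wn * t) * np.sin(wn * np.sqrt(1 - zeta**2) * t)
f_hat, zeta_hat = modal_id_lsq(y, dt=0.01)
```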

  20. Application of the quantum spin glass theory to image restoration.

    PubMed

    Inoue, J I

    2001-04-01

    Quantum fluctuation is introduced into the Markov random-field model for image restoration in the context of a Bayesian approach. We investigate the dependence of the quality of black-and-white image restoration on the quantum fluctuation by making use of statistical mechanics. We find that the maximum posterior marginal (MPM) estimate based on the quantum fluctuation gives a finer restoration than the maximum a posteriori estimate or the thermal-fluctuation-based MPM estimate.

  1. Temperature dependence of strain energy and thermodynamic properties of V2O5-based single-walled nanotubes: Zone-folding approach.

    PubMed

    Porsev, Vitaly V; Bandura, Andrei V; Evarestov, Robert A

    2016-06-15

    A zone-folding approach is applied to estimate the thermodynamic properties of V2O5-based nanotubes. The results obtained are compared with those from direct calculations. It is shown that the zone-folding approximation allows an accurate estimation of nanotube thermodynamic properties and gives a gain in computation time compared to direct calculations. Both approaches show that temperature effects do not change the relative stability of V2O5 free layers and nanotubes derived from the α- and γ-phase. The internal energy thermal contributions to the strain energy of nanotubes are small and can be ignored. © 2016 Wiley Periodicals, Inc.

  2. Total variation approach for adaptive nonuniformity correction in focal-plane arrays.

    PubMed

    Vera, Esteban; Meza, Pablo; Torres, Sergio

    2011-01-15

    In this Letter we propose an adaptive scene-based nonuniformity correction method for fixed-pattern noise removal in imaging arrays. It is based on the minimization of the total variation of the estimated irradiance, and the resulting functional is optimized by an isotropic total variation approach making use of an alternating minimization strategy. The proposed method provides enhanced results when applied to a diverse set of real IR imagery, accurately estimating the nonuniformity parameters of each detector in the focal-plane array at a fast convergence rate, while also producing fewer ghosting artifacts.

  3. Forecasting peaks of seasonal influenza epidemics.

    PubMed

    Nsoesie, Elaine; Marathe, Madhav; Brownstein, John

    2013-06-21

    We present a framework for near real-time forecast of influenza epidemics using a simulation optimization approach. The method combines an individual-based model and a simple root finding optimization method for parameter estimation and forecasting. In this study, retrospective forecasts were generated for seasonal influenza epidemics using web-based estimates of influenza activity from Google Flu Trends for 2004-2005, 2007-2008 and 2012-2013 flu seasons. In some cases, the peak could be forecasted 5-6 weeks ahead. This study adds to existing resources for influenza forecasting and the proposed method can be used in conjunction with other approaches in an ensemble framework.

  4. New screening approach for risk assessment of pesticides in ambient air

    NASA Astrophysics Data System (ADS)

    Yusà, Vicent; Coscollà, Clara; Millet, Maurice

    2014-10-01

    We present a novel screening approach for inhalation risk assessment of currently used pesticides (CUPs) in ambient air, based on measurements of pesticide levels in the inhalable fraction of the particulate matter (PM10). Total concentrations in ambient air (gas + particle phases) were estimated using a theoretical model of the distribution of semi-volatile organic compounds between the gas and the particulate phase, based on the octanol-air partition coefficient (Koa) of each pesticide. The proposed approach was used in a pilot study conducted at a rural station in Valencia (Spain) from April through October 2010. Twenty out of 82 analysed pesticides were detected, at average concentrations ranging from 1.63 to 117.01 pg m-3. For adults, children and infants, the estimated chronic inhalation risk, expressed as a Hazard Quotient (HQ), was <1 for all pesticides. Likewise, the cumulative exposure for the detected organophosphorus, pyrethroid and carbamate pesticides was estimated using the Hazard Index (HI) as the metric, which was less than 1 for the three families of pesticides assessed. The cancer risk estimated for the detected pesticides classified as Likely or Possible carcinogens was less than 1.15E-7 for infants. In our opinion, the screening approach proposed could be used in the monitoring and risk assessment of pesticides in ambient air.
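    The gas-particle scaling step can be illustrated with the widely used Harner-Bidleman form of the Koa absorption model, log Kp = log Koa + log f_om - 11.91 (Kp in m3/ug); whether this is the exact model used in the paper is an assumption, as are the parameter values and function names below.

```python
import numpy as np

def particle_bound_fraction(log_koa, tsp_ug_m3, f_om=0.2):
    """Fraction of a semi-volatile compound sorbed to particles.

    log_koa   : log10 octanol-air partition coefficient of the pesticide
    tsp_ug_m3 : total suspended particle concentration [ug/m3]
    f_om      : organic-matter fraction of the particles (assumed value)
    """
    kp = 10.0 ** (log_koa + np.log10(f_om) - 11.91)  # Kp in m3/ug (assumed model)
    return kp * tsp_ug_m3 / (1.0 + kp * tsp_ug_m3)

def total_air_concentration(c_particle, phi):
    """Scale a particle-phase (PM10) measurement to gas + particle phases."""
    return c_particle / phi

def hazard_quotient(conc_pg_m3, inhalation_m3_d, body_kg, ael_mg_kg_d):
    """Chronic inhalation HQ = estimated daily dose / acceptable exposure level."""
    dose = conc_pg_m3 * 1e-9 * inhalation_m3_d / body_kg   # mg/kg/day
    return dose / ael_mg_kg_d
```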

  5. Incorrect interpretation of carbon mass balance biases global vegetation fire emission estimates.

    PubMed

    Surawski, N C; Sullivan, A L; Roxburgh, S H; Meyer, C P Mick; Polglase, P J

    2016-05-05

    Vegetation fires are a complex phenomenon in the Earth system with many global impacts, including influences on global climate. Estimating carbon emissions from vegetation fires relies on a carbon mass balance technique that has evolved with two different interpretations. Databases of global vegetation fire emissions use an approach based on 'consumed biomass', which is an approximation to the biogeochemically correct 'burnt carbon' approach. Here we show that applying the 'consumed biomass' approach to global emissions from vegetation fires leads to annual overestimates of carbon emitted to the atmosphere by 4.0% or 100 Tg compared with the 'burnt carbon' approach. The required correction is significant and represents ∼9% of the net global forest carbon sink estimated annually. Vegetation fire emission studies should use the 'burnt carbon' approach to quantify and understand the role of this burnt carbon, which is not emitted to the atmosphere, as a sink enriched in carbon.

  6. An Integrated Optimal Estimation Approach to Spitzer Space Telescope Focal Plane Survey

    NASA Technical Reports Server (NTRS)

    Bayard, David S.; Kang, Bryan H.; Brugarolas, Paul B.; Boussalis, D.

    2004-01-01

    This paper discusses an accurate and efficient method for focal plane survey that was used for the Spitzer Space Telescope. The approach is based on using a high-order 37-state Instrument Pointing Frame (IPF) Kalman filter that combines both engineering parameters and science parameters into a single filter formulation. In this approach, engineering parameters such as pointing alignments, thermomechanical drift and gyro drifts are estimated along with science parameters such as plate scales and optical distortions. This integrated approach has many advantages compared to estimating the engineering and science parameters separately. The resulting focal plane survey approach is applicable to a diverse range of science instruments such as imaging cameras, spectroscopy slits, and scanning-type arrays alike. The paper summarizes results from applying the IPF Kalman filter to calibrating the Spitzer Space Telescope focal plane, containing the MIPS, IRAC, and IRS science instrument arrays.

  7. Accuracy Estimation and Parameter Advising for Protein Multiple Sequence Alignment

    PubMed Central

    DeBlasio, Dan

    2013-01-01

    We develop a novel and general approach to estimating the accuracy of multiple sequence alignments without knowledge of a reference alignment, and use our approach to address a new task that we call parameter advising: the problem of choosing values for alignment scoring function parameters from a given set of choices to maximize the accuracy of a computed alignment. For protein alignments, we consider twelve independent features that contribute to a quality alignment. An accuracy estimator is learned that is a polynomial function of these features; its coefficients are determined by minimizing its error with respect to true accuracy using mathematical optimization. Compared to prior approaches for estimating accuracy, our new approach (a) introduces novel feature functions that measure nonlocal properties of an alignment yet are fast to evaluate, (b) considers more general classes of estimators beyond linear combinations of features, and (c) develops new regression formulations for learning an estimator from examples; in addition, for parameter advising, we (d) determine the optimal parameter set of a given cardinality, which specifies the best parameter values from which to choose. Our estimator, which we call Facet (for “feature-based accuracy estimator”), yields a parameter advisor that on the hardest benchmarks provides more than a 27% improvement in accuracy over the best default parameter choice, and for parameter advising significantly outperforms the best prior approaches to assessing alignment quality. PMID:23489379
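    The regression idea (learning a polynomial function of feature values whose coefficients minimize the error against true accuracies) can be sketched with ordinary ridge-regularized least squares; the actual Facet estimator uses its own feature functions and optimization formulations, so everything below is a generic stand-in.

```python
import numpy as np
from itertools import combinations_with_replacement

def _design(F, degree):
    """Design matrix of all monomials of the features up to `degree`."""
    cols = [np.ones(len(F))]
    for d in range(1, degree + 1):
        for idx in combinations_with_replacement(range(F.shape[1]), d):
            cols.append(np.prod(F[:, idx], axis=1))
    return np.column_stack(cols)

def fit_accuracy_estimator(F, acc, degree=2, ridge=1e-3):
    """Fit estimator coefficients by regularized least squares.

    F   : (n_alignments, n_features) feature values in [0, 1]
    acc : (n_alignments,) true accuracies of benchmark alignments
    """
    X = _design(F, degree)
    w = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ acc)
    return lambda Fnew: np.clip(_design(np.asarray(Fnew), degree) @ w, 0.0, 1.0)
```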

  8. Mean Recency Period for Estimation of HIV-1 Incidence with the BED-Capture EIA and Bio-Rad Avidity in Persons Diagnosed in the United States with Subtype B.

    PubMed

    Hanson, Debra L; Song, Ruiguang; Masciotra, Silvina; Hernandez, Angela; Dobbs, Trudy L; Parekh, Bharat S; Owen, S Michele; Green, Timothy A

    2016-01-01

    HIV incidence estimates are used to monitor HIV-1 infection in the United States. The use of laboratory biomarkers that distinguish recent from longstanding infection to quantify HIV incidence relies on accurate knowledge of the average time that individuals spend in a transient state of recent infection between seroconversion and reaching a specified biomarker cutoff value. This paper describes five estimation procedures from two general statistical approaches: a survival time approach, and an approach that fits binomial models of the probability of being classified as recently infected as a function of time since seroconversion. We compare these procedures for estimating the mean duration of recent infection (MDRI) for two biomarkers used by the U.S. National HIV Surveillance System for determination of HIV incidence, the Aware BED EIA HIV-1 incidence test (BED) and the avidity-based, modified Bio-Rad HIV-1/HIV-2 plus O ELISA (BRAI) assay. Collectively, 953 specimens from 220 HIV-1 subtype B seroconverters, taken from 5 cohorts, were tested with a biomarker assay. Estimates of MDRI using the non-parametric survival approach were 198.4 days (SD 13.0) for BED and 239.6 days (SD 13.9) for BRAI using cutoff values of 0.8 normalized optical density and 30%, respectively. The probability of remaining in the recent state as a function of time since seroconversion, based upon this revised statistical approach, can be applied in the calculation of annual incidence in the United States.
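    The binomial-model idea can be sketched as follows: fit P(recent | t) as a smooth function of time since seroconversion and integrate it from 0 to a post-infection cutoff T. The choice of regressors and the use of scikit-learn here are illustrative assumptions, not the surveillance system's procedure.

```python
import numpy as np
from scipy.integrate import trapezoid
from sklearn.linear_model import LogisticRegression

def mdri_binomial(t_days, recent, T=730.0):
    """Mean duration of recent infection (days).

    t_days : time since seroconversion for each specimen
    recent : 1 if the specimen is classified 'recent' by the assay, else 0
    T      : upper limit of integration (e.g., two years post-seroconversion)
    """
    t_days = np.asarray(t_days, dtype=float)
    X = np.c_[t_days, np.sqrt(t_days)]               # simple flexible regressors
    model = LogisticRegression(C=1e6).fit(X, np.asarray(recent))
    grid = np.linspace(0.0, T, 1000)
    p = model.predict_proba(np.c_[grid, np.sqrt(grid)])[:, 1]
    return trapezoid(p, grid)                        # MDRI = integral of P(recent|t)
```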

  9. Improving regression-model-based streamwater constituent load estimates derived from serially correlated data

    USGS Publications Warehouse

    Aulenbach, Brent T.

    2013-01-01

    A regression-model based approach is a commonly used, efficient method for estimating streamwater constituent load when there is a relationship between streamwater constituent concentration and continuous variables such as streamwater discharge, season and time. A subsetting experiment using a 30-year dataset of daily suspended sediment observations from the Mississippi River at Thebes, Illinois, was performed to determine optimal sampling frequency, model calibration period length, and regression model methodology, as well as to determine the effect of serial correlation of model residuals on load estimate precision. Two regression-based methods were used to estimate streamwater loads: the Adjusted Maximum Likelihood Estimator (AMLE), and the composite method, a hybrid load estimation approach. While both methods accurately and precisely estimated loads at the model's calibration period time scale, precision was progressively worse at shorter reporting periods, from annually to monthly. Serial correlation in model residuals resulted in the observed AMLE precision being significantly worse than the model-calculated standard errors of prediction. The composite method effectively improved upon AMLE loads for shorter reporting periods, but required a sampling interval of 15 days or shorter when the serial correlations in the observed load residuals were greater than 0.15. AMLE precision was better at shorter sampling intervals and when using the shortest model calibration periods, such that the regression models better fit the temporal changes in the concentration–discharge relationship. The models with the largest errors typically had poor high-flow sampling coverage, resulting in unrepresentative models. Increasing sampling frequency and/or targeted high-flow sampling are more efficient approaches to ensure sufficient sampling and to avoid poorly performing models than increasing calibration period length.

  10. The quantile regression approach to efficiency measurement: insights from Monte Carlo simulations.

    PubMed

    Liu, Chunping; Laporte, Audrey; Ferguson, Brian S

    2008-09-01

    In the health economics literature there is an ongoing debate over approaches used to estimate the efficiency of health systems at various levels, from the level of the individual hospital - or nursing home - up to that of the health system as a whole. The two most widely used approaches to evaluating the efficiency with which various units deliver care are non-parametric data envelopment analysis (DEA) and parametric stochastic frontier analysis (SFA). Productivity researchers tend to have very strong preferences over which methodology to use for efficiency estimation. In this paper, we use Monte Carlo simulation to compare the performance of DEA and SFA in terms of their ability to accurately estimate efficiency. We also evaluate quantile regression as a potential alternative approach. A Cobb-Douglas production function, random error terms and a technical inefficiency term with different distributions are used to calculate the observed output. The results, based on these experiments, suggest that neither DEA nor SFA can be regarded as clearly dominant, and that, depending on the quantile estimated, the quantile regression approach may be a useful addition to the armamentarium of methods for estimating technical efficiency.
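    The quantile-regression alternative evaluated in the paper can be illustrated on a simulated Cobb-Douglas frontier: estimating a high conditional quantile of log output approximates the frontier, and residuals yield efficiency scores. The simulation setup below is a generic stand-in, not the paper's Monte Carlo design.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated Cobb-Douglas production in logs:
# ln y = b0 + b1*ln x1 + b2*ln x2 + v - u, with one-sided inefficiency u >= 0.
rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({"lx1": rng.normal(1.0, 0.3, n),
                   "lx2": rng.normal(1.0, 0.3, n)})
u = np.abs(rng.normal(0.0, 0.3, n))                  # technical inefficiency
df["ly"] = 0.5 + 0.4 * df["lx1"] + 0.5 * df["lx2"] + rng.normal(0, 0.1, n) - u

# Fit a high conditional quantile as a proxy for the production frontier.
fit = smf.quantreg("ly ~ lx1 + lx2", df).fit(q=0.95)
efficiency = np.exp(df["ly"] - fit.predict(df))      # <= ~1 for most firms
```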

  11. Reconciling divergent estimates of oil and gas methane emissions

    PubMed Central

    Zavala-Araiza, Daniel; Lyon, David R.; Alvarez, Ramón A.; Davis, Kenneth J.; Harriss, Robert; Herndon, Scott C.; Karion, Anna; Kort, Eric Adam; Lamb, Brian K.; Lan, Xin; Marchese, Anthony J.; Pacala, Stephen W.; Robinson, Allen L.; Shepson, Paul B.; Sweeney, Colm; Talbot, Robert; Townsend-Small, Amy; Yacovitch, Tara I.; Zimmerle, Daniel J.; Hamburg, Steven P.

    2015-01-01

    Published estimates of methane emissions from atmospheric data (top-down approaches) exceed those from source-based inventories (bottom-up approaches), leading to conflicting claims about the climate implications of fuel switching from coal or petroleum to natural gas. Based on data from a coordinated campaign in the Barnett Shale oil and gas-producing region of Texas, we find that top-down and bottom-up estimates of both total and fossil methane emissions agree within statistical confidence intervals (relative differences are 10% for fossil methane and 0.1% for total methane). We reduced uncertainty in top-down estimates by using repeated mass balance measurements, as well as ethane as a fingerprint for source attribution. Similarly, our bottom-up estimate incorporates a more complete count of facilities than past inventories, which omitted a significant number of major sources, and more effectively accounts for the influence of large emission sources using a statistical estimator that integrates observations from multiple ground-based measurement datasets. Two percent of oil and gas facilities in the Barnett account for half of methane emissions at any given time, and high-emitting facilities appear to be spatiotemporally variable. Measured oil and gas methane emissions are 90% larger than estimates based on the US Environmental Protection Agency's Greenhouse Gas Inventory and correspond to 1.5% of natural gas production. This rate of methane loss increases the 20-y climate impacts of natural gas consumed in the region by roughly 50%. PMID:26644584

  12. Estimating recharge rates with analytic element models and parameter estimation

    USGS Publications Warehouse

    Dripps, W.R.; Hunt, R.J.; Anderson, M.P.

    2006-01-01

    Quantifying the spatial and temporal distribution of recharge is usually a prerequisite for effective ground water flow modeling. In this study, an analytic element (AE) code (GFLOW) was used with a nonlinear parameter estimation code (UCODE) to quantify the spatial and temporal distribution of recharge using measured base flows as calibration targets. The ease and flexibility of AE model construction and evaluation make this approach well suited for recharge estimation. An AE flow model of an undeveloped watershed in northern Wisconsin was optimized to match median annual base flows at four stream gages for 1996 to 2000 to demonstrate the approach. Initial optimizations that assumed a constant distributed recharge rate provided good matches (within 5%) to most of the annual base flow estimates, but discrepancies of >12% at certain gages suggested that a single value of recharge for the entire watershed is inappropriate. Subsequent optimizations that allowed for spatially distributed recharge zones based on the distribution of vegetation types improved the fit and confirmed that vegetation can influence spatial recharge variability in this watershed. Temporally, the annual recharge values varied >2.5-fold between 1996 and 2000, during which there was an observed 1.7-fold difference in annual precipitation, underscoring the influence of nonclimatic factors on interannual recharge variability for regional flow modeling. The final recharge values compared favorably with more labor-intensive field measurements of recharge and with results from other studies, supporting the utility of using linked AE-parameter estimation codes for recharge estimation. Copyright © 2005 The Author(s).

  13. A framework for the meta-analysis of Bland-Altman studies based on a limits of agreement approach.

    PubMed

    Tipton, Elizabeth; Shuster, Jonathan

    2017-10-15

    Bland-Altman method comparison studies are common in the medical sciences and are used to compare a new measure to a gold-standard (often costlier or more invasive) measure. The distribution of the differences between paired measurements is summarized by two statistics, the 'bias' and standard deviation, and these measures are combined to provide estimates of the limits of agreement (LoA). When these LoA are within the bounds of clinically insignificant differences, the new non-invasive measure is preferred. Very often, multiple Bland-Altman studies have been conducted comparing the same two measures, and random-effects meta-analysis provides a means to pool these estimates. We provide a framework for the meta-analysis of Bland-Altman studies, including methods for estimating the LoA and measures of uncertainty (i.e., confidence intervals). Importantly, these LoA are likely to be wider than those typically reported in Bland-Altman meta-analyses. Frequently, Bland-Altman studies report results based on repeated measures designs but do not properly adjust for this design in the analysis. Meta-analyses of Bland-Altman studies frequently exclude these studies for this reason. We provide a meta-analytic approach that allows inclusion of estimates from these studies. This includes adjustments to the estimate of the standard deviation and a method for pooling the estimates based upon robust variance estimation. An example is included based on a previously published meta-analysis. Copyright © 2017 John Wiley & Sons, Ltd.
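
    The single-study quantities being pooled are simple to compute; a sketch under the usual normality assumption (the paper's repeated-measures adjustment and robust-variance pooling are deliberately omitted here):

        import numpy as np

        def limits_of_agreement(new, gold):
            """Bland-Altman summary for one study: bias, SD, and 95% LoA."""
            d = np.asarray(new, float) - np.asarray(gold, float)
            bias, sd = d.mean(), d.std(ddof=1)
            return bias, sd, (bias - 1.96 * sd, bias + 1.96 * sd)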

  14. Continuous Indoor Positioning Fusing WiFi, Smartphone Sensors and Landmarks

    PubMed Central

    Deng, Zhi-An; Wang, Guofeng; Qin, Danyang; Na, Zhenyu; Cui, Yang; Chen, Juan

    2016-01-01

    To exploit the complementary strengths of WiFi positioning, pedestrian dead reckoning (PDR), and landmarks, we propose a novel fusion approach based on an extended Kalman filter (EKF). For WiFi positioning, unlike previous fusion approaches setting measurement noise parameters empirically, we deploy a kernel density estimation-based model to adaptively measure the related measurement noise statistics. Furthermore, a trusted area of WiFi positioning defined by fusion results of previous step and WiFi signal outlier detection are exploited to reduce computational cost and improve WiFi positioning accuracy. For PDR, we integrate a gyroscope, an accelerometer, and a magnetometer to determine the user heading based on another EKF model. To reduce accumulation error of PDR and enable continuous indoor positioning, not only the positioning results but also the heading estimations are recalibrated by indoor landmarks. Experimental results in a realistic indoor environment show that the proposed fusion approach achieves substantial positioning accuracy improvement than individual positioning approaches including PDR and WiFi positioning. PMID:27608019
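
    The fusion step can be illustrated with a linear Kalman filter cycle (a special case of the EKF for a position-only state): the PDR displacement drives the prediction, and a WiFi fix, when available, corrects it. The function name and the covariance inputs are hypothetical; in the paper the measurement noise R comes from the kernel density estimation-based model rather than being fixed.

        import numpy as np

        def fuse_step(pos, P, pdr_step, Q, wifi_fix=None, R=None):
            pos = pos + pdr_step              # predict with dead-reckoned displacement
            P = P + Q                         # process noise grows the uncertainty
            if wifi_fix is not None:
                K = P @ np.linalg.inv(P + R)  # Kalman gain (measurement matrix H = I)
                pos = pos + K @ (wifi_fix - pos)
                P = (np.eye(len(pos)) - K) @ P
            return pos, P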

  16. Structure-Based Druggability Assessment of the Mammalian Structural Proteome with Inclusion of Light Protein Flexibility

    PubMed Central

    Loving, Kathryn A.; Lin, Andy; Cheng, Alan C.

    2014-01-01

    Advances reported over the last few years and the increasing availability of protein crystal structure data have greatly improved structure-based druggability approaches. However, in practice, nearly all druggability estimation methods are applied to protein crystal structures as rigid proteins, with protein flexibility often not directly addressed. The inclusion of protein flexibility is important in correctly identifying the druggability of pockets that would be missed by methods based solely on the rigid crystal structure. These include cryptic pockets and flexible pockets often found at protein-protein interaction interfaces. Here, we apply an approach that uses protein modeling in concert with druggability estimation to account for light protein backbone movement and protein side-chain flexibility in protein binding sites. We assess the advantages and limitations of this approach on widely used protein druggability sets. Applying the approach to all mammalian protein crystal structures in the PDB results in identification of 69 proteins with potential druggable cryptic pockets. PMID:25079060

  17. Estimation of census and effective population sizes: the increasing usefulness of DNA-based approaches

    Treesearch

    Gordon Luikart; Nils Ryman; David A. Tallmon; Michael K. Schwartz; Fred W. Allendorf

    2010-01-01

    Population census size (NC) and effective population sizes (Ne) are two crucial parameters that influence population viability, wildlife management decisions, and conservation planning. Genetic estimators of both NC and Ne are increasingly widely used because molecular markers are increasingly available, statistical methods are improving rapidly, and genetic estimators...

  18. Five Methods for Estimating Angoff Cut Scores with IRT

    ERIC Educational Resources Information Center

    Wyse, Adam E.

    2017-01-01

    This article illustrates five different methods for estimating Angoff cut scores using item response theory (IRT) models. These include maximum likelihood (ML), expected a priori (EAP), modal a priori (MAP), and weighted maximum likelihood (WML) estimators, as well as the most commonly used approach based on translating ratings through the test…

  19. Annual forest inventory estimates based on the moving average

    Treesearch

    Francis A. Roesch; James R. Steinman; Michael T. Thompson

    2002-01-01

    Three interpretations of the simple moving average estimator, as applied to the USDA Forest Service's annual forest inventory design, are presented. A corresponding approach to composite estimation over arbitrarily defined land areas and time intervals is given for each interpretation, under the assumption that the investigator is armed with only the spatial/...

  20. Investigations of potential bias in the estimation of lambda using Pradel's (1996) model for capture-recapture data

    USGS Publications Warehouse

    Hines, James E.; Nichols, James D.

    2002-01-01

    Pradel's (1996) temporal symmetry model permitting direct estimation and modelling of population growth rate, λi, provides a potentially useful tool for the study of population dynamics using marked animals. Because of its recent publication date, the approach has not seen much use, and there have been virtually no investigations directed at robustness of the resulting estimators. Here we consider several potential sources of bias, all motivated by specific uses of this estimation approach. We consider sampling situations in which the study area expands with time and present an analytic expression for the bias in λi. We next consider trap response in capture probabilities and heterogeneous capture probabilities and compute large-sample and simulation-based approximations of resulting bias in λi. These approximations indicate that trap response is an especially important assumption violation that can produce substantial bias. Finally, we consider losses on capture and emphasize the importance of selecting the estimator for λi that is appropriate to the question being addressed. For studies based on only sighting and resighting data, Pradel's (1996) λi′ is the appropriate estimator.

  1. Comparison of emission estimates for non-CO2 greenhouse gases from livestock and poultry in Korea from 1990 to 2010.

    PubMed

    Paik, Chunhyun; Chung, Yongjoo; Kim, Hugon; Kim, Young Jin

    2016-04-01

    It has often been claimed that non-carbon dioxide greenhouse gases (NCGGs), such as methane, nitrous oxide and fluorinated greenhouse gases, are significant contributors to climate change. Here we investigate emission estimates of methane and nitrous oxide from livestock and poultry production, which is recognized as a major source of those NCGGs, in Korea over the period of 1990 through 2010. Based on the data on livestock and poultry populations, emission estimates of methane and nitrous oxide are first derived based on the Tier 1 approach. Then, the Tier 2 approach is adopted to obtain emission estimates of methane and nitrous oxide from cattle, which are known to be the largest sources of these NCGGs and account for about 70% of emissions from livestock and poultry in Korea. The result indicates that the Tier 2 estimates of methane and nitrous oxide emissions from enteric fermentation and manure management are significantly different from the Tier 1 estimates over the analysis period. © 2015 Japanese Society of Animal Science.
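
    The Tier 1 logic is a population-times-default-emission-factor calculation; a toy sketch with invented factors (real defaults come from the IPCC guidelines and depend on climate region and livestock category):

        # Illustrative enteric-fermentation factors, kg CH4 head^-1 yr^-1 (not IPCC values)
        EF_ENTERIC = {"dairy_cattle": 68.0, "non_dairy_cattle": 47.0, "swine": 1.0}

        def tier1_ch4_gg(populations):
            """Tier 1 methane estimate in Gg/yr from head counts."""
            return sum(n * EF_ENTERIC[sp] for sp, n in populations.items()) / 1e6

        print(tier1_ch4_gg({"dairy_cattle": 500_000, "swine": 9_000_000}))

    Tier 2 replaces the default factors with country-specific values derived from feed intake and manure management data, which is why the two tiers can diverge as the record reports.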

  2. Battery state-of-charge estimation using approximate least squares

    NASA Astrophysics Data System (ADS)

    Unterrieder, C.; Zhang, C.; Lunglmayr, M.; Priewasser, R.; Marsili, S.; Huemer, M.

    2015-03-01

    In recent years, much effort has been spent to extend the runtime of battery-powered electronic applications. In order to improve the utilization of the available cell capacity, high precision estimation approaches for battery-specific parameters are needed. In this work, an approximate least squares estimation scheme is proposed for the estimation of the battery state-of-charge (SoC). The SoC is determined based on the prediction of the battery's electromotive force. The proposed approach allows for an improved re-initialization of the Coulomb counting (CC) based SoC estimation method. Experimental results for an implementation of the estimation scheme on a fuel gauge system on chip are illustrated. Implementation details and design guidelines are presented. The performance of the presented concept is evaluated for realistic operating conditions (temperature effects, aging, standby current, etc.). For the considered test case of a GSM/UMTS load current pattern of a mobile phone, the proposed method is able to re-initialize the CC-method with a high accuracy, while state-of-the-art methods fail to perform a re-initialization.
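
    The re-initialization idea can be sketched as an ordinary least squares fit of the cell model v = emf - R*i over a batch of recent terminal samples, followed by inversion of a known OCV(SoC) curve; the paper's approximate least squares scheme is a hardware-friendly refinement of this. The function and curve names are assumptions.

        import numpy as np

        def reinit_soc(v_samples, i_samples, ocv_values, soc_grid):
            """Estimate EMF and internal resistance, then look up the SoC."""
            i = np.asarray(i_samples, float)
            A = np.column_stack([np.ones_like(i), -i])       # model: v = emf - R*i
            (emf, r_int), *_ = np.linalg.lstsq(A, np.asarray(v_samples, float), rcond=None)
            soc = np.interp(emf, ocv_values, soc_grid)       # ocv_values must be increasing
            return soc, emf, r_int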

  3. Bayesian Estimation of Combined Accuracy for Tests with Verification Bias

    PubMed Central

    Broemeling, Lyle D.

    2011-01-01

    This presentation will emphasize the estimation of the combined accuracy of two or more tests when verification bias is present. Verification bias occurs when some of the subjects are not verified by the gold standard. The approach is Bayesian, where the estimation of test accuracy is based on the posterior distribution of the relevant parameter. Accuracy of two combined binary tests is estimated employing either a “believe the positive” or “believe the negative” rule, and the true and false positive fractions for each rule are computed for the two tests. In order to perform the analysis, the missing at random assumption is imposed, and an interesting example is provided by estimating the combined accuracy of CT and MRI to diagnose lung cancer. The Bayesian approach is extended to two ordinal tests when verification bias is present, and the accuracy of the combined tests is based on the ROC area of the risk function. An example involving mammography with two readers with extreme verification bias illustrates the estimation of the combined test accuracy for ordinal tests. PMID:26859487
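
    For conditionally independent binary tests, the two combination rules reduce to closed-form sensitivity/specificity expressions; the record's contribution is estimating these under verification bias via the posterior, which this sketch does not attempt:

        def believe_positive(se1, sp1, se2, sp2):
            """Call positive if EITHER test is positive (independence assumed)."""
            return se1 + se2 - se1 * se2, sp1 * sp2   # (sensitivity, specificity)

        def believe_negative(se1, sp1, se2, sp2):
            """Call positive only if BOTH tests are positive."""
            return se1 * se2, sp1 + sp2 - sp1 * sp2

        print(believe_positive(0.85, 0.90, 0.80, 0.95))  # illustrative accuracies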

  4. DNA-based approach to aging martens (Martes americana and M. caurina)

    Treesearch

    Jonathan N. Pauli; John P. Whiteman; Bruce G. Marcot; Terry M. McClean; Merav Ben-David

    2011-01-01

    Demographic structure is central to understanding the dynamics of animal populations. However, determining the age of free-ranging mammals is difficult, and currently impossible when sampling with noninvasive, genetic-based approaches. We present a method to estimate age class by combining measures of telomere lengths with other biologically meaningful covariates in a...

  5. The Thin Border between Light and Shadow

    ERIC Educational Resources Information Center

    Guglielmino, M.; Gratton, L. M.; Oss, S.

    2010-01-01

    We propose a simple, direct estimate of the Sun's diameter based on penumbra observation and measurement in a two-level approach, the first for middle-school pupils and making use of simple geometrical arguments, the second more appropriate to high-school students and based on a slightly more sophisticated approach. (Contains 5 figures.)

  6. A new combined approach on Hurst exponent estimate and its applications in realized volatility

    NASA Astrophysics Data System (ADS)

    Luo, Yi; Huang, Yirong

    2018-02-01

    The purpose of this paper is to propose a new estimator of the Hurst exponent based on the combined information of the conventional rescaled range methods. We demonstrate the superiority of the proposed estimator through Monte Carlo simulations and through applications to estimating the Hurst exponent of daily volatility series in the Chinese stock market. Moreover, we show the impact of the choice of estimator and of structural breaks on the resulting Hurst exponent estimates.
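
    The conventional building block the estimator combines is the classical rescaled range statistic; a compact sketch (dyadic window sizes, no small-sample correction):

        import numpy as np

        def hurst_rs(x, min_chunk=8):
            """Hurst exponent as the slope of log(R/S) versus log(window size)."""
            x = np.asarray(x, float)
            sizes, rs = [], []
            n = min_chunk
            while n <= len(x) // 2:
                vals = []
                for i in range(0, len(x) - n + 1, n):
                    c = x[i:i + n]
                    z = np.cumsum(c - c.mean())      # cumulative deviation from the mean
                    s = c.std(ddof=1)
                    if s > 0:
                        vals.append((z.max() - z.min()) / s)
                sizes.append(n)
                rs.append(np.mean(vals))
                n *= 2
            slope, _ = np.polyfit(np.log(sizes), np.log(rs), 1)
            return slope

        print(hurst_rs(np.random.default_rng(1).normal(size=4096)))  # near 0.5 for white noise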

  7. Regional estimation of base recharge to ground water using water balance and a base-flow index.

    PubMed

    Szilagyi, Jozsef; Harvey, F Edwin; Ayers, Jerry F

    2003-01-01

    Naturally occurring long-term mean annual base recharge to ground water in Nebraska was estimated with the help of a water-balance approach and an objective automated technique for base-flow separation involving minimal parameter-optimization requirements. Base recharge is equal to total recharge minus the amount of evapotranspiration coming directly from ground water. The estimation of evapotranspiration in the water-balance equation avoids the need to specify a contributing drainage area for ground water, which in certain cases may be considerably different from the drainage area for surface runoff. Evapotranspiration was calculated by the WREVAP model at the Solar and Meteorological Surface Observation Network (SAMSON) sites. Long-term mean annual base recharge was derived by determining the product of estimated long-term mean annual runoff (the difference between precipitation and evapotranspiration) and the base-flow index (BFI). The BFI was calculated from discharge data obtained from the U.S. Geological Survey's gauging stations in Nebraska. Mapping was achieved by using geographic information systems (GIS) and geostatistics. This approach is best suited for regional-scale applications. It does not require complex hydrogeologic modeling nor detailed knowledge of soil characteristics, vegetation cover, or land-use practices. Long-term mean annual base recharge rates in excess of 110 mm/year resulted in the extreme eastern part of Nebraska. The western portion of the state expressed rates of only 15 to 20 mm annually, while the Sandhills region of north-central Nebraska was estimated to receive twice as much base recharge (40 to 50 mm/year) as areas south of it.
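
    The core arithmetic of the approach is compact: base recharge is the water-balance runoff scaled by the base-flow index. A sketch with illustrative values in the range the record reports for the Sandhills:

        def base_recharge_mm(precip_mm, et_mm, baseflow, total_flow):
            """Base recharge = (P - ET) * BFI, all long-term mean annual values."""
            bfi = baseflow / total_flow        # base-flow index from separated hydrograph
            return (precip_mm - et_mm) * bfi

        print(base_recharge_mm(precip_mm=560.0, et_mm=480.0, baseflow=30.0, total_flow=60.0))  # 40 mm/yr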

  8. Cloud cover and solar disk state estimation using all-sky images: deep neural networks approach compared to routine methods

    NASA Astrophysics Data System (ADS)

    Krinitskiy, Mikhail; Sinitsyn, Alexey

    2017-04-01

    Shortwave radiation is an important component of the surface heat budget over sea and land. Estimating it requires accurate observations of cloud conditions, including total cloud cover and the spatial and temporal cloud structure. Although cloud cover is widely observed visually, building accurate SW radiation parameterizations also requires that cloud structure be quantified with precise instrumental measurements. Several state-of-the-art land-based cloud cameras already satisfy researchers' needs, but their major disadvantage is the inaccuracy of the all-sky image processing algorithms, which typically yield uncertainties of 2-4 octa in cloud cover estimates, with a resulting true-scoring cloud cover accuracy of about 7%. Moreover, none of these algorithms determines cloud types. We developed an approach for estimating cloud cover and structure that provides much more accurate estimates and also measures additional characteristics. The method is based on a synthetic controlling index, the "grayness rate index", that we introduced in 2014. Since then this index, used together with a "background sunburn effect suppression" technique, has proven highly effective at detecting thin clouds. This extension of the routine algorithm significantly increased the accuracy of total cloud cover estimation across various sky image states: errors decreased to a mean squared error of about 1.5 octa, and the resulting true-scoring accuracy exceeds 38%. The main source of uncertainty in this approach is error in determining the solar disk state. Although the deep neural network approach lets us estimate the solar disk state with 94% accuracy, the final total cloud estimate is still unsatisfactory. To solve this problem completely, we applied a set of machine learning algorithms directly to the total cloud cover estimation problem. The accuracy varies with the choice of algorithm; deep neural networks performed best, with an accuracy of more than 96%. We will demonstrate several approaches and the most influential statistical features of all-sky images that let the algorithm reach this accuracy. Using our new optical package, a set of over 480,000 samples was collected in several sea missions in 2014-2016, along with concurrent standard human-observed and instrumentally recorded meteorological parameters. We will present the results of the field measurements and discuss some remaining problems and the potential for further development of the machine learning approach.

  9. Using a multinomial tree model for detecting mixtures in perceptual detection

    PubMed Central

    Chechile, Richard A.

    2014-01-01

    In the area of memory research there have been two rival approaches for memory measurement—signal detection theory (SDT) and multinomial processing trees (MPT). Both approaches provide measures for the quality of the memory representation, and both approaches provide for corrections for response bias. In recent years there has been a strong case advanced for the MPT approach because of the finding of stochastic mixtures on both target-present and target-absent tests. In this paper a case is made that perceptual detection, like memory recognition, involves a mixture of processes that are readily represented as a MPT model. The Chechile (2004) 6P memory measurement model is modified in order to apply to the case of perceptual detection. This new MPT model is called the Perceptual Detection (PD) model. The properties of the PD model are developed, and the model is applied to some existing data of a radiologist examining CT scans. The PD model brings out novel features that were absent from a standard SDT analysis. Also the topic of optimal parameter estimation on an individual-observer basis is explored with Monte Carlo simulations. These simulations reveal that the mean of the Bayesian posterior distribution is a more accurate estimator than the corresponding maximum likelihood estimator (MLE). Monte Carlo simulations also indicate that model estimates based on only the data from an individual observer can be improved upon (in the sense of being more accurate) by an adjustment that takes into account the parameter estimate based on the data pooled across all the observers. The adjustment of the estimate for an individual is discussed as an analogous statistical effect to the improvement over the individual MLE demonstrated by the James–Stein shrinkage estimator in the case of the multiple-group normal model. PMID:25018741
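
    The adjustment described at the end is an empirical-Bayes-style shrinkage of each observer's estimate toward the pooled value, analogous to James-Stein; a sketch with a hypothetical known sampling variance and between-observer variance:

        import numpy as np

        def shrink_to_pooled(theta_i, se2, tau2):
            """Pull individual estimates toward the group mean; the pull grows
            as sampling variance (se2) dominates between-observer variance (tau2)."""
            theta_i = np.asarray(theta_i, float)
            pooled = theta_i.mean()
            b = se2 / (se2 + tau2)             # shrinkage factor in [0, 1]
            return pooled + (1 - b) * (theta_i - pooled)

        print(shrink_to_pooled([0.2, 0.5, 0.9], se2=0.04, tau2=0.02))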

  10. Immunity-Based Optimal Estimation Approach for a New Real Time Group Elevator Dynamic Control Application for Energy and Time Saving

    PubMed Central

    Baygin, Mehmet; Karakose, Mehmet

    2013-01-01

    Nowadays, the increasing use of group elevator control systems, driven by increasing building heights, makes the development of high-performance algorithms necessary for saving time and energy. Although there are many studies on this topic in the literature, they are still not effective enough because they cannot evaluate all features of the system. In this paper, a new immune-system-based optimal estimation approach for the dynamic control of group elevator systems is studied. The method is based on estimating the optimal route by optimizing all calls with genetic, immune system, and DNA computing algorithms, and the result is evaluated with a fuzzy system. The system is dynamic with respect to the state of the calls and the choice of the most appropriate algorithm, and it also works adaptively with respect to parameters such as the number of floors and cabins. This new approach, which provides both time and energy savings, was implemented in real time. The experimental results demonstrate the effects of the method comparatively. The dynamic and adaptive control approach developed in this study achieves significant progress in time and energy efficiency for group elevator control systems compared with traditional methods. PMID:23935433

  11. Indirect estimation of signal-dependent noise with nonadaptive heterogeneous samples.

    PubMed

    Azzari, Lucio; Foi, Alessandro

    2014-08-01

    We consider the estimation of signal-dependent noise from a single image. Unlike conventional algorithms that build a scatterplot of local mean-variance pairs from either small or adaptively selected homogeneous data samples, our proposed approach relies on arbitrarily large patches of heterogeneous data extracted at random from the image. We demonstrate the feasibility of our approach through an extensive theoretical analysis based on a mixture of Gaussian distributions. A prototype algorithm is also developed in order to validate the approach on simulated data as well as on real camera raw images.

  12. Flaw depth sizing using guided waves

    NASA Astrophysics Data System (ADS)

    Cobb, Adam C.; Fisher, Jay L.

    2016-02-01

    Guided wave inspection technology is most often applied as a survey tool for pipeline inspection, where relatively low frequency ultrasonic waves, compared to those used in conventional ultrasonic nondestructive evaluation (NDE) methods, propagate along the structure; discontinuities cause a reflection of the sound back to the sensor for flaw detection. Although the technology can be used to accurately locate a flaw over long distances, its flaw sizing performance, especially for flaw depth estimation, is much poorer than that of other, local NDE approaches. Estimating flaw depth, as opposed to other parameters, is of particular interest for failure analysis of many structures. At present, most guided wave technologies estimate the size of a flaw by comparing the amplitude of the signal reflected from the flaw to that of a known geometric reflector, such as a circumferential weld in a pipeline. This process, however, requires many assumptions, such as weld geometry and flaw shape. Furthermore, it is highly dependent on the amplitude of the flaw reflection, which can vary with many factors, such as attenuation and sensor installation. To improve sizing performance, especially depth estimation, in a way that is not strictly amplitude dependent, this paper describes an approach to estimate the depth of a flaw based on a multimodal analysis. This approach eliminates the need to use geometric reflections for calibration and can be used for both pipeline and plate inspection applications. To verify the approach, a test set was manufactured on plate specimens with flaws of different widths and depths ranging from 5% to 100% of total wall thickness; 90% of these flaws were sized to within 15% of their true value. The initial multimodal sizing strategy and results are described.

  13. Modeling the uncertainty of estimating forest carbon stocks in China

    NASA Astrophysics Data System (ADS)

    Yue, T. X.; Wang, Y. F.; Du, Z. P.; Zhao, M. W.; Zhang, L. L.; Zhao, N.; Lu, M.; Larocque, G. R.; Wilson, J. P.

    2015-12-01

    Earth surface systems are controlled by a combination of global and local factors, which cannot be understood without accounting for both the local and global components. The system dynamics cannot be recovered from the global or local controls alone. Ground forest inventory is able to accurately estimate forest carbon stocks at sample plots, but these sample plots are too sparse to support the spatial simulation of carbon stocks with the required accuracy. Satellite observation is an important source of global information for the simulation of carbon stocks. Satellite remote sensing can supply spatially continuous information about the surface of forest carbon stocks, which is impossible from ground-based investigations, but its description has considerable uncertainty. In this paper, we validated the Lund-Potsdam-Jena dynamic global vegetation model (LPJ), the Kriging method for spatial interpolation of ground sample plots, and a satellite-observation-based approach, as well as an approach for fusing the ground sample plots with satellite observations and an assimilation method for incorporating the ground sample plots into LPJ. The validation results indicated that both the data fusion and data assimilation approaches reduced the uncertainty of estimating carbon stocks. Data fusion, which used an existing high accuracy surface modeling method to fuse the ground sample plots with the satellite observations (HASM-SOA), had the lowest uncertainty. The estimates produced with HASM-SOA were 26.1 and 28.4% more accurate than the satellite-based approach and spatial interpolation of the sample plots, respectively. Forest carbon stocks of 7.08 Pg were estimated for China during the period from 2004 to 2008, an increase of 2.24 Pg from 1984 to 2008, using the preferred HASM-SOA method.

  14. Cross-Sectional HIV Incidence Estimation in HIV Prevention Research

    PubMed Central

    Brookmeyer, Ron; Laeyendecker, Oliver; Donnell, Deborah; Eshleman, Susan H.

    2013-01-01

    Accurate methods for estimating HIV incidence from cross-sectional samples would have great utility in prevention research. This report describes recent advances in cross-sectional methods that significantly improve their accuracy. These improvements are based on the use of multiple biomarkers to identify recent HIV infections. These multi-assay algorithms (MAAs) use assays in a hierarchical approach for testing that minimizes the effort and cost of incidence estimation. These MAAs do not require mathematical adjustments for accurate estimation of the incidence rates in study populations in the year prior to sample collection. MAAs provide a practical, accurate, and cost-effective approach for cross-sectional HIV incidence estimation that can be used for HIV prevention research and global epidemic monitoring. PMID:23764641

  15. What controls the maximum magnitude of injection-induced earthquakes?

    NASA Astrophysics Data System (ADS)

    Eaton, D. W. S.

    2017-12-01

    Three different approaches for estimation of maximum magnitude are considered here, along with their implications for managing risk. The first approach is based on a deterministic limit for seismic moment proposed by McGarr (1976), which was originally designed for application to mining-induced seismicity. This approach has since been reformulated for earthquakes induced by fluid injection (McGarr, 2014). In essence, this method assumes that the upper limit for seismic moment release is constrained by the pressure-induced stress change. A deterministic limit is given by the product of shear modulus and the net injected fluid volume. This method is based on the assumptions that the medium is fully saturated and in a state of incipient failure. An alternative geometrical approach was proposed by Shapiro et al. (2011), who postulated that the rupture area for an induced earthquake falls entirely within the stimulated volume. This assumption reduces the maximum-magnitude problem to one of estimating the largest potential slip surface area within a given stimulated volume. Finally, van der Elst et al. (2016) proposed that the maximum observed magnitude, statistically speaking, is the expected maximum value for a finite sample drawn from an unbounded Gutenberg-Richter distribution. These three models imply different approaches for risk management. The deterministic method proposed by McGarr (2014) implies that a ceiling on the maximum magnitude can be imposed by limiting the net injected volume, whereas the approach developed by Shapiro et al. (2011) implies that the time-dependent maximum magnitude is governed by the spatial size of the microseismic event cloud. Finally, the sample-size hypothesis of van der Elst et al. (2016) implies that the best available estimate of the maximum magnitude is based upon the observed seismicity rate. The latter two approaches suggest that real-time monitoring is essential for effective management of risk. A reliable estimate of maximum plausible magnitude would clearly be beneficial for quantitative risk assessment of injection-induced seismicity.
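
    McGarr's deterministic cap is simple enough to state in a few lines; the shear modulus default and the Hanks-Kanamori moment-to-magnitude conversion are standard values, but treat the sketch as illustrative:

        import numpy as np

        def mcgarr_max_mw(delta_v_m3, shear_modulus_pa=3e10):
            """Upper-bound moment magnitude from net injected volume (McGarr, 2014)."""
            m0 = shear_modulus_pa * delta_v_m3      # seismic moment cap, N*m
            return (2.0 / 3.0) * (np.log10(m0) - 9.1)

        print(mcgarr_max_mw(1e5))  # ~Mw 4.2 for 1e5 m^3 of net injected volume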

  16. Semi-analytical approach to estimate railroad tank car shell puncture

    DOT National Transportation Integrated Search

    2011-03-16

    This paper describes the development of engineering-based equations to estimate the puncture resistance of railroad tank cars under a generalized shell or side impact scenario. Resistance to puncture is considered in terms of puncture velocity, which...

  17. Toward a transport-based analysis of nutrient spiraling and uptake in streams

    USGS Publications Warehouse

    Runkel, Robert L.

    2007-01-01

    Nutrient addition experiments are designed to study the cycling of nutrients in stream ecosystems where hydrologic and nonhydrologic processes determine nutrient fate. Because of the importance of hydrologic processes in stream ecosystems, a conceptual model known as nutrient spiraling is frequently employed. A central part of the nutrient spiraling approach is the determination of uptake length (SW), the average distance traveled by dissolved nutrients in the water column before uptake. Although the nutrient spiraling concept has been an invaluable tool in stream ecology, the current practice of estimating uptake length from steady-state nutrient data using linear regression (called here the "SW approach") presents a number of limitations. These limitations are identified by comparing the exponential SW equation with analytical solutions of a stream solute transport model. This comparison indicates that (1) SW is an aggregate measure of uptake that does not distinguish between main channel and storage zone processes, (2) SW is an integrated measure of numerous hydrologic and nonhydrologic processes; this process integration may lead to difficulties in interpretation when comparing estimates of SW, and (3) estimates of uptake velocity and areal uptake rate (Vf and U) based on SW are not independent of system hydrology. Given these findings, a transport-based approach to nutrient spiraling is presented for steady-state and time-series data sets. The transport-based approach for time-series data sets is suggested for future research on nutrient uptake as it provides a number of benefits, including the ability to (1) separately quantify main channel and storage zone uptake, (2) quantify specific hydrologic and nonhydrologic processes using various model parameters (process separation), (3) estimate uptake velocities and areal uptake rates that are independent of hydrologic effects, and (4) use short-term, non-plateau nutrient additions such that the effects of regeneration and mineralization are minimized. In summary, the transport-based, time-series approach provides a means of estimating traditional measures of nutrient uptake (SW, Vf, U) while providing additional information on the location and magnitude of uptake (main channel versus storage zone). Application of the transport-based approach to time-series data from Green Creek, Antarctica, indicates that the bulk of nitrate uptake (≈74% to 100%) occurred within the main channel, where benthic uptake by algal mats is a likely process. Substantial uptake (≈26%) also occurred in the storage zone of one reach, where uptake is attributed to the microbial community.

  18. A new approach to estimate time-to-cure from cancer registries data.

    PubMed

    Boussari, Olayidé; Romain, Gaëlle; Remontet, Laurent; Bossard, Nadine; Mounier, Morgane; Bouvier, Anne-Marie; Binquet, Christine; Colonna, Marc; Jooste, Valérie

    2018-04-01

    Cure models have been adapted to the net survival context to provide important indicators from population-based cancer data, such as the cure fraction and the time-to-cure. However, existing methods for computing time-to-cure suffer from some limitations. Cure models in the net survival framework are briefly overviewed, and a new definition of time-to-cure is introduced as the time TTC at which P(t), the estimated covariate-specific probability of being cured at a given time t after diagnosis, reaches 0.95. We applied flexible parametric cure models to data for four cancer sites provided by the French network of cancer registries (FRANCIM). Estimates of the time-to-cure by TTC and by two existing methods were then derived and compared. Cure fractions and probabilities P(t) were also computed. Depending on the age group, TTC ranged from 8 to 10 years for colorectal and pancreatic cancer and was nearly 12 years for breast cancer. In thyroid cancer patients under 55 years at diagnosis, TTC was strikingly 0: the probability of being cured was >0.95 just after diagnosis. This is an interesting result regarding the health insurance premiums of these patients. The estimated values of time-to-cure from the three approaches were close for colorectal cancer only. We propose a new approach, based on the estimated covariate-specific probability of being cured, to estimate time-to-cure. Compared to two existing methods, the new approach seems more intuitive and natural and less sensitive to the survival time distribution. Copyright © 2018 Elsevier Ltd. All rights reserved.
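
    Under a simple mixture cure model (a simplification of the flexible parametric models the authors fit), the probability of being cured given survival to t is P(t) = pi / (pi + (1 - pi) * Su(t)), and TTC is the first t at which it reaches 0.95. All numbers below are hypothetical:

        import numpy as np

        def time_to_cure(pi_cure, t_grid, s_uncured, threshold=0.95):
            """First time P(cured | alive at t) reaches the threshold."""
            p = pi_cure / (pi_cure + (1 - pi_cure) * s_uncured)
            idx = np.argmax(p >= threshold)
            return t_grid[idx] if p[idx] >= threshold else np.inf

        t = np.linspace(0, 15, 301)
        s_u = np.exp(-0.45 * t)                 # hypothetical uncured survival curve
        print(time_to_cure(0.55, t, s_u))       # ~6 years for these inputs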

  19. Estimation of corn yield using multi-temporal optical and radar satellite data and artificial neural networks

    NASA Astrophysics Data System (ADS)

    Fieuzal, R.; Marais Sicre, C.; Baup, F.

    2017-05-01

    The yield forecasting of corn constitutes a key issue in agricultural management, particularly in the context of demographic pressure and climate change. This study presents two methods to estimate yields using artificial neural networks: a diagnostic approach based on all the satellite data acquired throughout the agricultural season, and a real-time approach, where estimates are updated after each image was acquired in the microwave and optical domains (Formosat-2, Spot-4/5, TerraSAR-X, and Radarsat-2) throughout the crop cycle. The results are based on the Multispectral Crop Monitoring experimental campaign conducted by the CESBIO (Centre d'Études de la BIOsphère) laboratory in 2010 over an agricultural region in southwestern France. Among the tested sensor configurations (multi-frequency, multi-polarization or multi-source data), the best yield estimation performance (using the diagnostic approach) is obtained with reflectance acquired in the red wavelength region, with a coefficient of determination of 0.77 and an RMSE of 6.6 q ha-1. In the real-time approach the combination of red reflectance and CHH backscattering coefficients provides the best compromise between the accuracy and earliness of the yield estimate (more than 3 months before the harvest), with an R2 of 0.69 and an RMSE of 7.0 q ha-1 during the development of the central stem. The two best yield estimates are similar in most cases (for more than 80% of the monitored fields), and the differences are related to discrepancies in the crop growth cycle and/or the consequences of pests.

  20. Wireless Sensor Array Network DoA Estimation from Compressed Array Data via Joint Sparse Representation.

    PubMed

    Yu, Kai; Yin, Ming; Luo, Ji-An; Wang, Yingguan; Bao, Ming; Hu, Yu-Hen; Wang, Zhi

    2016-05-23

    A compressive sensing joint sparse representation direction of arrival estimation (CSJSR-DoA) approach is proposed for wireless sensor array networks (WSAN). By exploiting the joint spatial and spectral correlations of acoustic sensor array data, the CSJSR-DoA approach provides reliable DoA estimation using randomly-sampled acoustic sensor data. Since random sampling is performed at remote sensor arrays, less data need to be transmitted over lossy wireless channels to the fusion center (FC), and the expensive source coding operation at sensor nodes can be avoided. To investigate the spatial sparsity, an upper bound of the coherence of incoming sensor signals is derived assuming a linear sensor array configuration. This bound provides a theoretical constraint on the angular separation of acoustic sources to ensure the spatial sparsity of the received acoustic sensor array signals. The Cramér-Rao bound of the CSJSR-DoA estimator that quantifies the theoretical DoA estimation performance is also derived. The potential performance of the CSJSR-DoA approach is validated using both simulations and field experiments on a prototype WSAN platform. Compared to existing compressive sensing-based DoA estimation methods, the CSJSR-DoA approach shows significant performance improvement.

  1. Utilizing Expert Knowledge in Estimating Future STS Costs

    NASA Technical Reports Server (NTRS)

    Fortner, David B.; Ruiz-Torres, Alex J.

    2004-01-01

    A method of estimating the costs of future space transportation systems (STSs) involves classical activity-based cost (ABC) modeling combined with systematic utilization of the knowledge and opinions of experts to extend the process-flow knowledge of existing systems to systems that involve new materials and/or new architectures. The expert knowledge is particularly helpful in filling gaps that arise in computational models of processes because of inconsistencies in historical cost data. Heretofore, the costs of planned STSs have been estimated following a "top-down" approach that tends to force the architectures of new systems to incorporate process flows like those of the space shuttles. In this ABC-based method, one makes assumptions about the processes, but otherwise follows a "bottom-up" approach that does not force the new system architecture to incorporate a space-shuttle-like process flow. Prototype software has been developed to implement this method. Through further development of software, it should be possible to extend the method beyond the space program to almost any setting in which there is a need to estimate the costs of a new system and to extend the applicable knowledge base in order to make the estimate.

  2. StatSTEM: An efficient approach for accurate and precise model-based quantification of atomic resolution electron microscopy images.

    PubMed

    De Backer, A; van den Bos, K H W; Van den Broek, W; Sijbers, J; Van Aert, S

    2016-12-01

    An efficient model-based estimation algorithm is introduced to quantify the atomic column positions and intensities from atomic resolution (scanning) transmission electron microscopy ((S)TEM) images. This algorithm uses the least squares estimator on image segments containing individual columns fully accounting for overlap between neighbouring columns, enabling the analysis of a large field of view. For this algorithm, the accuracy and precision with which measurements for the atomic column positions and scattering cross-sections from annular dark field (ADF) STEM images can be estimated, has been investigated. The highest attainable precision is reached even for low dose images. Furthermore, the advantages of the model-based approach taking into account overlap between neighbouring columns are highlighted. This is done for the estimation of the distance between two neighbouring columns as a function of their distance and for the estimation of the scattering cross-section which is compared to the integrated intensity from a Voronoi cell. To provide end-users this well-established quantification method, a user friendly program, StatSTEM, is developed which is freely available under a GNU public license. Copyright © 2016 Elsevier B.V. All rights reserved.

  3. Partial volume correction of brain perfusion estimates using the inherent signal data of time-resolved arterial spin labeling.

    PubMed

    Ahlgren, André; Wirestam, Ronnie; Petersen, Esben Thade; Ståhlberg, Freddy; Knutsson, Linda

    2014-09-01

    Quantitative perfusion MRI based on arterial spin labeling (ASL) is hampered by partial volume effects (PVEs), arising due to voxel signal cross-contamination between different compartments. To address this issue, several partial volume correction (PVC) methods have been presented. Most previous methods rely on segmentation of a high-resolution T1-weighted morphological image volume that is coregistered to the low-resolution ASL data, making the result sensitive to errors in the segmentation and coregistration. In this work, we present a methodology for partial volume estimation and correction, using only low-resolution ASL data acquired with the QUASAR sequence. The methodology consists of a T1-based segmentation method, with no spatial priors, and a modified PVC method based on linear regression. The presented approach thus avoids prior assumptions about the spatial distribution of brain compartments, while also avoiding coregistration between different image volumes. Simulations based on a digital phantom as well as in vivo measurements in 10 volunteers were used to assess the performance of the proposed segmentation approach. The simulation results indicated that QUASAR data can be used for robust partial volume estimation, and this was confirmed by the in vivo experiments. The proposed PVC method yielded plausible perfusion maps, comparable to a reference method based on segmentation of a high-resolution morphological scan. Corrected gray matter (GM) perfusion was 47% higher than uncorrected values, suggesting a significant amount of PVEs in the data. Whereas the reference method failed to completely eliminate the dependence of perfusion estimates on the volume fraction, the novel approach produced GM perfusion values independent of GM volume fraction. The intra-subject coefficient of variation of corrected perfusion values was lowest for the proposed PVC method. As shown in this work, low-resolution partial volume estimation in connection with ASL perfusion estimation is feasible, and provides a promising tool for decoupling perfusion and tissue volume. Copyright © 2014 John Wiley & Sons, Ltd.

  4. Information operator approach applied to the retrieval of vertical distributions of atmospheric constituents from ground-based FTIR measurements

    NASA Astrophysics Data System (ADS)

    Senten, Cindy; de Mazière, Martine; Vanhaelewyn, Gauthier; Vigouroux, Corinne; Delmas, Robert

    2010-05-01

    The retrieval of information about the vertical distribution of an atmospheric absorber from high spectral resolution ground-based Fourier Transform infrared (FTIR) solar absorption spectra is an important issue in remote sensing. A frequently used technique at present is the optimal estimation method. This work introduces the application of an alternative method, namely the information operator approach (Doicu et al., 2007; Hoogen et al., 1999), for extracting the available information from such FTIR measurements. This approach has been implemented within the well-known retrieval code SFIT2, by adapting the optimal estimation method such as to take into account only the significant contributions to the solution. In particular, we demonstrate the feasibility of the method when applied to ground-based FTIR spectra taken at the southern (sub)tropical site Ile de La Réunion (21° S, 55° E) in 2007. A thorough comparison has been made between the retrieval results obtained with the original optimal estimation method and the ones obtained with the information operator approach, regarding profile and column stability, information content and corresponding full error budget evaluation. This has been done for the target species ozone (O3), methane (CH4), nitrous oxide (N2O), and carbon monoxide (CO). It is shown that the information operator approach performs well and is capable of achieving the same accuracy as optimal estimation, with a gain of stability and with the additional advantage of being less sensitive to the choice of a priori information as well as to the actual signal-to-noise ratio.
    Keywords: ground-based FTIR, solar absorption spectra, greenhouse gases, information operator approach.
    References: Doicu, A., Hilgers, S., von Bargen, A., Rozanov, A., Eichmann, K.-U., von Savigny, C., and Burrows, J. P.: Information operator approach and iterative regularization methods for atmospheric remote sensing, J. Quant. Spectrosc. Radiat. Transfer, 103, 340-350, 2007. Hoogen, R., Rozanov, V. V., and Burrows, J. P.: Ozone profiles from GOME satellite data: description and first validation, J. Geophys. Res., 104(D7), 8263-8280, 1999.

  5. Merging Satellite Precipitation Products for Improved Streamflow Simulations

    NASA Astrophysics Data System (ADS)

    Maggioni, V.; Massari, C.; Barbetta, S.; Camici, S.; Brocca, L.

    2017-12-01

    Accurate quantitative precipitation estimation is of great importance for water resources management, agricultural planning, and the forecasting and monitoring of natural hazards such as flash floods and landslides. In situ observations are limited around the Earth, especially in remote areas (e.g., complex terrain, dense vegetation), but currently available satellite precipitation products are able to provide global precipitation estimates with an accuracy that depends upon many factors (e.g., type of storms, temporal sampling, season, etc.). The recent SM2RAIN approach proposes to estimate rainfall by using satellite soil moisture observations. As opposed to traditional satellite precipitation methods, which sense cloud properties to retrieve instantaneous estimates, this new bottom-up approach makes use of two consecutive soil moisture measurements for obtaining an estimate of the fallen precipitation within the interval between two satellite overpasses. As a result, the nature of the measurement is different and complementary to that of classical precipitation products and could provide a valid perspective to substitute or improve current rainfall estimates. Therefore, we propose to merge SM2RAIN and the widely used TMPA 3B42RT product across Italy for a 6-year period (2010-2015) at daily/0.25deg temporal/spatial scale. Two conceptually different merging techniques are compared to each other and evaluated in terms of different statistical metrics, including hit bias, threat score, false alarm rates, and missed rainfall volumes. The first is based on the maximization of the temporal correlation with a reference dataset, while the second is based on a Bayesian approach, which provides a probabilistic satellite precipitation estimate derived from the joint probability distribution of observations and satellite estimates. The merged precipitation products perform better than the parent satellite-based products in terms of categorical statistics, as well as bias reduction and correlation coefficient, with the Bayesian approach being superior to the other method. A case study in the Tiber River basin is also presented to discuss the performance of forcing a hydrological model with the merged satellite precipitation product to simulate streamflow time series.

  6. Concurrent estimates of carbon export reveal physical biases in ΔO2/Ar-based net community production estimates in the Southern California Bight

    NASA Astrophysics Data System (ADS)

    Haskell, William Z.; Fleming, John C.

    2018-07-01

    Net community production (NCP) represents the amount of biologically-produced organic carbon that is available to be exported out of the surface ocean and is typically estimated using measurements of the O2/Ar ratio in the surface mixed layer under the assumption of negligible vertical transport. However, physical processes can significantly bias NCP estimates based on this in-situ tracer. It is actively debated whether discrepancies between O2/Ar-based NCP and carbon export estimates are due to differences in the location of biological production and export, or the result of physical biases. In this study, we calculate export production across the euphotic depth during two months of upwelling in Southern California in 2014, based on an estimate of the consumption rate of dissolved organic carbon (DOC) and the dissolved: total organic carbon consumption ratio below the euphotic depth. This estimate equals the concurrent O2/Ar-based NCP estimates over the same period that are corrected for physical biases, but is significantly different than NCP estimated without a correction for vertical transport. This comparison demonstrates that concurrent physical transport estimates would significantly improve O2/Ar-based estimates of NCP, particularly in settings with vertical advection. Potential approaches to mitigate this bias are discussed.

  7. Inverse approach to estimating larval dispersal reveals limited population connectivity along 700 km of wave-swept open coast.

    PubMed

    Hameed, Sarah O; White, J Wilson; Miller, Seth H; Nickols, Kerry J; Morgan, Steven G

    2016-06-29

    Demographic connectivity is fundamental to the persistence and resilience of metapopulations, but our understanding of the link between reproduction and recruitment is notoriously poor in open-coast marine populations. We provide the first evidence of high local retention and limited connectivity among populations spanning 700 km along an open coast in an upwelling system. Using extensive field measurements of fecundity, population size and settlement in concert with a Bayesian inverse modelling approach, we estimated that, on average, Petrolisthes cinctipes larvae disperse only 6.9 km (±25.0 km s.d.) from natal populations, despite spending approximately six weeks in an open-coast system that was once assumed to be broadly dispersive. This estimate differed substantially from our prior dispersal estimate (153.9 km) based on currents and larval duration and behaviour, revealing the importance of employing demographic data in larval dispersal estimates. Based on this estimate, we predict that demographic connectivity occurs predominantly among neighbouring populations less than 30 km apart. Comprehensive studies of larval production, settlement and connectivity are needed to advance an understanding of the ecology and evolution of life in the sea as well as to conserve ecosystems. Our novel approach provides a tractable framework for addressing these questions for species occurring in discrete coastal populations. © 2016 The Author(s).

  8. Adaptive windowing and windowless approaches to estimate dynamic functional brain connectivity

    NASA Astrophysics Data System (ADS)

    Yaesoubi, Maziar; Calhoun, Vince D.

    2017-08-01

    In this work, we discuss estimation of the dynamic dependence of a multi-variate signal. Commonly used approaches are often based on a locality assumption (e.g., sliding windows), which can miss spontaneous changes by blurring them with local but unrelated changes. We discuss recent approaches to overcome this limitation, including 1) a wavelet-space approach, which essentially adapts the window to the underlying frequency content, and 2) a sparse signal representation, which removes any locality assumption. The latter is especially useful when there is no prior knowledge of the validity of such an assumption, as in brain analysis. Results on several large resting-state fMRI data sets highlight the potential of these approaches.
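
    A minimal sketch of the sliding-window baseline the abstract criticizes, useful for seeing how an abrupt coupling change gets smeared across a window; the data and window length here are illustrative, not from the study.

```python
import numpy as np

def sliding_window_connectivity(ts, win=30, step=1):
    """Windowed Pearson correlation between two signals; ts has shape
    (T, 2). Illustrates the locality assumption: each estimate blends
    all changes, related or not, that fall inside the window."""
    out = []
    for start in range(0, ts.shape[0] - win + 1, step):
        seg = ts[start:start + win]
        out.append(np.corrcoef(seg[:, 0], seg[:, 1])[0, 1])
    return np.array(out)

rng = np.random.default_rng(0)
x = rng.standard_normal((300, 2))
x[150:, 1] += 0.8 * x[150:, 0]      # abrupt coupling change at t = 150
conn = sliding_window_connectivity(x)
print(conn[:5], conn[-5:])          # smooth ramp instead of a sharp jump
```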

  9. Uncertainty and inference in the world of paleoecological data

    NASA Astrophysics Data System (ADS)

    McLachlan, J. S.; Dawson, A.; Dietze, M.; Finley, M.; Hooten, M.; Itter, M.; Jackson, S. T.; Marlon, J. R.; Raiho, A.; Tipton, J.; Williams, J.

    2017-12-01

    Proxy data in paleoecology and paleoclimatology share a common set of biases and uncertainties: spatiotemporal error associated with the taphonomic processes of deposition, preservation, and dating; calibration error between proxy data and the ecosystem states of interest; and error in the interpolation of calibrated estimates across space and time. Researchers often account for this daunting suite of challenges by applying qualitative expert judgment: inferring the past states of ecosystems and assessing the level of uncertainty in those states subjectively. The effectiveness of this approach can be seen in the extent to which future observations confirm previous assertions. Hierarchical Bayesian (HB) statistical approaches offer an alternative way to account for multiple uncertainties in paleo data. HB estimates of ecosystem state formally account for each of the common uncertainties listed above. HB approaches can readily incorporate additional data, and data of different types, into estimates of ecosystem state. And HB estimates of ecosystem state, with associated uncertainty, can be used to constrain forecasts of ecosystem dynamics based on mechanistic ecosystem models using data assimilation. Decisions about how to structure an HB model are also subjective, which creates a parallel framework for deciding how to interpret data from the deep past. Our group, the Paleoecological Observatory Network (PalEON), has applied hierarchical Bayesian statistics to formally account for uncertainties in proxy-based estimates of past climate, fire, primary productivity, biomass, and vegetation composition. Our estimates often reveal new patterns of past ecosystem change, which is an unambiguously good thing, but we also often estimate a level of uncertainty that is uncomfortably high for many researchers. High levels of uncertainty are due to several features of the HB approach: spatiotemporal smoothing, the formal aggregation of multiple types of uncertainty, and a coarseness in statistical models of taphonomic processes. Each of these features provides useful opportunities for statisticians and data-generating researchers to assess what we know about the signal and the noise in paleo data and to improve inference about past changes in ecosystem state.

  10. Measuring population health: costs of alternative survey approaches in the Nouna Health and Demographic Surveillance System in rural Burkina Faso

    PubMed Central

    Lietz, Henrike; Lingani, Moustapha; Sié, Ali; Sauerborn, Rainer; Souares, Aurelia; Tozan, Yesim

    2015-01-01

    Background There are more than 40 Health and Demographic Surveillance System (HDSS) sites in 19 different countries. The running costs of HDSS sites are high. The financing of HDSS activities is of major importance, and adding external health surveys to the HDSS is challenging. To investigate the ways of improving data quality and collection efficiency in the Nouna HDSS in Burkina Faso, the stand-alone data collection activities of the HDSS and the Household Morbidity Survey (HMS) were integrated, and the paper-based questionnaires were consolidated into a single tablet-based questionnaire, the Comprehensive Disease Assessment (CDA). Objective The aims of this study are to estimate and compare the implementation costs of the two different survey approaches for measuring population health. Design All financial costs of stand-alone (HDSS and HMS) and integrated (CDA) surveys were estimated from the perspective of the implementing agency. Fixed and variable costs of survey implementation and key cost drivers were identified. The costs per household visit were calculated for both survey approaches. Results While fixed costs of survey implementation were similar for the two survey approaches, there were considerable variations in variable costs, resulting in an estimated annual cost saving of about US$45,000 under the integrated survey approach. This was primarily because the costs of data management for the tablet-based CDA survey were considerably lower than for the paper-based stand-alone surveys. The cost per household visit from the integrated survey approach was US$21 compared with US$25 from the stand-alone surveys for collecting the same amount of information from 10,000 HDSS households. Conclusions The CDA tablet-based survey method appears to be feasible and efficient for collecting health and demographic data in the Nouna HDSS in rural Burkina Faso. The possibility of using the tablet-based data collection platform to improve the quality of population health data requires further exploration. PMID:26257048

  11. Population Estimation Using a 3D City Model: A Multi-Scale Country-Wide Study in the Netherlands

    PubMed Central

    Arroyo Ohori, Ken; Ledoux, Hugo; Peters, Ravi; Stoter, Jantien

    2016-01-01

    The remote estimation of a region’s population has for decades been a key application of geographic information science in demography. Most studies have used 2D data (maps, satellite imagery) to estimate population while avoiding field surveys and questionnaires. As the availability of semantic 3D city models is constantly increasing, we investigate the extent to which they can be used for the same purpose. Based on the assumption that housing space is a proxy for the number of residents, we use 3D city models to estimate population in two directions: (1) disaggregation (areal interpolation) to estimate the population of small administrative entities (e.g. neighbourhoods) from that of larger ones (e.g. municipalities); and (2) a statistical modelling approach to estimate the population of large entities from a sample composed of their smaller ones (e.g. one acquired by a government register). Starting from a complete Dutch census dataset at the neighbourhood level and a 3D model of all 9.9 million buildings in the Netherlands, we compare the population estimates obtained by both methods with the actual population as reported in the census, and use this comparison to evaluate the quality that can be achieved by estimations at different administrative levels. We also analyse how the volume-based estimation enabled by 3D city models fares in comparison to 2D methods using building footprints and floor areas, as well as how it is affected by different levels of semantic detail in a 3D city model. We conclude that 3D city models are useful for estimations of large areas (e.g. for a country), and that the 3D approach has clear advantages over the 2D approach. PMID:27254151
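
    As an illustration of direction (1), here is a minimal sketch of volume-weighted areal interpolation; the function and numbers are hypothetical, and the study's actual disaggregation may also draw on floor areas or semantic building use.

```python
import numpy as np

def disaggregate_by_volume(pop_municipality, building_volumes):
    """Areal interpolation: split a municipality's census population over
    its neighbourhoods proportionally to total 3D building volume,
    assuming housing volume is a proxy for resident counts."""
    v = np.asarray(building_volumes, dtype=float)
    return pop_municipality * v / v.sum()

# Hypothetical total residential volumes (m^3) for three neighbourhoods
print(disaggregate_by_volume(52_000, [1.2e6, 0.8e6, 2.0e6]))
```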

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lambie, F.W.; Yee, S.N.

    The purpose of this and a previous project was to examine the feasibility of estimating intermediate grade uranium (0.01 to 0.05% U3O8) on the basis of existing, sparsely drilled holes. All data are from the Powder River Basin in Wyoming. DOE makes preliminary estimates of endowment by calculating an Average Area of Influence (AAI) based on densely drilled areas, multiplying that by the thickness of the mineralization and then dividing by a tonnage factor. The resulting tonnage of ore is then multiplied by the average grade of the interval to obtain the estimate of U3O8 tonnage. Total endowment is the sum of these values over all mineralized intervals in all wells in the area. In regions where wells are densely drilled and approximately regularly spaced, this technique approaches the classical polygonal estimation technique used to estimate ore reserves and should be fairly reliable. The method is conservative because: (1) in sparsely drilled regions a large fraction of the area is not considered to contribute to endowment; and (2) there is a bias created by the different distributions of point grades and mining block grades. A conservative approach may be justified for purposes of ore reserve estimation, where large investments may hinge on local forecasts. But for estimates of endowment over areas as large as 1° by 2° quadrangles, or the nation as a whole, errors in local predictions are not critical as long as they tend to cancel, and a less conservative estimation approach may be justified. One candidate, developed for this study and described here, is called the contoured thickness technique. A comparison of estimates based on the contoured thickness approach with DOE calculations for five areas of Wyoming roll fronts in the Powder River Basin is presented. The sensitivity of the technique to well density is examined, and the question of predicting intermediate grade endowment from data on higher grades is discussed.

  13. Using multi-locus allelic sequence data to estimate genetic divergence among four Lilium (Liliaceae) cultivars

    PubMed Central

    Shahin, Arwa; Smulders, Marinus J. M.; van Tuyl, Jaap M.; Arens, Paul; Bakker, Freek T.

    2014-01-01

    Next Generation Sequencing (NGS) may enable estimating relationships among genotypes using allelic variation of multiple nuclear genes simultaneously. We explored the potential and caveats of this strategy in four genetically distant Lilium cultivars, estimating their genetic divergence from transcriptome sequences using three approaches: POFAD (Phylogeny of Organisms from Allelic Data, which uses allelic information of sequence data), RAxML (Randomized Accelerated Maximum Likelihood, tree building based on concatenated consensus sequences) and Consensus Network (constructing a network summarizing conflicts among gene trees). Twenty-six gene contigs were chosen based on the presence of orthologous sequences in all cultivars, seven of which also had an orthologous sequence in Tulipa, used as out-group. The three approaches generated the same topology. Although the resolution offered by these approaches is high, in this case there was no extra benefit in using allelic information. We conclude that these 26 genes can be widely applied to construct a species tree for the genus Lilium. PMID:25368628

  14. Estimating the personal cure rate of cancer patients using population-based grouped cancer survival data.

    PubMed

    Binbing Yu; Tiwari, Ram C; Feuer, Eric J

    2011-06-01

    Cancer patients are subject to multiple competing risks of death and may die from causes other than the cancer diagnosed. The probability of not dying from the cancer diagnosed, which is one of the patients' main concerns, is sometimes called the 'personal cure' rate. Two approaches, namely the cause-specific hazards approach and the mixture model approach, have been used to model competing-risk survival data. In this article, we first show the connection and differences between crude cause-specific survival in the presence of other causes and net survival in the absence of other causes. The mixture survival model is extended to population-based grouped survival data to estimate the personal cure rate. Using the colorectal cancer survival data from the Surveillance, Epidemiology and End Results Programme, we estimate the probabilities of dying from colorectal cancer, heart disease, and other causes by age at diagnosis, race and American Joint Committee on Cancer stage.

  15. Accurate age estimation in small-scale societies

    PubMed Central

    Smith, Daniel; Gerbault, Pascale; Dyble, Mark; Migliano, Andrea Bamberg; Thomas, Mark G.

    2017-01-01

    Precise estimation of age is essential in evolutionary anthropology, especially to infer population age structures and understand the evolution of human life history diversity. However, in small-scale societies, such as hunter-gatherer populations, time is often not referred to in calendar years, and accurate age estimation remains a challenge. We address this issue by proposing a Bayesian approach that accounts for age uncertainty inherent to fieldwork data. We developed a Gibbs sampling Markov chain Monte Carlo algorithm that produces posterior distributions of ages for each individual, based on a ranking order of individuals from youngest to oldest and age ranges for each individual. We first validate our method on 65 Agta foragers from the Philippines with known ages, and show that our method generates age estimations that are superior to previously published regression-based approaches. We then use data on 587 Agta collected during recent fieldwork to demonstrate how multiple partial age ranks coming from multiple camps of hunter-gatherers can be integrated. Finally, we exemplify how the distributions generated by our method can be used to estimate important demographic parameters in small-scale societies: here, age-specific fertility patterns. Our flexible Bayesian approach will be especially useful to improve cross-cultural life history datasets for small-scale societies for which reliable age records are difficult to acquire. PMID:28696282
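
    A minimal stand-in for the kind of sampler described here, assuming a flat prior, a single fully ranked group, and feasible (order-consistent) age ranges; the published algorithm integrates multiple partial rank orders from different camps and is more general.

```python
import numpy as np

def gibbs_ages(lo, hi, n_iter=4000, seed=0):
    """Minimal Gibbs sampler: individuals are indexed youngest to oldest,
    each with an age range [lo[i], hi[i]]. Under a flat prior, each full
    conditional is uniform on the interval allowed by the person's own
    range intersected with the neighbours' current ages. Assumes the
    ranges admit at least one strictly increasing age assignment."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    n = len(lo)
    ages = lo.copy()
    for i in range(1, n):                   # feasible ordered start
        ages[i] = max(ages[i - 1] + 1e-3, lo[i])
    draws = np.empty((n_iter, n))
    for t in range(n_iter):
        for i in range(n):
            a = lo[i] if i == 0 else max(lo[i], ages[i - 1])
            b = hi[i] if i == n - 1 else min(hi[i], ages[i + 1])
            ages[i] = rng.uniform(a, b)
        draws[t] = ages
    return draws

# Three people, known rank order, overlapping reported age ranges (toy data)
post = gibbs_ages([18, 20, 25], [24, 30, 40])
print(post[2000:].mean(axis=0))             # posterior mean ages after burn-in
```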

  16. Inertial Pocket Navigation System: Unaided 3D Positioning

    PubMed Central

    Munoz Diaz, Estefania

    2015-01-01

    Inertial navigation systems use dead-reckoning to estimate the pedestrian's position. There are two types of pedestrian dead-reckoning, the strapdown algorithm and the step-and-heading approach. Unlike the strapdown algorithm, which consists of the double integration of the three orthogonal accelerometer readings, the step-and-heading approach lacks vertical displacement estimation. We propose the first step-and-heading approach based on unaided inertial data that solves 3D positioning. We present a step detector for steps up and down and a novel vertical displacement estimator. Our navigation system uses a sensor placed in the front pocket of the trousers, a likely location of a smartphone. The proposed algorithms are based on the opening angle of the leg, or pitch angle. We analyzed our step detector and our previously proposed step length estimator, comparing them with the state of the art. Lastly, we assessed our vertical displacement estimator in a real-world scenario. We found that our algorithms outperform the literature's step-and-heading algorithms and solve 3D positioning using unaided inertial data. Additionally, we found that with the pitch angle, five activities are distinguishable: standing, sitting, walking, walking up stairs and walking down stairs. This information complements the pedestrian location and is of interest for applications such as elderly care. PMID:25897501

  17. Accurate age estimation in small-scale societies.

    PubMed

    Diekmann, Yoan; Smith, Daniel; Gerbault, Pascale; Dyble, Mark; Page, Abigail E; Chaudhary, Nikhil; Migliano, Andrea Bamberg; Thomas, Mark G

    2017-08-01

    Precise estimation of age is essential in evolutionary anthropology, especially to infer population age structures and understand the evolution of human life history diversity. However, in small-scale societies, such as hunter-gatherer populations, time is often not referred to in calendar years, and accurate age estimation remains a challenge. We address this issue by proposing a Bayesian approach that accounts for age uncertainty inherent to fieldwork data. We developed a Gibbs sampling Markov chain Monte Carlo algorithm that produces posterior distributions of ages for each individual, based on a ranking order of individuals from youngest to oldest and age ranges for each individual. We first validate our method on 65 Agta foragers from the Philippines with known ages, and show that our method generates age estimations that are superior to previously published regression-based approaches. We then use data on 587 Agta collected during recent fieldwork to demonstrate how multiple partial age ranks coming from multiple camps of hunter-gatherers can be integrated. Finally, we exemplify how the distributions generated by our method can be used to estimate important demographic parameters in small-scale societies: here, age-specific fertility patterns. Our flexible Bayesian approach will be especially useful to improve cross-cultural life history datasets for small-scale societies for which reliable age records are difficult to acquire.

  18. Health-related quality of life among adults 65 years and older in the United States, 2011-2012: a multilevel small area estimation approach.

    PubMed

    Lin, Yu-Hsiu; McLain, Alexander C; Probst, Janice C; Bennett, Kevin J; Qureshi, Zaina P; Eberth, Jan M

    2017-01-01

    The purpose of this study was to develop county-level estimates of poor health-related quality of life (HRQOL) among U.S. adults aged 65 years and older and to identify spatial clusters of poor HRQOL using a multilevel, poststratification approach. Multilevel, random-intercept models were fit to HRQOL data (two domains: physical health and mental health) from the 2011-2012 Behavioral Risk Factor Surveillance System. Using a poststratification, small area estimation approach, we generated county-level probabilities of having poor HRQOL for each domain in U.S. adults aged 65 and older, and validated our model-based estimates against state and county direct estimates. County-level estimates of poor HRQOL in the United States ranged from 18.07% to 44.81% for physical health and 14.77% to 37.86% for mental health. Correlations between model-based and direct estimates were higher for physical than mental HRQOL. Counties located in Arkansas, Kentucky, and Mississippi exhibited the worst physical HRQOL scores, but this pattern did not hold for mental HRQOL, for which the highest probabilities of mentally unhealthy days occurred in Illinois, Indiana, and Vermont. Substantial geographic variation in physical and mental HRQOL scores exists among older U.S. adults. State and local policy makers should consider these local conditions in targeting interventions and policies to counties with high levels of poor HRQOL scores. Copyright © 2016 Elsevier Inc. All rights reserved.

  19. A Novel Methodology to Estimate the Treatment Effect in Presence of Highly Variable Placebo Response

    PubMed Central

    Gomeni, Roberto; Goyal, Navin; Bressolle, Françoise; Fava, Maurizio

    2015-01-01

    One of the main reasons for the inefficiency of multicenter randomized clinical trials (RCTs) in depression is the excessively high level of placebo response. The aim of this work was to propose a novel methodology to analyze RCTs based on the assumption that centers with high placebo response are less informative than the other centers for estimating the 'true' treatment effect (TE). A linear mixed-effect modeling approach for repeated measures (MMRM) was used as a reference approach. The new method for estimating TE was based on a nonlinear longitudinal modeling of clinical scores (NLMMRM). NLMMRM estimates TE by assigning a weighting factor to the data collected at each center. The weight was defined by the posterior probability of detecting a clinically relevant difference between active treatment and placebo at that center. Data from five RCTs in depression were used to compare the performance of MMRM with NLMMRM. The results of the analyses showed an average improvement of ~15% in the TE estimated with NLMMRM when the center effect was included in the analyses. Opposite results were observed with MMRM: the TE estimate was reduced by ~4% when the center effect was considered as a covariate in the analysis. The novel NLMMRM approach provides a tool for controlling the confounding effect of high placebo response, increasing signal detection and providing a more reliable estimate of the 'true' TE by controlling false negative results associated with excessively high placebo response. PMID:25895454

  20. Estimating the domain of applicability for machine learning QSAR models: a study on aqueous solubility of drug discovery molecules.

    PubMed

    Schroeter, Timon Sebastian; Schwaighofer, Anton; Mika, Sebastian; Ter Laak, Antonius; Suelzle, Detlev; Ganzer, Ursula; Heinrich, Nikolaus; Müller, Klaus-Robert

    2007-12-01

    We investigate the use of different Machine Learning methods to construct models for aqueous solubility. Models are based on about 4000 compounds, including an in-house set of 632 drug discovery molecules of Bayer Schering Pharma. For each method, we also consider an appropriate method to obtain error bars, in order to estimate the domain of applicability (DOA) for each model. Here, we investigate error bars from a Bayesian model (Gaussian Process (GP)), an ensemble-based approach (Random Forest), and approaches based on the Mahalanobis distance to training data (for Support Vector Machine and Ridge Regression models). We evaluate all approaches in terms of their prediction accuracy (in cross-validation, and on an external validation set of 536 molecules) and in terms of how faithfully the individual error bars represent the actual prediction error.

  1. Estimating the domain of applicability for machine learning QSAR models: a study on aqueous solubility of drug discovery molecules.

    PubMed

    Schroeter, Timon Sebastian; Schwaighofer, Anton; Mika, Sebastian; Ter Laak, Antonius; Suelzle, Detlev; Ganzer, Ursula; Heinrich, Nikolaus; Müller, Klaus-Robert

    2007-09-01

    We investigate the use of different Machine Learning methods to construct models for aqueous solubility. Models are based on about 4000 compounds, including an in-house set of 632 drug discovery molecules of Bayer Schering Pharma. For each method, we also consider an appropriate method to obtain error bars, in order to estimate the domain of applicability (DOA) for each model. Here, we investigate error bars from a Bayesian model (Gaussian Process (GP)), an ensemble-based approach (Random Forest), and approaches based on the Mahalanobis distance to training data (for Support Vector Machine and Ridge Regression models). We evaluate all approaches in terms of their prediction accuracy (in cross-validation, and on an external validation set of 536 molecules) and in terms of how faithfully the individual error bars represent the actual prediction error.

  2. Estimating the domain of applicability for machine learning QSAR models: a study on aqueous solubility of drug discovery molecules

    NASA Astrophysics Data System (ADS)

    Schroeter, Timon Sebastian; Schwaighofer, Anton; Mika, Sebastian; Ter Laak, Antonius; Suelzle, Detlev; Ganzer, Ursula; Heinrich, Nikolaus; Müller, Klaus-Robert

    2007-12-01

    We investigate the use of different Machine Learning methods to construct models for aqueous solubility. Models are based on about 4000 compounds, including an in-house set of 632 drug discovery molecules of Bayer Schering Pharma. For each method, we also consider an appropriate method to obtain error bars, in order to estimate the domain of applicability (DOA) for each model. Here, we investigate error bars from a Bayesian model (Gaussian Process (GP)), an ensemble-based approach (Random Forest), and approaches based on the Mahalanobis distance to training data (for Support Vector Machine and Ridge Regression models). We evaluate all approaches in terms of their prediction accuracy (in cross-validation, and on an external validation set of 536 molecules) and in terms of how faithfully the individual error bars represent the actual prediction error.

  3. Estimating the domain of applicability for machine learning QSAR models: a study on aqueous solubility of drug discovery molecules

    NASA Astrophysics Data System (ADS)

    Schroeter, Timon Sebastian; Schwaighofer, Anton; Mika, Sebastian; Ter Laak, Antonius; Suelzle, Detlev; Ganzer, Ursula; Heinrich, Nikolaus; Müller, Klaus-Robert

    2007-09-01

    We investigate the use of different Machine Learning methods to construct models for aqueous solubility. Models are based on about 4000 compounds, including an in-house set of 632 drug discovery molecules of Bayer Schering Pharma. For each method, we also consider an appropriate method to obtain error bars, in order to estimate the domain of applicability (DOA) for each model. Here, we investigate error bars from a Bayesian model (Gaussian Process (GP)), an ensemble-based approach (Random Forest), and approaches based on the Mahalanobis distance to training data (for Support Vector Machine and Ridge Regression models). We evaluate all approaches in terms of their prediction accuracy (in cross-validation, and on an external validation set of 536 molecules) and in terms of how faithfully the individual error bars represent the actual prediction error.

  4. Improved estimation of random vibration loads in launch vehicles

    NASA Technical Reports Server (NTRS)

    Mehta, R.; Erwin, E.; Suryanarayan, S.; Krishna, Murali M. R.

    1993-01-01

    Random vibration induced load is an important component of the total design load environment for payload and launch vehicle components and their support structures. The current approach to random vibration load estimation is based, particularly at the preliminary design stage, on the use of Miles' equation, which assumes a single degree-of-freedom (DOF) system and white noise excitation. This paper examines the implications of using multi-DOF system models and response calculations based on numerical integration with the actual excitation spectra for random vibration load estimation. The analytical study presented considers a two-DOF system and brings out the effects of modal mass, damping and frequency ratios on the random vibration load factor. The results indicate that load estimates based on Miles' equation can be significantly different from the more accurate estimates based on multi-DOF models.
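
    For reference, Miles' equation for the root-mean-square acceleration response of a single-DOF system under white-noise base excitation is commonly written as

$$g_{\mathrm{rms}} = \sqrt{\frac{\pi}{2}\, f_n\, Q\, W(f_n)},$$

    where $f_n$ is the natural frequency, $Q$ the resonant amplification factor, and $W(f_n)$ the input acceleration spectral density (in g^2/Hz) at $f_n$; a 3-sigma design load factor is then typically taken as $3\,g_{\mathrm{rms}}$.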

  5. Smooth extrapolation of unknown anatomy via statistical shape models

    NASA Astrophysics Data System (ADS)

    Grupp, R. B.; Chiang, H.; Otake, Y.; Murphy, R. J.; Gordon, C. R.; Armand, M.; Taylor, R. H.

    2015-03-01

    Several methods to perform extrapolation of unknown anatomy were evaluated. The primary application is to enhance surgical procedures that may use partial medical images or medical images of incomplete anatomy. Le Fort-based face-jaw-teeth transplantation is one such procedure. From CT data of 36 skulls and 21 mandibles, separate Statistical Shape Models of the anatomical surfaces were created. Using the Statistical Shape Models, incomplete surfaces were projected to obtain complete surface estimates. The surface estimates exhibit non-zero error in regions where the true surface is known; it is desirable to keep the true surface and seamlessly merge the estimated unknown surface. Existing extrapolation techniques produce non-smooth transitions from the true surface to the estimated surface, resulting in additional error and a less aesthetically pleasing result. The three extrapolation techniques evaluated were: copying and pasting of the surface estimate (non-smooth baseline), feathering between the patient surface and the surface estimate, and an estimate generated via a Thin Plate Spline trained on displacements between the surface estimate and corresponding vertices of the known patient surface. The feathering and Thin Plate Spline approaches both yielded smooth transitions; however, feathering corrupted known vertex values. Leave-one-out analyses were conducted, with 5% to 50% of known anatomy removed from the left-out patient and estimated via the proposed approaches. The Thin Plate Spline approach yielded smaller errors than the other two approaches, with an average vertex error improvement of 1.46 mm and 1.38 mm for the skull and mandible, respectively, over the baseline approach.
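
    A minimal sketch of the Thin Plate Spline idea using SciPy's RBFInterpolator with the thin-plate-spline kernel: train on displacements at known vertices, then smoothly extrapolate the correction onto the estimated region. The points and displacement field below are placeholders, not the study's meshes.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(1)
# Hypothetical vertices on the known patient anatomy, with displacements
# between the SSM estimate and the true surface at those vertices
known_pts = rng.uniform(-1, 1, (200, 3))
disp = 0.1 * np.sin(known_pts)                 # stand-in displacement field

# Thin Plate Spline interpolant of the 3D displacement field
tps = RBFInterpolator(known_pts, disp, kernel='thin_plate_spline')

# Warp the estimated (unknown-region) vertices so the transition from the
# true surface into the extrapolated surface stays smooth
unknown_pts = rng.uniform(-1, 1, (50, 3))
corrected = unknown_pts + tps(unknown_pts)
print(corrected.shape)
```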

  6. Audiovisual quality estimation of mobile phone video cameras with interpretation-based quality approach

    NASA Astrophysics Data System (ADS)

    Radun, Jenni E.; Virtanen, Toni; Olives, Jean-Luc; Vaahteranoksa, Mikko; Vuori, Tero; Nyman, Göte

    2007-01-01

    We present an effective method for comparing the subjective audiovisual quality and the quality-related features of different video cameras. The method achieves both a quantitative estimation of overall quality and a qualitative description of critical quality features. The aim was to combine two image quality evaluation methods, the quantitative Absolute Category Rating (ACR) method with hidden reference removal and the qualitative Interpretation-Based Quality (IBQ) method, in order to see how they complement each other in audiovisual quality estimation tasks. Twenty-six observers estimated the audiovisual quality of six different cameras, mainly mobile phone video cameras. In order to achieve an efficient subjective estimation of audiovisual quality, only two contents with different quality requirements were recorded with each camera. The results show that the subjectively important quality features were more related to the overall estimations of the cameras' visual video quality than to features related to sound. The data demonstrated two significant quality dimensions related to visual quality: darkness and sharpness. We conclude that the qualitative methodology can complement quantitative quality estimation for audiovisual material as well. The IBQ approach is especially valuable when the induced quality changes are multidimensional.

  7. Association between Stereotactic Radiotherapy and Death from Brain Metastases of Epithelial Ovarian Cancer: a Gliwice Data Re-Analysis with Penalization

    PubMed

    Tukiendorf, Andrzej; Mansournia, Mohammad Ali; Wydmański, Jerzy; Wolny-Rokicka, Edyta

    2017-04-01

    Background: Clinical datasets for epithelial ovarian cancer brain metastatic patients are usually small in size. When adequate case numbers are lacking, the resulting estimates of regression coefficients may demonstrate bias. One of the direct approaches to reduce such sparse-data bias is based on penalized estimation. Methods: A re-analysis of formerly reported hazard ratios in diagnosed patients was performed using penalized Cox regression with a popular SAS package, providing additional software codes for the statistical computational procedure. Results: It was found that the penalized approach can readily diminish sparse-data artefacts and radically reduce the magnitude of estimated regression coefficients. Conclusions: It was confirmed that classical statistical approaches may exaggerate regression estimates or distort study interpretations and conclusions. The results support the thesis that penalization via weakly informative priors and data augmentation is among the safest approaches to shrinking sparse-data artefacts frequently occurring in epidemiological research. Creative Commons Attribution License
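
    A sketch of the same idea with open-source tooling rather than the SAS package used in the paper: ridge-type penalized Cox regression via the lifelines library, on a hypothetical toy dataset (column names and values invented for illustration).

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical sparse dataset: a handful of patients, a binary treatment
# flag, survival times in months and an event indicator
df = pd.DataFrame({
    "time":  [5, 8, 12, 3, 9, 15, 2, 7],
    "event": [1, 1, 0, 1, 0, 1, 1, 0],
    "srt":   [1, 0, 1, 0, 1, 0, 0, 1],   # stereotactic radiotherapy flag
})

# The L2 penalty shrinks coefficients that sparse data would otherwise
# inflate, playing the role of the penalized estimation described above
cph = CoxPHFitter(penalizer=0.5)
cph.fit(df, duration_col="time", event_col="event")
cph.print_summary()
```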

  8. A Bayesian Approach for Measurements of Stray Neutrons at Proton Therapy Facilities: Quantifying Neutron Dose Uncertainty.

    PubMed

    Dommert, M; Reginatto, M; Zboril, M; Fiedler, F; Helmbrecht, S; Enghardt, W; Lutz, B

    2017-11-28

    Bonner sphere measurements are typically analyzed using unfolding codes. It is well known that it is difficult to obtain reliable estimates of uncertainties from standard unfolding procedures. An alternative approach is to analyze the data using Bayesian parameter estimation. This method provides reliable estimates of the uncertainties of neutron spectra, leading to rigorous estimates of uncertainties of the dose. We extend previous Bayesian approaches and apply the method to stray neutrons in proton therapy environments by introducing a new parameterized model which describes the main features of the expected neutron spectra. The parameterization is based on information that is available from measurements and detailed Monte Carlo simulations. The approach was validated against the results of an experiment using Bonner spheres carried out in the experimental hall of the OncoRay proton therapy facility in Dresden. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  9. A direct estimate of evapotranspiration over the Amazon basin and implications for our understanding of carbon and water cycling

    NASA Astrophysics Data System (ADS)

    Swann, A. L. S.; Koven, C.; Lombardozzi, D.; Bonan, G. B.

    2017-12-01

    Evapotranspiration (ET) is a critical term in the surface energy budget as well as the water cycle. There are few direct measurements of ET, and thus its magnitude and variability are poorly constrained at large spatial scales. Estimates of the annual cycle of ET over the Amazon are critical because they influence predictions of the seasonal cycle of carbon fluxes, as well as atmospheric dynamics and circulation. We estimate ET for the Amazon basin using a water budget approach, by differencing rainfall, discharge, and time-varying storage from the Gravity Recovery and Climate Experiment (GRACE). We find that the climatological annual cycle of ET over the Amazon basin upstream of Óbidos shows suppression of ET during the wet season, and higher ET during the dry season, consistent with flux-tower observations in seasonally dry forests. We also find a statistically significant trend in ET of -1.46 mm/yr over the period 2002-2015. Our direct estimate of the seasonal cycle of ET is largely consistent with previous indirect estimates, including energy budget based approaches, an up-scaled station-based estimate, and land surface model estimates, but suggests that suppression of ET during the wet season is underestimated by existing products. We further quantify possible contributors to the phasing of the seasonal cycle and the downward time trend using land surface models.
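
    In equation form, the water-budget estimate described here amounts to

$$\mathrm{ET} = P - Q - \frac{dS}{dt},$$

    where $P$ is basin-averaged precipitation, $Q$ is river discharge at the basin outlet (here Óbidos), and $dS/dt$ is the GRACE-derived change in terrestrial water storage.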

  10. Improving estimation of flight altitude in wildlife telemetry studies

    USGS Publications Warehouse

    Poessel, Sharon; Duerr, Adam E.; Hall, Jonathan C.; Braham, Melissa A.; Katzner, Todd

    2018-01-01

    Altitude measurements from wildlife tracking devices, combined with elevation data, are commonly used to estimate the flight altitude of volant animals. However, these data often include measurement error. Understanding this error may improve estimation of flight altitude and benefit applied ecology. There are a number of different approaches that have been used to address this measurement error. These include filtering based on GPS data, filtering based on behaviour of the study species, and use of state-space models to correct measurement error. The effectiveness of these approaches is highly variable. Recent studies have based inference of flight altitude on misunderstandings about avian natural history and technical or analytical tools. In this Commentary, we discuss these misunderstandings and suggest alternative strategies both to resolve some of these issues and to improve estimation of flight altitude. These strategies also can be applied to other measures derived from telemetry data. Synthesis and applications. Our Commentary is intended to clarify and improve upon some of the assumptions made when estimating flight altitude and, more broadly, when using GPS telemetry data. We also suggest best practices for identifying flight behaviour, addressing GPS error, and using flight altitudes to estimate collision risk with anthropogenic structures. Addressing the issues we describe would help improve estimates of flight altitude and advance understanding of the treatment of error in wildlife telemetry studies.

  11. Gene genealogies for genetic association mapping, with application to Crohn's disease

    PubMed Central

    Burkett, Kelly M.; Greenwood, Celia M. T.; McNeney, Brad; Graham, Jinko

    2013-01-01

    A gene genealogy describes relationships among haplotypes sampled from a population. Knowledge of the gene genealogy for a set of haplotypes is useful for estimation of population genetic parameters, and it also has potential application in finding disease-predisposing genetic variants. As the true gene genealogy is unknown, Markov chain Monte Carlo (MCMC) approaches have been used to sample genealogies conditional on data at multiple genetic markers. We previously implemented an MCMC algorithm to sample from an approximation to the distribution of the gene genealogy conditional on haplotype data. Our approach samples ancestral trees, recombination and mutation rates at a genomic focal point. In this work, we describe how our sampler can be used to find disease-predisposing genetic variants in samples of cases and controls. We use a tree-based association statistic that quantifies the degree to which case haplotypes are more closely related to each other around the focal point than control haplotypes, without relying on a disease model. As the ancestral tree is a latent variable, so is the tree-based association statistic. We show how the sampler can be used to estimate the posterior distribution of the latent test statistic and corresponding latent p-values, which together comprise a fuzzy p-value. We illustrate the approach on a publicly available dataset from a study of Crohn's disease that consists of genotypes at multiple SNP markers in a small genomic region. We estimate the posterior distribution of the tree-based association statistic and the recombination rate at multiple focal points in the region. Reassuringly, the posterior mean recombination rates estimated at the different focal points are consistent with previously published estimates. The tree-based association approach finds multiple sub-regions where the case haplotypes are more genetically related than the control haplotypes, suggesting that there may be one or more disease-predisposing loci. PMID:24348515

  12. Modern control concepts in hydrology. [parameter identification in adaptive stochastic control approach

    NASA Technical Reports Server (NTRS)

    Duong, N.; Winn, C. B.; Johnson, G. R.

    1975-01-01

    Two approaches to an identification problem in hydrology are presented, based upon concepts from modern control and estimation theory. The first approach treats the identification of unknown parameters in a hydrologic system subject to noisy inputs as an adaptive linear stochastic control problem; the second approach alters the model equation to account for the random part of the inputs, and then uses a nonlinear estimation scheme to estimate the unknown parameters. Both approaches use state-space concepts. The identification schemes are sequential and adaptive and can handle either time-invariant or time-dependent parameters. They are used to identify parameters in the Prasad model of rainfall-runoff. The results obtained are encouraging and confirm the results from two previous studies: the first used numerical integration of the model equation along with a trial-and-error procedure, and the second used a quasi-linearization technique. The proposed approaches offer a systematic way of analyzing the rainfall-runoff process when the input data are embedded in noise.

  13. Comparison of the application of B-mode and strain elastography ultrasound in the estimation of lymph node metastasis of papillary thyroid carcinoma based on a radiomics approach.

    PubMed

    Liu, Tongtong; Ge, Xifeng; Yu, Jinhua; Guo, Yi; Wang, Yuanyuan; Wang, Wenping; Cui, Ligang

    2018-06-21

    B-mode ultrasound (B-US) and strain elastography ultrasound (SE-US) images have the potential to distinguish thyroid tumors with different lymph node (LN) status. The purpose of our study is to investigate whether the application of multi-modality images including B-US and SE-US can improve the discriminability of thyroid tumors with LN metastasis based on a radiomics approach. Ultrasound (US) images, including B-US and SE-US images, of 75 papillary thyroid carcinoma (PTC) cases were retrospectively collected. A radiomics approach was developed in this study to estimate the LN status of PTC patients. The approach included image segmentation, quantitative feature extraction, feature selection and classification. Three feature sets were extracted, from B-US, from SE-US, and from the multi-modality combination of B-US and SE-US; they were used to evaluate the contribution of the different modalities. A total of 684 radiomics features were extracted in our study. We used a sparse representation coefficient-based feature selection method with 10 bootstrap repetitions to reduce the dimension of the feature sets. A support vector machine with leave-one-out cross-validation was used to build the model for estimating LN status. Using features extracted from both B-US and SE-US, the radiomics-based model produced an area under the receiver operating characteristic curve (AUC) = 0.90, accuracy (ACC) = 0.85, sensitivity (SENS) = 0.77 and specificity (SPEC) = 0.88, which was better than using features extracted from B-US or SE-US separately. Multi-modality images provided more information in this radiomics study. The combined use of B-US and SE-US could improve LN metastasis estimation accuracy for PTC patients.
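
    A schematic stand-in for this pipeline in Python with scikit-learn, substituting a simple univariate filter for the paper's sparse-representation feature selector and using random placeholder data with the paper's dimensions (75 cases, 684 features).

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder radiomics matrix (B-US + SE-US features) and LN labels
rng = np.random.default_rng(0)
X = rng.standard_normal((75, 684))
y = rng.integers(0, 2, 75)

# Scale, select features inside each fold, then classify with an SVM,
# evaluated with leave-one-out cross-validation as in the paper
model = make_pipeline(StandardScaler(),
                      SelectKBest(f_classif, k=20),
                      SVC(kernel="rbf"))
pred = cross_val_predict(model, X, y, cv=LeaveOneOut())
print("LOOCV accuracy:", (pred == y).mean())
```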

  14. Sport fishing: a comparison of three indirect methods for estimating benefits.

    Treesearch

    Darrell L. Hueth; Elizabeth J. Strong; Roger D. Fight

    1988-01-01

    Three market-based methods for estimating values of sport fishing were compared by using a common data base. The three approaches were the travel-cost method, the hedonic travel-cost method, and the household-production method. A theoretical comparison of the resulting values showed that the results were not fully comparable in several ways. The comparison of empirical...

  15. Stemflow estimation in a redwood forest using model-based stratified random sampling

    Treesearch

    Jack Lewis

    2003-01-01

    Model-based stratified sampling is illustrated by a case study of stemflow volume in a redwood forest. The approach is actually a model-assisted sampling design in which auxiliary information (tree diameter) is utilized in the design of stratum boundaries to optimize the efficiency of a regression or ratio estimator. The auxiliary information is utilized in both the...

  16. A program to form a multidisciplinary data base and analysis for dynamic systems

    NASA Technical Reports Server (NTRS)

    Taylor, L. W.; Suit, W. T.; Mayo, M. H.

    1984-01-01

    Diverse sets of experimental data and analysis programs have been assembled for the purpose of facilitating research in systems identification, parameter estimation and state estimation techniques. The data base and analysis programs are organized to make it easy to compare alternative approaches. Additional data and alternative forms of analysis will be included as they become available.

  17. Statistical Theory for the "RCT-YES" Software: Design-Based Causal Inference for RCTs. NCEE 2015-4011

    ERIC Educational Resources Information Center

    Schochet, Peter Z.

    2015-01-01

    This report presents the statistical theory underlying the "RCT-YES" software that estimates and reports impacts for RCTs for a wide range of designs used in social policy research. The report discusses a unified, non-parametric design-based approach for impact estimation using the building blocks of the Neyman-Rubin-Holland causal…
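
    For orientation, the simplest design-based building blocks in the Neyman-Rubin tradition are the difference-in-means impact estimator and its conservative variance estimator, shown here generically rather than as the report's exact formulas:

$$\hat{\tau} = \bar{Y}_1 - \bar{Y}_0, \qquad \widehat{\mathrm{Var}}(\hat{\tau}) = \frac{s_1^2}{n_1} + \frac{s_0^2}{n_0},$$

    where subscripts 1 and 0 denote the treatment and control groups, $s_g^2$ the sample variances, and $n_g$ the group sizes.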

  18. Estimation of Standard Error of Regression Effects in Latent Regression Models Using Binder's Linearization. Research Report. ETS RR-07-09

    ERIC Educational Resources Information Center

    Li, Deping; Oranje, Andreas

    2007-01-01

    Two versions of a general method for approximating standard error of regression effect estimates within an IRT-based latent regression model are compared. The general method is based on Binder's (1983) approach, accounting for complex samples and finite populations by Taylor series linearization. In contrast, the current National Assessment of…

  19. Fast surface-based travel depth estimation algorithm for macromolecule surface shape description.

    PubMed

    Giard, Joachim; Alface, Patrice Rondao; Gala, Jean-Luc; Macq, Benoît

    2011-01-01

    Travel Depth, introduced by Coleman and Sharp in 2006, is a physical interpretation of molecular depth, a term frequently used to describe the shape of a molecular active site or binding site. Travel Depth can be seen as the physical distance a solvent molecule would have to travel from a point of the surface, i.e., the Solvent-Excluded Surface (SES), to its convex hull. Existing algorithms providing an estimation of the Travel Depth are based on a regular sampling of the molecule volume and the use of Dijkstra's shortest path algorithm. Since Travel Depth is only defined on the molecular surface, this volume-based approach is characterized by a large computational complexity due to the processing of unnecessary samples lying inside or outside the molecule. In this paper, we propose a surface-based approach that restricts the processing to data defined on the SES. This algorithm significantly reduces the complexity of Travel Depth estimation and makes high-resolution shape description of large macromolecule surfaces possible. Experimental results show that, compared to existing methods, the proposed algorithm achieves accurate estimations with considerably reduced processing times.
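
    A minimal sketch of the surface-based idea: multi-source Dijkstra over the mesh adjacency graph, seeded at SES vertices lying on the convex hull (depth 0). The real algorithm treats hull-to-surface distances and mesh geometry more carefully; the toy mesh and names below are illustrative only.

```python
import heapq

def travel_depth(adjacency, hull_vertices):
    """Multi-source Dijkstra on a surface mesh graph. adjacency[v] yields
    (neighbour, edge_length) pairs; hull_vertices are seeds at depth 0."""
    depth = {v: float("inf") for v in adjacency}
    for v in hull_vertices:
        depth[v] = 0.0
    heap = [(0.0, v) for v in hull_vertices]
    heapq.heapify(heap)
    while heap:
        d, v = heapq.heappop(heap)
        if d > depth[v]:
            continue                        # stale heap entry
        for u, w in adjacency[v]:
            if d + w < depth[u]:
                depth[u] = d + w
                heapq.heappush(heap, (d + w, u))
    return depth

# Toy 4-vertex surface patch; vertices 0 and 1 touch the convex hull
adj = {0: [(2, 1.0)], 1: [(2, 1.5)],
       2: [(0, 1.0), (1, 1.5), (3, 2.0)], 3: [(2, 2.0)]}
print(travel_depth(adj, [0, 1]))
```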

  20. Sensor fusion for structural tilt estimation using an acceleration-based tilt sensor and a gyroscope

    NASA Astrophysics Data System (ADS)

    Liu, Cheng; Park, Jong-Woong; Spencer, B. F., Jr.; Moon, Do-Soo; Fan, Jiansheng

    2017-10-01

    A tilt sensor can provide useful information regarding the health of structural systems. Most existing tilt sensors are gravity/acceleration based and can provide accurate measurements of static responses. However, for dynamic tilt, acceleration can dramatically affect the measured responses due to crosstalk. Thus, dynamic tilt measurement is still a challenging problem. One option is to integrate the output of a gyroscope sensor, which measures the angular velocity, to obtain the tilt; however, problems arise because the low-frequency sensitivity of the gyroscope is poor. This paper proposes a new approach to dynamic tilt measurement, fusing together information from a MEMS-based gyroscope and an acceleration-based tilt sensor. The gyroscope provides good estimates of the tilt at higher frequencies, whereas the acceleration measurements are used to estimate the tilt at lower frequencies. The Tikhonov regularization approach is employed to fuse these measurements together and overcome the ill-posed nature of the problem. The solution is carried out in the frequency domain and then implemented in the time domain using FIR filters to ensure stability. The proposed method is validated numerically and experimentally and is shown to perform well for both pseudo-static and dynamic tilt measurements.
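
    A simple complementary filter conveys the same high-/low-frequency split, though it is a stand-in for, not an implementation of, the paper's Tikhonov-regularized frequency-domain fusion with FIR filters; the signals and gain below are hypothetical.

```python
import numpy as np

def complementary_tilt(tilt_acc, gyro_rate, dt, alpha=0.98):
    """Fuse an acceleration-based tilt measurement (reliable at low
    frequencies) with an integrated gyroscope rate (reliable at high
    frequencies). alpha sets the crossover between the two."""
    tilt = np.empty_like(tilt_acc)
    tilt[0] = tilt_acc[0]
    for k in range(1, len(tilt_acc)):
        tilt[k] = alpha * (tilt[k - 1] + gyro_rate[k] * dt) \
                  + (1 - alpha) * tilt_acc[k]
    return tilt

t = np.linspace(0, 10, 1001)
dt = t[1] - t[0]
true = 0.1 * np.sin(2 * np.pi * 0.2 * t)                  # slow tilt (rad)
acc = true + 0.02 * np.random.default_rng(0).standard_normal(t.size)
gyro = np.gradient(true, dt) + 0.01                       # biased rate (rad/s)
est = complementary_tilt(acc, gyro, dt)
print(np.abs(est - true).mean())
```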

  1. Identifiability and Performance Analysis of Output Over-sampling Approach to Direct Closed-loop Identification

    NASA Astrophysics Data System (ADS)

    Sun, Lianming; Sano, Akira

    An output over-sampling based closed-loop identification algorithm is investigated in this paper. Some intrinsic properties of the continuous stochastic noise and of the plant input and output in the over-sampling approach are analyzed, and they are used to demonstrate identifiability in the over-sampling approach and to evaluate its identification performance. Furthermore, the selection of the plant model order, the asymptotic variance of the estimated parameters and the asymptotic variance of the frequency response of the estimated model are also explored. The analysis shows that the over-sampling approach can guarantee identifiability and greatly improve the performance of closed-loop identification.

  2. Simulation-based estimation of mean and standard deviation for meta-analysis via Approximate Bayesian Computation (ABC).

    PubMed

    Kwon, Deukwoo; Reis, Isildinha M

    2015-08-12

    When conducting a meta-analysis of a continuous outcome, estimated means and standard deviations from the selected studies are required in order to obtain an overall estimate of the mean effect and its confidence interval. If these quantities are not directly reported in the publications, they must be estimated from other reported summary statistics, such as the median, the minimum, the maximum, and quartiles. We propose a simulation-based estimation approach using the Approximate Bayesian Computation (ABC) technique for estimating the mean and standard deviation based on various sets of summary statistics found in published studies. We conduct a simulation study to compare the proposed ABC method with the existing methods of Hozo et al. (2005), Bland (2015), and Wan et al. (2014). In the estimation of the standard deviation, our ABC method performs better than the other methods when data are generated from skewed or heavy-tailed distributions, and the corresponding average relative error (ARE) approaches zero as sample size increases. For data generated from the normal distribution, our ABC method performs well; however, the Wan et al. method is best for estimating the standard deviation under the normal distribution. In the estimation of the mean, our ABC method is best regardless of the assumed distribution. ABC is a flexible method for estimating the study-specific mean and standard deviation for meta-analysis, especially with underlying skewed or heavy-tailed distributions. The ABC method can be applied using other reported summary statistics, such as the posterior mean and 95 % credible interval, when Bayesian analysis has been employed.
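
    A minimal rejection-ABC sketch of the idea, assuming a normal data model and the (min, median, max) summary set; the priors, tolerance and summaries here are illustrative choices, not the paper's.

```python
import numpy as np

def abc_mean_sd(sample_median, sample_min, sample_max, n,
                n_sims=20000, keep=0.01, seed=0):
    """Rejection ABC: draw (mu, sigma) from broad priors, simulate samples
    of size n, and keep the draws whose simulated (min, median, max) lie
    closest to the reported summaries. Returns posterior-mean estimates."""
    rng = np.random.default_rng(seed)
    mu = rng.uniform(sample_min, sample_max, n_sims)
    sigma = rng.uniform(1e-3, sample_max - sample_min, n_sims)
    sims = rng.standard_normal((n_sims, n)) * sigma[:, None] + mu[:, None]
    stats = np.stack([sims.min(1), np.median(sims, 1), sims.max(1)], axis=1)
    target = np.array([sample_min, sample_median, sample_max])
    dist = np.linalg.norm(stats - target, axis=1)
    idx = np.argsort(dist)[: int(keep * n_sims)]    # accepted draws
    return mu[idx].mean(), sigma[idx].mean()

# Hypothetical reported summaries from one study of size n = 50
print(abc_mean_sd(sample_median=12.0, sample_min=4.0, sample_max=30.0, n=50))
```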

  3. EMSAR: estimation of transcript abundance from RNA-seq data by mappability-based segmentation and reclustering.

    PubMed

    Lee, Soohyun; Seo, Chae Hwa; Alver, Burak Han; Lee, Sanghyuk; Park, Peter J

    2015-09-03

    RNA-seq has been widely used for genome-wide expression profiling. RNA-seq data typically consists of tens of millions of short sequenced reads from different transcripts. However, due to sequence similarity among genes and among isoforms, the source of a given read is often ambiguous. Existing approaches for estimating expression levels from RNA-seq reads tend to compromise between accuracy and computational cost. We introduce a new approach for quantifying transcript abundance from RNA-seq data. EMSAR (Estimation by Mappability-based Segmentation And Reclustering) groups reads according to the set of transcripts to which they are mapped and finds maximum likelihood estimates using a joint Poisson model for each optimal set of segments of transcripts. The method uses nearly all mapped reads, including those mapped to multiple genes. With an efficient transcriptome indexing based on modified suffix arrays, EMSAR minimizes the use of CPU time and memory while achieving accuracy comparable to the best existing methods. EMSAR is a method for quantifying transcripts from RNA-seq data with high accuracy and low computational cost. EMSAR is available at https://github.com/parklab/emsar.

  4. Modeling Fetal Weight for Gestational Age: A Comparison of a Flexible Multi-level Spline-based Model with Other Approaches

    PubMed Central

    Villandré, Luc; Hutcheon, Jennifer A; Perez Trejo, Maria Esther; Abenhaim, Haim; Jacobsen, Geir; Platt, Robert W

    2011-01-01

    We present a model for longitudinal measures of fetal weight as a function of gestational age. We use a linear mixed model, with a Box-Cox transformation of fetal weight values, and restricted cubic splines, in order to flexibly but parsimoniously model median fetal weight. We systematically compare our model to other proposed approaches. All proposed methods are shown to yield similar median estimates, as evidenced by overlapping pointwise confidence bands, except after 40 completed weeks, where our method seems to produce estimates more consistent with observed data. Sex-based stratification affects the estimates of the random effects variance-covariance structure, without significantly changing sex-specific fitted median values. We illustrate the benefits of including sex-gestational age interaction terms in the model over stratification. The comparison leads to the conclusion that the selection of a model for fetal weight for gestational age can be based on the specific goals and configuration of a given study without affecting the precision or value of median estimates for most gestational ages of interest. PMID:21931571

  5. Bayesian Framework Approach for Prognostic Studies in Electrolytic Capacitor under Thermal Overstress Conditions

    DTIC Science & Technology

    2012-09-01

    make end-of-life (EOL) and remaining useful life (RUL) estimations. Model-based prognostics approaches perform these tasks with the help of first... [block-diagram residue: degradation modeling and parameter estimation feeding a state-space model and prediction of RUL/EOL under thermal/electrical stress, using experimental data] ...distribution at a given single time point k, and use this for multi-step predictions to EOL. There are several methods which exist for selecting the sigma

  6. Technical Note: Approximate Bayesian parameterization of a complex tropical forest model

    NASA Astrophysics Data System (ADS)

    Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.

    2013-08-01

    Inverse parameter estimation of process-based models is a long-standing problem in ecology and evolution. A key problem of inverse parameter estimation is to define a metric that quantifies how well model predictions fit the data. Such a metric can be expressed by general cost or objective functions, but statistical inversion approaches are based on a particular metric, the probability of observing the data given the model, known as the likelihood. Deriving likelihoods for dynamic models requires making assumptions about the probability of observations deviating from mean model predictions. For technical reasons, these assumptions are usually derived without explicit consideration of the processes in the simulation. Only in recent years have new methods become available that allow generating likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional MCMC, performs well in retrieving known parameter values from virtual field data generated by the forest model. We analyze the results of the parameter estimation, examine the sensitivity towards the choice and aggregation of model outputs and observed data (summary statistics), and show results from using this method to fit the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss differences between this approach and Approximate Bayesian Computation (ABC), another commonly used method to generate simulation-based likelihood approximations. Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can successfully be applied to process-based models of high complexity. The methodology is particularly suited to heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models in ecology and evolution.

  7. Quantum Chemical Approach to Estimating the Thermodynamics of Metabolic Reactions

    PubMed Central

    Jinich, Adrian; Rappoport, Dmitrij; Dunn, Ian; Sanchez-Lengeling, Benjamin; Olivares-Amaya, Roberto; Noor, Elad; Even, Arren Bar; Aspuru-Guzik, Alán

    2014-01-01

    Thermodynamics plays an increasingly important role in modeling and engineering metabolism. We present the first nonempirical computational method for estimating standard Gibbs reaction energies of metabolic reactions based on quantum chemistry, which can help fill in the gaps in the existing thermodynamic data. When applied to a test set of reactions from core metabolism, the quantum chemical approach is comparable in accuracy to group contribution methods for isomerization and group transfer reactions and for reactions not including multiply charged anions. The errors in standard Gibbs reaction energy estimates are correlated with the charges of the participating molecules. The quantum chemical approach is amenable to systematic improvements and holds potential for providing thermodynamic data for all of metabolism. PMID:25387603

  8. A Unified Estimation Framework for State-Related Changes in Effective Brain Connectivity.

    PubMed

    Samdin, S Balqis; Ting, Chee-Ming; Ombao, Hernando; Salleh, Sh-Hussain

    2017-04-01

    This paper addresses the critical problem of estimating time-evolving effective brain connectivity. Current approaches based on sliding window analysis or time-varying coefficient models do not simultaneously capture both slow and abrupt changes in causal interactions between different brain regions. To overcome these limitations, we develop a unified framework based on a switching vector autoregressive (SVAR) model. Here, the dynamic connectivity regimes are uniquely characterized by distinct vector autoregressive (VAR) processes and allowed to switch between quasi-stationary brain states. The state evolution and the associated directed dependencies are defined by a Markov process and the SVAR parameters. We develop a three-stage estimation algorithm for the SVAR model: 1) feature extraction using time-varying VAR (TV-VAR) coefficients, 2) preliminary regime identification via clustering of the TV-VAR coefficients, and 3) refined regime segmentation by Kalman smoothing and parameter estimation via the expectation-maximization algorithm under a state-space formulation, using initial estimates from the previous two stages. The proposed framework is adaptive to state-related changes and gives reliable estimates of effective connectivity. Simulation results show that our method provides accurate regime change-point detection and connectivity estimates. In real applications to brain signals, the approach was able to capture directed connectivity state changes in functional magnetic resonance imaging data linked with changes in stimulus conditions, and in epileptic electroencephalograms, differentiating ictal from nonictal periods. The proposed framework accurately identifies state-dependent changes in brain networks and provides estimates of connectivity strength and directionality. The proposed approach is useful in neuroscience studies that investigate the dynamics of underlying brain states.
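
    A minimal sketch of the first two stages follows, using a stand-in signal and illustrative window length and regime count; the stage-3 Kalman-smoothing/EM refinement is omitted:

```python
# Hedged sketch: stage 1 (sliding-window TV-VAR coefficients) and stage 2
# (k-means clustering into preliminary regimes). Window/step/K are assumptions.
import numpy as np
from sklearn.cluster import KMeans
from statsmodels.tsa.api import VAR

def tv_var_features(X, window=100, step=10, order=1):
    """Fit a VAR in sliding windows; return flattened coefficient vectors."""
    feats, centers = [], []
    for start in range(0, len(X) - window, step):
        res = VAR(X[start:start + window]).fit(order)
        feats.append(res.coefs.ravel())   # (order, k, k) -> flat feature
        centers.append(start + window // 2)
    return np.array(feats), np.array(centers)

X = np.random.default_rng(1).normal(size=(1000, 3))  # stand-in for recordings
feats, centers = tv_var_features(X)
labels = KMeans(n_clusters=2, n_init=10).fit_predict(feats)  # initial regimes
```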

  9. JuPOETs: a constrained multiobjective optimization approach to estimate biochemical model ensembles in the Julia programming language.

    PubMed

    Bassen, David M; Vilkhovoy, Michael; Minot, Mason; Butcher, Jonathan T; Varner, Jeffrey D

    2017-01-25

    Ensemble modeling is a promising approach for obtaining robust predictions and coarse-grained population behavior in deterministic mathematical models. Ensemble approaches address model uncertainty by using parameter or model families instead of single best-fit parameters or fixed model structures. Parameter ensembles can be selected based upon simulation error, along with other criteria such as diversity or steady-state performance. Simulations using parameter ensembles can estimate confidence intervals on model variables, and robustly constrain model predictions, despite having many poorly constrained parameters. In this software note, we present a multiobjective-based technique for estimating parameter or model ensembles, the Pareto Optimal Ensemble Technique in the Julia programming language (JuPOETs). JuPOETs integrates simulated annealing with Pareto optimality to estimate ensembles on or near the optimal tradeoff surface between competing training objectives. We demonstrate JuPOETs on a suite of multiobjective problems, including test functions with parameter bounds and system constraints as well as for the identification of a proof-of-concept biochemical model with four conflicting training objectives. JuPOETs identified optimal or near-optimal solutions approximately six-fold faster than a corresponding implementation in Octave for the suite of test functions. For the proof-of-concept biochemical model, JuPOETs produced an ensemble of parameters that captured the mean of the training data for conflicting data sets while simultaneously estimating parameter sets that performed well on each of the individual objective functions. JuPOETs is a promising approach for the estimation of parameter and model ensembles using multiobjective optimization. JuPOETs can be adapted to solve many problem types, including mixed binary and continuous variable types, bilevel optimization problems and constrained problems without altering the base algorithm. JuPOETs is open source, available under an MIT license, and can be installed using the Julia package manager from the JuPOETs GitHub repository.
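
    JuPOETs itself is a Julia package; the toy Python sketch below illustrates only the underlying idea of simulated annealing steered by Pareto dominance while archiving nondominated parameter sets, not the package's actual API:

```python
# Hedged sketch: annealing with Pareto-dominance acceptance and an archive
# of nondominated parameter sets (the "ensemble"). All names are illustrative.
import numpy as np

rng = np.random.default_rng(2)

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return bool(np.all(a <= b) and np.any(a < b))

def sa_pareto(objectives, x0, n_iter=5000, step=0.05, t0=1.0):
    x = np.asarray(x0, dtype=float)
    fx = objectives(x)
    archive = [(x.copy(), fx)]
    for i in range(n_iter):
        temp = t0 * 0.999 ** i
        cand = x + step * rng.normal(size=x.size)
        fc = objectives(cand)
        worse = float(np.sum(np.maximum(fc - fx, 0.0)))  # aggregate worsening
        if dominates(fc, fx) or rng.uniform() < np.exp(-worse / temp):
            x, fx = cand, fc
        if not any(dominates(g, fc) for _, g in archive):
            archive = [(y, g) for (y, g) in archive if not dominates(fc, g)]
            archive.append((cand.copy(), fc))
    return archive

# Toy bi-objective trade-off between two quadratic bowls
ensemble = sa_pareto(
    lambda p: np.array([(p[0] - 1) ** 2 + p[1] ** 2,
                        p[0] ** 2 + (p[1] - 1) ** 2]),
    x0=np.zeros(2),
)
```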

  10. Do We Know Whether Researchers and Reviewers are Estimating Risk and Benefit Accurately?

    PubMed

    Hey, Spencer Phillips; Kimmelman, Jonathan

    2016-10-01

    Accurate estimation of risk and benefit is integral to good clinical research planning, ethical review, and study implementation. Some commentators have argued that various actors in clinical research systems are prone to biased or arbitrary risk/benefit estimation. In this commentary, we suggest the evidence supporting such claims is very limited. Most prior work has imputed risk/benefit beliefs based on past behavior or goals, rather than directly measuring them. We describe an approach - forecast analysis - that would enable direct and effective measurement of the quality of risk/benefit estimation. We then consider some objections and limitations to the forecasting approach. © 2016 John Wiley & Sons Ltd.

  11. Modeling and Optimization for Morphing Wing Concept Generation

    NASA Technical Reports Server (NTRS)

    Skillen, Michael D.; Crossley, William A.

    2007-01-01

    This report consists of two major parts: 1) the approach to develop morphing wing weight equations, and 2) the approach to size morphing aircraft. Combined, these techniques allow the morphing aircraft to be sized with estimates of the morphing wing weight that are more credible than estimates currently available; aircraft sizing results prior to this study incorporated morphing wing weight estimates based on general heuristics for fixed-wing flaps (a comparable "morphing" component) but, in general, these results were unsubstantiated. This report will show that the method of morphing wing weight prediction does, in fact, drive the aircraft sizing code to different results and that accurate morphing wing weight estimates are essential to credible aircraft sizing results.

  12. Estimating Causal Effects in Mediation Analysis Using Propensity Scores

    ERIC Educational Resources Information Center

    Coffman, Donna L.

    2011-01-01

    Mediation is usually assessed by a regression-based or structural equation modeling (SEM) approach that we refer to as the classical approach. This approach relies on the assumption that there are no confounders that influence both the mediator, "M", and the outcome, "Y". This assumption holds if individuals are randomly…

  13. A soil moisture accounting-procedure with a Richards' equation-based soil texture-dependent parameterization

    USDA-ARS?s Scientific Manuscript database

    Given a time series of potential evapotranspiration and rainfall data, there are at least two approaches for estimating vertical percolation rates. One approach involves solving Richards' equation (RE) with a plant uptake model. An alternative approach involves applying a simple soil moisture accoun...
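
    As a hedged sketch of the simpler accounting alternative (parameter values are illustrative, not from the manuscript), a single-bucket store might be updated daily as follows:

```python
# Hedged sketch: one daily step of a single-bucket soil moisture account.
# Percolation occurs only when storage exceeds field capacity; all values in mm.
def accounting_step(storage, rain, pet, field_capacity=150.0, wilting_point=50.0):
    # Actual ET scales potential ET by relative water availability
    avail = max(0.0, (storage - wilting_point) / (field_capacity - wilting_point))
    et = min(pet * avail, storage)
    storage = storage + rain - et
    percolation = max(0.0, storage - field_capacity)
    return storage - percolation, percolation

state = 100.0
for rain, pet in [(12.0, 3.0), (0.0, 4.5), (30.0, 2.0)]:  # toy forcing series
    state, perc = accounting_step(state, rain, pet)
```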

  14. Missing texture reconstruction method based on error reduction algorithm using Fourier transform magnitude estimation scheme.

    PubMed

    Ogawa, Takahiro; Haseyama, Miki

    2013-03-01

    A missing texture reconstruction method based on an error reduction (ER) algorithm, including a novel scheme for estimating Fourier transform magnitudes, is presented in this brief. In our method, the Fourier transform magnitude is estimated for a target patch including missing areas, and the missing intensities are estimated by retrieving its phase based on the ER algorithm. Specifically, by monitoring errors converged in the ER algorithm, known patches whose Fourier transform magnitudes are similar to that of the target patch are selected from the target image. Then, the Fourier transform magnitude of the target patch is estimated from those of the selected known patches and their corresponding errors. Consequently, by using the ER algorithm, we can estimate both the Fourier transform magnitude and phase needed to reconstruct the missing areas.
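
    A minimal sketch of one part of the method, the ER iteration itself, assuming the target patch's Fourier magnitude has already been estimated from similar known patches as described:

```python
# Hedged sketch: error-reduction (ER) iterations for patch inpainting, given
# an estimated Fourier magnitude. `patch`/`known_mask` names are illustrative.
import numpy as np

def error_reduction(patch, known_mask, target_mag, n_iter=200):
    """patch: initial guess (2-D); known_mask: True where intensities are known."""
    x = patch.copy()
    for _ in range(n_iter):
        F = np.fft.fft2(x)
        # Fourier-domain constraint: keep the phase, impose the magnitude
        F = target_mag * np.exp(1j * np.angle(F))
        x = np.real(np.fft.ifft2(F))
        # Spatial-domain constraint: restore the known intensities
        x[known_mask] = patch[known_mask]
    return x
```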

  15. Predicting Loss-of-Control Boundaries Toward a Piloting Aid

    NASA Technical Reports Server (NTRS)

    Barlow, Jonathan; Stepanyan, Vahram; Krishnakumar, Kalmanje

    2012-01-01

    This work presents an approach to predicting loss of control, with the goal of providing the pilot a decision aid focused on keeping the pilot's control action within predicted loss-of-control boundaries. The predictive architecture combines quantitative loss-of-control boundaries, a data-based control-boundary estimation algorithm, and an adaptive prediction method that estimates Markov model parameters in real time. The data-based loss-of-control boundary estimation algorithm estimates the boundary of a safe set of control inputs that will keep the aircraft within the loss-of-control boundaries for a specified time horizon. The adaptive prediction model generates estimates of the system Markov parameters, which are used by the data-based loss-of-control boundary estimation algorithm. The combined algorithm is applied to a nonlinear generic transport aircraft to illustrate the features of the architecture.
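
    As a hedged aside, Markov parameters are a linear system's impulse-response matrices; the batch least-squares sketch below (a simplified stand-in for the adaptive, real-time update in the architecture) recovers the first few from input-output data:

```python
# Hedged sketch: batch least-squares estimate of Markov parameters H_0..H_{n-1}
# in y_t ~ sum_k H_k u_{t-k}. A recursive/adaptive variant would update online.
import numpy as np

def estimate_markov_params(u, y, n_params=10):
    """u: (T, m) inputs, y: (T, p) outputs; returns array (n_params, p, m)."""
    T, m = u.shape
    p = y.shape[1]
    rows = [np.concatenate([u[t - k] for k in range(n_params)])
            for t in range(n_params - 1, T)]
    Phi = np.array(rows)                              # past-input regressors
    Theta, *_ = np.linalg.lstsq(Phi, y[n_params - 1:], rcond=None)
    return Theta.T.reshape(p, n_params, m).transpose(1, 0, 2)
```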

  16. Estimating infertility prevalence in low-to-middle-income countries: an application of a current duration approach to Demographic and Health Survey data.

    PubMed

    Polis, Chelsea B; Cox, Carie M; Tunçalp, Özge; McLain, Alexander C; Thoma, Marie E

    2017-05-01

    Can infertility prevalence be estimated using a current duration (CD) approach when applied to nationally representative Demographic and Health Survey (DHS) data collected routinely in low- or middle-income countries? Our analysis suggests that a CD approach applied to DHS data from Nigeria provides infertility prevalence estimates comparable to other smaller studies in the same region. Despite associations with serious negative health, social and economic outcomes, infertility in developing countries is a marginalized issue in sexual and reproductive health. Obtaining reliable, nationally representative prevalence estimates is critical to address the issue, but methodological and resource challenges have impeded this goal. This cross-sectional study was based on standard information available in the DHS core questionnaire and data sets, which are collected routinely among participating low-to-middle-income countries. Our research question was examined among women participating in the 2013 Nigeria DHS (n = 38 948). Among women eligible for the study, 98% were interviewed. We applied a CD approach (i.e. current length of time-at-risk of pregnancy) to estimate time-to-pregnancy (TTP) and 12-month infertility prevalence among women 'at risk' of pregnancy at the time of interview (n = 7063). Women who were 18-44 years old, married or cohabiting, sexually active within the past 4 weeks and not currently using contraception (and had not been sterilized) were included in the analysis. Estimates were based on parametric survival methods, using bootstrap methods (500 bootstrap replicates) to obtain 95% CIs. The estimated median TTP among couples at risk of pregnancy was 5.1 months (95% CI: 4.2-6.3). The estimated percentage of infertile couples was 31.1% (95% CI: 27.9-34.7%), consistent with other smaller studies from Nigeria. Primary infertility (17.4%, 95% CI: 12.9-23.8%) was substantially lower than secondary infertility (34.1%, 95% CI: 30.3-39.3%) in this population. Overall infertility estimates based on TTP >24 or >36 months dropped to 17.7% (95% CI: 15.7-20%) and 11.5% (95% CI: 10.2-13%), respectively. Subgroup analyses showed that estimates varied by age, coital frequency and fertility intentions, while being in a polygynous relationship showed minimal impact. The CD approach may be limited by assumptions on when exposure to risk of pregnancy began and methodological assumptions required for estimation, which may be less accurate for particular subgroups or populations. Unrecognized pregnancies may also have biased our findings; however, we attempted to address this in our exclusion criteria. Limiting to married/cohabiting couples may have excluded women who are no longer in a relationship after being blamed for infertility. Although probably rare in this setting, we lack information on couples undergoing infertility treatment. Like other TTP measurement approaches, pregnancies resulting from contraceptive failure are not included, which may bias estimates. Nationally representative estimates of TTP and infertility based on a clinical definition of 12 months have been limited within developing countries. This approach represents a pragmatic advance in our ability to measure and monitor infertility in the developing world, with potentially far-reaching implications for policies and programs intended to address reproductive health. There are no competing interests and no financial support was provided for this study. Financial support for Open Access publication was provided by the World Health Organization. © The Author 2017. Published by Oxford University Press on behalf of the European Society of Human Reproduction and Embryology.
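
    A hedged sketch of the current-duration idea: under stationarity, an observed current duration y has density S(y)/E[T], where S is the survivor function of the underlying time-to-pregnancy T. The lognormal family below is an illustrative choice, not the study's exact specification:

```python
# Hedged sketch: maximum-likelihood current-duration fit with a lognormal TTP.
import numpy as np
from scipy import optimize, stats

def neg_loglik(params, y):
    mu, log_sigma = params
    T = stats.lognorm(s=np.exp(log_sigma), scale=np.exp(mu))
    return -np.sum(np.log(T.sf(y) / T.mean()))  # density of current durations

def fit_cd(y):
    res = optimize.minimize(neg_loglik, x0=[np.log(6.0), 0.0], args=(y,))
    mu, log_sigma = res.x
    return stats.lognorm(s=np.exp(log_sigma), scale=np.exp(mu))

y = np.random.default_rng(3).lognormal(1.5, 1.0, size=500)  # stand-in durations
T = fit_cd(y)
print("median TTP:", T.median(), "P(TTP > 12 months):", T.sf(12.0))
```

    Bootstrap confidence intervals, as in the study, would come from refitting on resampled current durations.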

  17. Vehicle Sprung Mass Estimation for Rough Terrain

    DTIC Science & Technology

    2011-03-01

    …distributions are greater than zero. The multivariate polynomials are functions of the Legendre polynomials (Poularikas, 1999) … developed methods based on polynomial chaos theory and on the maximum likelihood approach to estimate the most likely value of the vehicle sprung mass. The polynomial chaos estimator is compared to benchmark algorithms including recursive least squares, recursive total least squares, and extended…
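
    The fragment references multivariate polynomials built from Legendre polynomials; as an illustrative sketch only, a tensor-product Legendre basis for two scaled uncertain inputs can be assembled as follows (regressing sprung-mass observations onto this basis is then an ordinary least-squares step):

```python
# Hedged sketch: tensor-product Legendre basis for a polynomial chaos surrogate.
import numpy as np
from numpy.polynomial import legendre

def legendre_basis(xi, degree):
    """Columns are P_0(xi)..P_degree(xi), for points xi scaled to [-1, 1]."""
    eye = np.eye(degree + 1)
    return np.stack([legendre.legval(xi, eye[k]) for k in range(degree + 1)],
                    axis=-1)

rng = np.random.default_rng(4)
xi = rng.uniform(-1.0, 1.0, size=(100, 2))            # two uncertain inputs
B1 = legendre_basis(xi[:, 0], 3)                      # (100, 4)
B2 = legendre_basis(xi[:, 1], 3)                      # (100, 4)
B = np.einsum("ni,nj->nij", B1, B2).reshape(100, -1)  # (100, 16) design matrix
```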

  18. The sensitivity of length of pregnancy using clinical estimate of gestation versus last menstrual period: an example with particulate matter and preterm birth

    EPA Science Inventory

    Estimating gestational age is usually based on last menstrual period date (LMP) or clinical estimation (CGA); both approaches introduce error and potential bias. Differences in the two methods of gestational age assignment may lead to misclassification and differences in risk est...

  19. Robustness of Value-Added Analysis of School Effectiveness. Research Report. ETS RR-08-22

    ERIC Educational Resources Information Center

    Braun, Henry; Qu, Yanxuan

    2008-01-01

    This paper reports on a study conducted to investigate the consistency of the results between 2 approaches to estimating school effectiveness through value-added modeling. Estimates of school effects from the layered model employing item response theory (IRT) scaled data are compared to estimates derived from a discrete growth model based on the…

  20. A variational approach to parameter estimation in ordinary differential equations.

    PubMed

    Kaschek, Daniel; Timmer, Jens

    2012-08-14

    Ordinary differential equations are widely used in the fields of systems biology and chemical engineering to model chemical reaction networks. Numerous techniques have been developed to estimate parameters such as rate constants, initial conditions, or steady-state concentrations from time-resolved data. In contrast to this countable set of parameters, the estimation of entire time courses of network components corresponds to an innumerable set of parameters. The approach presented in this work is able to deal with course estimation for extrinsic system inputs or intrinsic reactants, neither of which is constrained by the reaction network itself. Our method is based on variational calculus, which is carried out analytically to derive an augmented system of differential equations that includes the unconstrained components as ordinary state variables. Finally, conventional parameter estimation is applied to the augmented system, resulting in a combined estimation of courses and parameters. The combined estimation approach takes the uncertainty in input courses correctly into account. This leads to precise parameter estimates and correct confidence intervals; in particular, it implies that small motifs of large reaction networks can be analysed independently of the rest. By the use of variational methods, elements from control theory and statistics are combined, allowing for future transfer of methods between the two fields.
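
    The sketch below mimics the end result on a toy one-species network: the unknown input course is carried on a grid and fitted jointly with the rate constant by conventional least squares. It illustrates combined course-plus-parameter estimation, not the analytical variational derivation itself:

```python
# Hedged sketch: joint estimation of an input course u(t) and rate constant k
# for dx/dt = u(t) - k*x, with u parameterized by values on a fixed grid.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

rng = np.random.default_rng(5)
t_obs = np.linspace(0.0, 10.0, 40)
nodes = np.linspace(0.0, 10.0, 8)  # grid carrying the unconstrained input

def simulate(params):
    k, u_vals = params[0], params[1:]
    u = lambda t: np.interp(t, nodes, u_vals)
    sol = solve_ivp(lambda t, x: [u(t) - k * x[0]], (0.0, 10.0), [0.0],
                    t_eval=t_obs, rtol=1e-8)
    return sol.y[0]

# Synthetic data from a known truth, then a combined course + parameter fit
theta_true = np.r_[0.8, np.sin(nodes) + 1.2]
y_obs = simulate(theta_true) + 0.01 * rng.normal(size=t_obs.size)
fit = least_squares(lambda p: simulate(p) - y_obs, x0=np.r_[0.5, np.ones(8)])
k_hat, u_hat = fit.x[0], fit.x[1:]
```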
